<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://support.beocat.ksu.edu/BeocatDocs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mozes</id>
	<title>Beocat - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://support.beocat.ksu.edu/BeocatDocs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Mozes"/>
	<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/Docs/Special:Contributions/Mozes"/>
	<updated>2026-05-16T18:24:52Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.8</generator>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Nautilus&amp;diff=1043</id>
		<title>Nautilus</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Nautilus&amp;diff=1043"/>
		<updated>2024-10-01T22:40:52Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Nautilus */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Nautilus ==&lt;br /&gt;
To access the Nautilus namespace, first make an account at https://portal.nrp-nautilus.io/ . Once you have done so, email beocat@cs.ksu.edu and request to be added to the Beocat Nautilus namespace (ksu-nrp-cluster). Once you have received notification that you have been added to the namespace, you can continue with the following steps to get set up to use the cluster resources. &lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;SSH into headnode.beocat.ksu.edu&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;SSH into fiona (fiona hosts the kubectl tool we will use for this)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Once on fiona, use the command ‘cd ~’ to ensure you are in your home directory. If you&lt;br /&gt;
are not, this will return you to the top level of your home directory.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;From there you will need to create a .kube directory inside of your home directory. Use&lt;br /&gt;
the command ‘mkdir ~/.kube’&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login to https://portal.nrp-nautilus.io/ using the same login previously used to create your&lt;br /&gt;
account (this will be your K-State EID login)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;From here it is MANDATORY to read the cluster policy documentation provided by the&lt;br /&gt;
National Research Platform for the Nautilus program. You can find it here:&lt;br /&gt;
https://docs.nationalresearchplatform.org/userdocs/start/policies/ &amp;lt;/li&amp;gt;&lt;br /&gt;
a. This is to ensure we do not break any of the rules put in place by the NRP.&lt;br /&gt;
&amp;lt;li&amp;gt;Next, return to the website specified in step 5, in the top right corner of the page press&lt;br /&gt;
the “Get Config” option. &amp;lt;/li&amp;gt;&lt;br /&gt;
a. This will download a file called ‘config’&lt;br /&gt;
&amp;lt;li&amp;gt;You will need to move the file to your ~/.kube directory created in step 4.&amp;lt;/li&amp;gt;&lt;br /&gt;
a. To do this you can copy and paste the contents through the command line&lt;br /&gt;
&amp;lt;br&amp;gt;b. You can also utilize the OpenOnDemand tool to upload the file through the web&lt;br /&gt;
interface. Information for this tool can be found here:&lt;br /&gt;
https://support.beocat.ksu.edu/Docs/OpenOnDemand&lt;br /&gt;
&amp;lt;br&amp;gt;c. You can also use other means of moving the contents to the Beocat&lt;br /&gt;
headnodes/your home directory, but these are just a few examples.&lt;br /&gt;
&amp;lt;br&amp;gt;d. NOTE: Because we added a period before the directory name it is now a hidden directory,&lt;br /&gt;
and it will not appear when running a normal ‘ls’; to see the directory you will&lt;br /&gt;
need to run “ls -a” or “ls -la”.&lt;br /&gt;
&amp;lt;li&amp;gt;Once you have read the required documentation, created the .kube directory in your&lt;br /&gt;
home directory, and placed the config file in the '~/.kube' directory, you are now ready to continue!&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Below is an example pod that can be used. It does not request much in the way of resources so you will likely need to change some things. Be sure to change the “name:” field&lt;br /&gt;
underneath “metadata:”. Change the text “test-pod” to “{eid}-pod” where ‘{eid}’ is your&lt;br /&gt;
K-State ID. It will look something like this “dan-pod”.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
  name: test-pod&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: mypod&lt;br /&gt;
    image: ubuntu&lt;br /&gt;
    resources:&lt;br /&gt;
      limits:&lt;br /&gt;
        memory: 400Mi&lt;br /&gt;
        cpu: 100m&lt;br /&gt;
      requests:&lt;br /&gt;
        memory: 100Mi&lt;br /&gt;
        cpu: 100m&lt;br /&gt;
    command: [&amp;quot;sh&amp;quot;, &amp;quot;-c&amp;quot;, &amp;quot;echo 'Im a new pod' &amp;amp;&amp;amp; sleep infinity&amp;quot;]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Place your .yaml file in the same directory created earlier (~/.kube).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;If you are not already in the .kube directory enter the command “cd ~/.kube” to change&lt;br /&gt;
your current directory.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Now we are going to create our ‘pod’. This will request an Ubuntu container using the&lt;br /&gt;
specifications from above.&amp;lt;/li&amp;gt;&lt;br /&gt;
a. To do this enter the command “kubectl create -f pod1.yaml” NOTE: You must be&lt;br /&gt;
in the same directory that you placed the pod1.yaml file in (in this situation, the above pod config was put into a file named pod1.yaml).&lt;br /&gt;
&amp;lt;br&amp;gt;b. If the command is successful you will see an output of “pod/{eid}-pod created”.&lt;br /&gt;
&amp;lt;li&amp;gt;You will need to wait until the container for the pod is finished creating. You can check&lt;br /&gt;
this by running “kubectl get pods”&amp;lt;/li&amp;gt;&lt;br /&gt;
a. Once you run this command, it will output all the pods currently running or being&lt;br /&gt;
created in the namespace. Look for yours among the list of pods; the name will&lt;br /&gt;
be the same name specified in step 10.&lt;br /&gt;
&amp;lt;br&amp;gt;b. Once you locate your pod, check its STATUS. If the pod says Running, then you&lt;br /&gt;
are good to proceed. If it says ContainerCreating, then you will need to wait just a&lt;br /&gt;
bit. It should not take long.&lt;br /&gt;
&amp;lt;li&amp;gt;You can now execute and enter the pod by running “kubectl exec -it {eid}-pod --&lt;br /&gt;
/bin/bash”. Where ‘{eid}-pod’ is the pod created in step 13/the name specified in step 10.&amp;lt;/li&amp;gt;&lt;br /&gt;
a. Executing this command will open the pod you created and run a bash console&lt;br /&gt;
on the pod.&lt;br /&gt;
&amp;lt;br&amp;gt;b. NOTE: If you have trouble logging into the pod, and are met with a “You must be&lt;br /&gt;
logged in to the server” error, you can run “kubectl proxy”, and after a moment, you can&lt;br /&gt;
cancel the command with “ctrl+c”. This should remedy the error.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
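The numbered steps above can be condensed into the following shell sketch. The kubectl lines only work on fiona with a valid ~/.kube/config, so they are left as comments here; the eID 'dan' is a stand-in for your own.

```shell
# Sketch of steps 9-14 above; 'dan' is a stand-in for your K-State eID.
eid="dan"
pod="${eid}-pod"     # the value to put in the pod's metadata name field
echo "$pod"
# The commands below assume fiona and a valid ~/.kube/config:
# mkdir -p ~/.kube                       # step 4
# kubectl create -f pod1.yaml            # step 12, run from ~/.kube
# kubectl get pods                       # step 13: wait for STATUS Running
# kubectl exec -it "$pod" -- /bin/bash   # step 14: open a shell in the pod
```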
&lt;br /&gt;
Additional documentation for Kubernetes can be found on the Kubernetes website https://kubernetes.io/docs/home&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=1000</id>
		<title>Globus</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=1000"/>
		<updated>2024-07-17T12:24:07Z</updated>

		<summary type="html">&lt;p&gt;Mozes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Transferring Data using Globus ==&lt;br /&gt;
&lt;br /&gt;
[https://www.globus.org/ Globus] is a high-speed data transfer service. It is primarily used to transfer data between research institutions, but can also be used to transfer data between Beocat and a laptop or desktop. We suggest using Globus over other file transfer options if you are transferring large data sets. Globus also allows you to share data with those who do not have Beocat accounts.&lt;br /&gt;
&lt;br /&gt;
'''Update''' The on-campus DTN has been shut down due to security issues. Please use the off-campus (FIONA) instructions; you can find the new endpoint by searching for &amp;quot;Beocat Filesystem (new)&amp;quot;. Also, Globus has updated their web interface, so the video is out-of-date, but the basic process is unchanged.&lt;br /&gt;
&lt;br /&gt;
== Video Demonstration ==&lt;br /&gt;
Rather than give dozens of screenshots, here is a video demonstrating how to use Globus to transfer files to and from Beocat&lt;br /&gt;
{{#widget:YouTube|id=D0X7x7B_wQs|width=800|height=600}}&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=999</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=999"/>
		<updated>2024-06-26T01:41:27Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Loading multiple modules */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains and versions of toolchains to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Recently made free by Intel, we have less experience with Intel MPI than OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with the 'module list':&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which in turn depends on icc and ifort, which depend on GCCcore, which depends on GCC. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
With software we provide, the toolchain used to compile is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, inconsistent things may happen.&lt;br /&gt;
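As a concrete illustration, the toolchain is readable straight from a module's version string. The sketch below uses the OpenMPI module name from the 'module list' output above and pulls out the toolchain suffix with plain shell parameter expansion:

```shell
# The toolchain a module was built with is encoded after the first '-'
# in its version string (module name taken from the listing above).
mod="OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28"
toolchain="${mod#*-}"   # strip the 'Name/version-' prefix
echo "$toolchain"
```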
&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide many versions; you are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'&lt;br /&gt;
&lt;br /&gt;
The first step to run an MPI application is to load one of the compiler toolchains that include OpenMPI.  You normally will just need to load the default version as below.  If your code needs access to NVIDIA GPUs you'll need a CUDA-enabled version.  Otherwise, some codes are picky about which versions of the underlying GNU or Intel compilers are needed.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but those may depend on which compiler toolchain you choose.  Some are based on the Intel compilers, so you'd need to use optimizations for the underlying icc or ifort compilers they call, and some are GNU based, so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running Gromacs which is an MPI based code to simulate large numbers of atoms classically.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the Gromacs modules and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete Gromacs simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all cores requested which in this case is 4 cores on a single node.  You will often just need to guess at the memory size for your code, then check on the memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts and then manually load the modules needed to ensure each run is using the modules it needs.  If you don't do this, when you submit a job script it will simply use the modules you currently have loaded, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use significant amounts of time.  This way I can track the amount of time used in each section of the job script.  This can prove very useful if your job script copies large data files around at the start, for example, allowing you to see how much time was used for each stage of the job if it runs longer than expected.&lt;br /&gt;
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  There are some MPI codes that are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; where the 7 # signs are the 7 digit job ID number.  If you need to cancel your job use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the 7 digit job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
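The output-file naming described above can be sketched as follows (the job ID is a made-up example):

```shell
# Slurm names the output file after the job ID it assigned your job.
jobid=1234567                   # hypothetical 7-digit job ID
outfile="slurm-${jobid}.out"    # the file your job's output lands in
echo "$outfile"
# scancel "$jobid"              # would cancel that job
```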
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a slurm-#.out file where '#' is the 7 digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java/'&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command Python is loaded.  After you log off and then log on again Python will not be loaded, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that their [https://docs.python.org/3/library/venv.html documentation] has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# or you could use 'python -m venv test'&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# if you don't use '--system-site-packages' then the virtual environment is completely isolated from our other provided packages and everything it needs it will have to build and install within itself.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command your virtual environment is activated.  After you log off and then log on again your virtual environment will not be loaded, so you must rerun this command every time you log on.)&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment test&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
Check the available versions and load one that uses the Python version you would like.&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow provided by pip is often completely broken on any system that is not running a recent version of Ubuntu. Beocat (like most HPC systems) does not run Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other python libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the python module, as loading TensorFlow will load the correct version of python module behind the scenes. The singular change you need to make is to use the '--system-site-packages' when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default. Normally, it is meant to be used in an interactive manner. Interactive codes can be limiting and/or problematic when used in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large-scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core for 24 hours and, if available, will drop&lt;br /&gt;
you into the bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample python based Spark code you can try out that came from the &lt;br /&gt;
exercises and homework from the PSC Spark workshop.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and load the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package &lt;br /&gt;
before you run the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
python -m venv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Python and Spark modules,&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark, &lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark and Python (version 3 here)&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt;.&lt;br /&gt;
You will need to do this manually as in the sample python code when you want&lt;br /&gt;
to submit jobs through the Slurm queue.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl is tracking the stable releases of perl. Unfortunately there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or to use a newer version, you can load one with the module command. To see what versions of perl we provide, you can use 'module avail Perl/'&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some perl modules fail to realize they shouldn't be installed globally. Usually, you'll notice this when they try to run something with 'sudo'. Unfortunately, we do not grant sudo access to anyone other than Beocat system administrators. This can usually be worked around by putting the following at the bottom of your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file. Once this is in place, you should log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now lets do some actual work. &lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
'module avail Octave/'&lt;br /&gt;
&lt;br /&gt;
The command above lists the available versions of Octave, including a 64-bit build, which you can then load&lt;br /&gt;
with 'module load'.  Octave can then be used to work with MatLab codes on the head node and to submit jobs&lt;br /&gt;
to the compute nodes through the sbatch scheduler.  Octave is made to run MatLab code, but it has&lt;br /&gt;
limitations and does not support everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, you will be expected to develop your MatLab code&lt;br /&gt;
with Octave, or elsewhere on a laptop or departmental server.  Once you're ready to do large runs, you&lt;br /&gt;
move your code to Beocat, compile the MatLab code into an executable, and submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MatLab compiler, load the MATLAB module to compile code and&lt;br /&gt;
load the mcr module to run the resulting MatLab executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks out the mcc compiler for a minimum of 30 minutes, so if another user compiles a code&lt;br /&gt;
you unfortunately may need to wait for up to 30 minutes to compile your own code.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files (compiled C and C++ code with MatLab bindings), you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the Matlab path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and help document the steps to compile your Matlab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab, so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere or with Octave on Beocat, then compile the MatLab code into an executable and run it without&lt;br /&gt;
limits on Beocat.&lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
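For a typical autotools-based package, a home-directory install looks something like the sketch below. The package name 'mytool' and its paths are hypothetical; the important part is pointing '--prefix' at a directory you own rather than a system location, so no sudo is needed.

```bash
# Hypothetical package; substitute your real source tarball
tar xzf mytool-1.0.tar.gz
cd mytool-1.0
# Install under your home directory instead of system-wide
./configure --prefix=$HOME/software/mytool
make
make install
# Make the installed binaries available on your PATH
export PATH=$HOME/software/mytool/bin:$PATH
```

You would typically add the final export line to your ~/.bashrc so that it persists across logins.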
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, once loaded, will stay loaded for the duration of your session or until they are unloaded.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'', you have likely tried to load two pieces of software that have not been tested together. There may be serious issues with using either piece of software in this state: libraries may be missing and applications may be non-functional. If you encounter issues, unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Sometimes referred to as Docker or Kubernetes containers, they allow you to package an entire software runtime platform and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.2/index.html Apptainer] is a container runtime that is designed for HPC environments. It can convert Docker containers to its own format, and can be used within a job on Beocat. Containers are a very broad topic, and we've made the decision to point you to the upstream documentation, as it is much more likely that they'll have up-to-date and functional instructions to help you utilize containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=998</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=998"/>
		<updated>2024-06-26T01:39:42Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Toolchains */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains and versions of toolchains to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Recently made free by Intel, we have less experience with Intel MPI than OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with 'module list':&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore, which depends on GCC. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
For software we provide, the toolchain used to compile it is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, inconsistent things may happen.&lt;br /&gt;
&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide lots of versions; you are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'&lt;br /&gt;
&lt;br /&gt;
The first step to run an MPI application is to load one of the compiler toolchains that include OpenMPI.  You will normally just need to load the default version as below.  If your code needs access to NVIDIA GPUs, you'll need a CUDA-enabled version.  Otherwise, some codes are picky about which versions of the underlying GNU or Intel compilers they need.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but the available flags depend on which compiler toolchain you choose.  Some toolchains are based on the Intel compilers, so you'd use optimizations for the underlying icc or ifort compilers they call, and some are GNU based, so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
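To illustrate, the same wrapper accepts different optimization flags depending on the underlying compiler. The flags below are standard gcc and icc options respectively; treat this as a sketch and check what your chosen toolchain actually supports:

```bash
# GNU-based toolchains (foss, gompi): gcc/gfortran-style flags
mpicc -O3 -march=native -o my_code.x my_code.c
# Intel-based toolchains (iomkl, intel): icc/ifort-style flags
mpicc -O3 -xHost -o my_code.x my_code.c
```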
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running Gromacs, an MPI-based code for classically simulating large numbers of atoms.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the Gromacs modules and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete Gromacs simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all requested cores, which in this case is 4 cores on a single node.  You will often need to guess at the memory size for your code at first; check its actual memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory request in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts and then manually load the modules needed, to ensure each run is using exactly the modules it needs.  If you don't do this, when you submit a job script it will simply use the modules you currently have loaded, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use significant amounts of time.  This way I can track the amount of time used in each section of the job script.  This can prove very useful if your job script copies large data files around at the start, for example, allowing you to see how much time was used for each stage of the job if it runs longer than expected.&lt;br /&gt;
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; where the 7 # signs are the 7-digit job ID number.  If you need to cancel your job, use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the 7-digit job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a slurm-#.out file where '#' is the 7 digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java/'&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to set up a virtual Python environment in your home directory. This will let you install Python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command Python is loaded. The module does not persist across sessions: after you log off and log back on, Python will no longer be loaded, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that the venv [https://docs.python.org/3/library/venv.html documentation] has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# or you could use 'python -m venv test'&lt;br /&gt;
# '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed,&lt;br /&gt;
# which is particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# without '--system-site-packages', the virtual environment is completely isolated from our provided packages and must build and install everything it needs within itself&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command your virtual environment is activated. The activation does not persist across sessions: after you log off and log back on, you must rerun this command.)&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
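Before using the environment in a job, it can be worth a quick sanity check that the new packages are importable. A small sketch (note that biopython's importable package name is 'Bio'):&lt;br /&gt;

```python
# Sanity-check sketch: confirm the freshly installed packages are
# visible to this Python. biopython installs under the name 'Bio'.
import importlib.util

results = {}
for pkg in ("numpy", "Bio"):
    results[pkg] = importlib.util.find_spec(pkg) is not None
    print(pkg, "is available" if results[pkg] else "is MISSING")
```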
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment 'test':&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
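The &amp;lt;tt&amp;gt;PYTHONDONTWRITEBYTECODE=1&amp;lt;/tt&amp;gt; export above stops Python from writing .pyc cache files (presumably to keep them out of your home directory). Python exposes the setting as sys.dont_write_bytecode, so your own script can confirm it; a minimal sketch:&lt;br /&gt;

```python
# Sketch: confirm the PYTHONDONTWRITEBYTECODE setting from inside a
# script. When the variable is set (to any non-empty value), Python
# reports sys.dont_write_bytecode as True and writes no .pyc files.
import os
import sys

print("PYTHONDONTWRITEBYTECODE:", os.environ.get("PYTHONDONTWRITEBYTECODE"))
print("sys.dont_write_bytecode:", sys.dont_write_bytecode)
```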
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
Check the available versions and load one that uses the Python version you would like:&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
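The script that gets submitted is whatever mpi4py program you have written; as a sketch, a minimal 'hello world' might look like the following (hypothetical; the import guard is only there so the sketch also runs on machines without mpi4py):&lt;br /&gt;

```python
# Minimal mpi4py sketch (hypothetical). Under 'mpirun' on the cluster
# the import succeeds and every rank prints its identity; the guard
# only lets the file run unchanged on machines without mpi4py.
try:
    from mpi4py import MPI
except ImportError:
    MPI = None

if MPI is None:
    print("mpi4py is not available here")
else:
    comm = MPI.COMM_WORLD
    print("hello from rank", comm.Get_rank(), "of", comm.Get_size())
```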
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow installed via pip is often completely broken on any system that is not running a recent version of Ubuntu, and Beocat (like most HPC systems) does not run Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other Python libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to set up a virtual Python environment in your home directory. This will let you install Python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the Python module, as loading TensorFlow will load the correct version of Python behind the scenes. The one change you need to make is to use '--system-site-packages' when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default. Normally, it is meant to be used in an interactive manner. Interactive codes can be limiting and/or problematic when used in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large-scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core for 24 hours and if available will drop&lt;br /&gt;
you into the bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample python based Spark code you can try out that came from the &lt;br /&gt;
exercises and homework from the PSC Spark workshop.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and load the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package &lt;br /&gt;
before you run the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
python -m venv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Spark module (which provides Python),&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark,&lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
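To get a feel for the API, here is a tiny word count in the RDD style the workshop exercises use (a hypothetical sketch: at the pyspark prompt the SparkContext 'sc' already exists, and the plain-Python fallback is only there so the sketch runs anywhere):&lt;br /&gt;

```python
# Word-count sketch (hypothetical). At the pyspark prompt the RDD
# path runs via the existing SparkContext 'sc'; elsewhere we fall
# back to collections.Counter, which computes the same counts.
words = ["to", "be", "or", "not", "to", "be"]
try:
    pairs = sc.parallelize(words).map(lambda w: (w, 1))
    counts = dict(pairs.reduceByKey(lambda a, b: a + b).collect())
except NameError:
    from collections import Counter
    counts = dict(Counter(words))
print(counts)  # 'to' and 'be' are each counted twice
```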
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark (which provides Python) and activate the virtual environment&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your Spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt; for you.&lt;br /&gt;
When you submit jobs through the Slurm queue, you will need to create it manually,&lt;br /&gt;
as in the sample python code below.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl is tracking the stable releases of perl. Unfortunately there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or to try out a newer version, you can load one with the module command. To see what versions of perl we provide, you can use 'module avail Perl/'&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Example output:&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some perl modules fail to realize they shouldn't be installed globally. Usually, you'll notice this when they try to run something with 'sudo'. Unfortunately we do not grant sudo access to anyone other than Beocat system administrators. Usually, this can be worked around by putting the following in your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file (at the bottom). Once this is in place, you should log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (D-H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now lets do some actual work. &lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
'module avail Octave/'&lt;br /&gt;
&lt;br /&gt;
The 64-bit version of Octave can be loaded using the command above.  Octave can then be used&lt;br /&gt;
to work with MatLab codes on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler.  Octave is made to run MatLab code, but it does have limitations and does not support&lt;br /&gt;
everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, you will be expected to develop your MatLab code&lt;br /&gt;
with Octave or elsewhere on a laptop or departmental server.  Once you're ready to do large runs, you&lt;br /&gt;
move your code to Beocat, compile the MatLab code into an executable, and you can submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MatLab compiler, you need to load the MATLAB module to compile code and&lt;br /&gt;
load the mcr module to run the resulting MatLab executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks the mcc compiler out for a minimum of 30 minutes, so if another user compiles a code&lt;br /&gt;
you unfortunately may need to wait for up to 30 minutes to compile your own code.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files - compiled C and C++ code with matlab bindings - you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the Matlab path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and help document the steps to compile your Matlab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere or using Octave on Beocat, then you can compile the MatLab code into an executable and run it without&lt;br /&gt;
limits on Beocat.  &lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, when loaded, stay loaded for the duration of your session until they are unloaded.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'', you have likely tried to load two pieces of software that have not been tested together. There may be serious issues with using either piece of software while you're in this state: libraries may be missing, or applications may be non-functional. If you encounter issues, you will want to unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Often associated with Docker or Kubernetes, containers allow you to package an entire software runtime platform and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.2/index.html Apptainer] is a container runtime that is designed for HPC environments. It can convert docker containers to its own format, and can be used within a job on Beocat. It is a very broad topic and we've made the decision to point you to the upstream documentation, as it is much more likely that they'll have up to date and functional instructions to help you utilize containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=995</id>
		<title>OS Change</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=995"/>
		<updated>2024-06-24T13:33:32Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* OS Change */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OS Change ==&lt;br /&gt;
On April 1, 2024, we switched our Operating System from CentOS 7 to Rocky Linux 9.&lt;br /&gt;
&lt;br /&gt;
If you had compiled your own software under CentOS (with or without our modules) you will likely need to recompile it to use it with the new operating system.&lt;br /&gt;
&lt;br /&gt;
=== Using old software ===&lt;br /&gt;
Below is a script that will execute a container with all of the public software we provide under CentOS 7 from the head nodes. There may be versions of GPU-related packages missing.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script is a wrapper for our CentOS 7 based container.&lt;br /&gt;
# You would use it something like this:&lt;br /&gt;
# sbatch -C os_el9 /opt/beocat/containers/beocat_centos-7.9.wrapper.sh ./R-hello_world.sh&lt;br /&gt;
&lt;br /&gt;
# Note that you would need to provide an appropriate path for the script to execute&lt;br /&gt;
# under the contained environment (either a full path or a relative path), and the script&lt;br /&gt;
# would need to be executable.&lt;br /&gt;
&lt;br /&gt;
# This is meant to be a stopgap measure for those that may be reliant on older software&lt;br /&gt;
# that we will not or cannot provide under our new operating system.&lt;br /&gt;
&lt;br /&gt;
apptainer exec /opt/beocat/containers/beocat_centos-7.9.sif /bin/bash -l &amp;lt;&amp;lt;EOF&lt;br /&gt;
${@}&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you would prefer, you could put the &amp;lt;tt&amp;gt;apptainer exec&amp;lt;/tt&amp;gt; lines in your script, with the commands you would like to run between the &amp;lt;tt&amp;gt;&amp;lt;&amp;lt;EOF&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;EOF&amp;lt;/tt&amp;gt; sections.&lt;br /&gt;
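The heredoc pattern used by the wrapper can be surprising if you haven't seen it before: everything between &amp;lt;tt&amp;gt;&amp;lt;&amp;lt;EOF&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;EOF&amp;lt;/tt&amp;gt; is fed to the inner shell as its input, and &amp;lt;tt&amp;gt;${@}&amp;lt;/tt&amp;gt; expands to the arguments you passed. Here is a minimal sketch of that mechanism with plain &amp;lt;tt&amp;gt;/bin/bash&amp;lt;/tt&amp;gt; standing in for the &amp;lt;tt&amp;gt;apptainer exec&amp;lt;/tt&amp;gt; call so it can run anywhere (the function name is just for illustration):&lt;br /&gt;

```shell
#!/bin/bash
# Sketch of the wrapper's heredoc pattern. The real wrapper runs
# 'apptainer exec <image> /bin/bash -l'; here a plain shell stands in.
run_wrapped() {
    # ${@} expands to the function's arguments, which become the
    # command(s) executed by the inner shell.
    /bin/bash <<EOF
${@}
EOF
}

run_wrapped echo "hello from the inner shell"
```

Run directly, this prints the echoed text from the inner shell, just as the real wrapper would run your script inside the container.&lt;br /&gt;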
&lt;br /&gt;
There will be no good way to utilize these tools with multi-node jobs, so it would be a good idea to migrate away from the CentOS 7 tools as soon as possible.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=994</id>
		<title>OS Change</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=994"/>
		<updated>2024-06-23T20:38:24Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* OS Change */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OS Change ==&lt;br /&gt;
On April 1, 2024, we switched our operating system from CentOS 7 to Rocky Linux 9.&lt;br /&gt;
&lt;br /&gt;
If you had compiled your own software under CentOS (with or without our modules) you will likely need to recompile it to use it with the new operating system.&lt;br /&gt;
&lt;br /&gt;
=== Using old software ===&lt;br /&gt;
Below is a script that will execute a container with all of the public software we provide under CentOS 7 from the head nodes. There may be versions of GPU-related packages missing.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script is a wrapper for our CentOS 7 based container.&lt;br /&gt;
# You would use it something like this:&lt;br /&gt;
# sbatch -C os_el9 /opt/beocat/containers/beocat_centos-7.9.wrapper.sh ./R-hello_world.sh&lt;br /&gt;
&lt;br /&gt;
# Note that you would need to provide an appropriate path for the script to execute&lt;br /&gt;
# under the contained environment (either a full path or a relative path), and the script&lt;br /&gt;
# would need to be executable.&lt;br /&gt;
&lt;br /&gt;
# This is meant to be a stopgap measure for those that may be reliant on older software&lt;br /&gt;
# that we will not or cannot provide under our new operating system.&lt;br /&gt;
&lt;br /&gt;
apptainer exec /opt/beocat/containers/beocat_centos-7.9.sif /bin/bash -l &amp;lt;&amp;lt;EOF&lt;br /&gt;
${@}&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you would prefer, you could put the &amp;lt;tt&amp;gt;apptainer exec&amp;lt;/tt&amp;gt; lines in your script, with the commands you would like to run between the &amp;lt;tt&amp;gt;&amp;lt;&amp;lt;EOF&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;EOF&amp;lt;/tt&amp;gt; sections.&lt;br /&gt;
&lt;br /&gt;
There will be no good way to utilize these tools with multi-node jobs, so it would be a good idea to migrate away from the CentOS 7 tools as soon as possible.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OpenOnDemand&amp;diff=986</id>
		<title>OpenOnDemand</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OpenOnDemand&amp;diff=986"/>
		<updated>2024-05-31T19:20:13Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* With conda */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OpenOnDemand ==&lt;br /&gt;
OpenOnDemand is a platform for running computational tasks on a cluster from a web browser. If those&lt;br /&gt;
tasks are interactive, it provides the ability to interact with them once the task has started its execution.&lt;br /&gt;
OpenOnDemand has an &amp;quot;App&amp;quot; based plugin system for adding new types of computational tasks and&lt;br /&gt;
interactivity.&lt;br /&gt;
&lt;br /&gt;
One of the greatest benefits of this system is remote access to large machines for computational tasks, without the need to learn a command-line interface.&lt;br /&gt;
&lt;br /&gt;
Our installation is available at https://ondemand.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
=== File Management ===&lt;br /&gt;
File management can be accessed through the Files dropdown in the dashboard.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Files Dropdown.png|Files Dropdown]]&lt;br /&gt;
&lt;br /&gt;
Once you click on Home Directory, you can manage your files: upload, download, rename, edit, and view them.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Files Launch.png|The OpenOnDemand File management application]]&lt;br /&gt;
&lt;br /&gt;
If you're looking for a way to get your files into and out of OneDrive, Google Drive, or other cloud providers, you may be interested in taking a look at our documentation for [[Onedrive Data Transfer]].&lt;br /&gt;
&lt;br /&gt;
=== Job Management ===&lt;br /&gt;
A cluster isn't much of a cluster if it can't run jobs for you to look up later. OpenOnDemand has a robust job management application built in.&lt;br /&gt;
&lt;br /&gt;
It is accessible from the Jobs dropdown in the dashboard.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS DROPDOWN ACTIVE.png|Screenshot of the Jobs dropdown in the openondemand dashboard]]&lt;br /&gt;
==== View Active Jobs ====&lt;br /&gt;
You can view your active jobs and get more information about them from the Active Jobs option in the Jobs dropdown&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS ACTIVE.png|Active jobs app in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
==== Compose Jobs ====&lt;br /&gt;
You can create new jobs through the &amp;quot;Job Composer&amp;quot; in the jobs dropdown.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS COMPOSER NEW.png|Screenshot showing the ability to create new jobs within the ood job composer]]&lt;br /&gt;
&lt;br /&gt;
If you create a new job from a template, you're given a list of templates to use:&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS COMPOSER NEW FROM TEMPLATE.png|Screenshot of some example templates for jobs within openondemand]]&lt;br /&gt;
&lt;br /&gt;
If you choose the default template, you can run it as-is, or edit the job script to make it do what you would like.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS COMPOSER SUBMIT.png|Screenshot showing the ability to submit or edit jobs within the Job composer in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
=== Interactive Applications ===&lt;br /&gt;
We have a number of interactive applications available through OpenOnDemand:&lt;br /&gt;
* Beocat Desktop&lt;br /&gt;
* [https://www.comsol.com/ COMSOL]&lt;br /&gt;
* [https://www.gnu.org/software/octave/ Octave]&lt;br /&gt;
* [https://www.ks.uiuc.edu/Research/vmd/ VMD]&lt;br /&gt;
* [https://www.wolfram.com/mathematica/ Mathematica] Please note, this is from a site license limited to KSU students, faculty and staff.&lt;br /&gt;
* [https://afni.nimh.nih.gov/ AFNI]&lt;br /&gt;
* [https://coder.com/ CodeServer] is a cloud native version of VS Code that runs on the compute nodes. Useful, since VS Code's remote connections cannot be used with Beocat. Other names for VS Code may be VSCode or Visual Studio Code.&lt;br /&gt;
* [https://jupyter.org/ Jupyter]&lt;br /&gt;
* [https://www.rstudio.com/products/rstudio-server/ RStudio]&lt;br /&gt;
==== RStudio ====&lt;br /&gt;
RStudio is one of the interactive applications that we've enabled for use within OpenOnDemand&lt;br /&gt;
&lt;br /&gt;
You launch interactive apps through the &amp;quot;Interactive Apps&amp;quot; dropdown.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Interactive Apps Dropdown.png|Interactive apps dropdown in the dashboard]]&lt;br /&gt;
&lt;br /&gt;
Once you click on RStudio, you'll be brought to a page allowing you to specify requirements for your RStudio run, e.g. memory, cores, and runtime.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Interactive RStudio Launch.png|Screenshot showing the options for submitting an RStudio job in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
Once the job is submitted, the scheduler will take it and run it when space is available. Once the job is running, the &amp;quot;My Interactive Sessions&amp;quot; page will look like this:&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Interactive RStudio Connection.png|Screenshot showing the ability to connect to an RStudio job in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
From there, you can connect to RStudio and it will bring you to a familiar interface.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood RStudio.png|Screenshot showing RStudio through OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
==== Jupyter ====&lt;br /&gt;
Like RStudio above, click on Interactive Apps and then go to Jupyter. From there, you'll have a form that allows you to specify requirements for your Jupyter run.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JUPYTER LAUNCH.png|Screenshot of options to launch Jupyter]]&lt;br /&gt;
&lt;br /&gt;
Once the job is launched, it will take you to a page where you can connect to your running Jupyter service.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood INTERACTIVE APPS JUPYTER.png|Screenshot of connection option for jupyter]]&lt;br /&gt;
&lt;br /&gt;
It will then take you to the interface you chose, below is the JupyterLab interface:&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JUPYTER LAB.png|Screenshot of JupyterLab through OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
Jupyter Kernels currently supported:&lt;br /&gt;
* Python 2&lt;br /&gt;
* Python 3&lt;br /&gt;
* R&lt;br /&gt;
* Octave&lt;br /&gt;
* Sage&lt;br /&gt;
&lt;br /&gt;
Julia support will come, but each user will need to set it up individually. There is currently a significant bug involving Julia, CentOS/RHEL, and our shared filesystem (Ceph).&lt;br /&gt;
&lt;br /&gt;
===== Extra Python libraries =====&lt;br /&gt;
====== Without extra setup ======&lt;br /&gt;
You may need to install extra Python libraries to use with your Jupyter Python kernels. For instance, this is how you'd install tobler:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
!pip install --user tobler&lt;br /&gt;
&lt;br /&gt;
# Sometimes Jupyter notebook needs to then be told how to find the libraries you've installed in that manner.&lt;br /&gt;
# Your username should be put in place of the &amp;lt;PUT_YOUR_USERNAME_HERE&amp;gt; text.&lt;br /&gt;
# this will need to change if you are not using a 3.7 kernel&lt;br /&gt;
import sys&lt;br /&gt;
sys.path.append(&amp;quot;/homes/&amp;lt;PUT_YOUR_USERNAME_HERE&amp;gt;/.local/lib/python3.7/site-packages&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
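Rather than hard-coding your username and Python version into that path, you can let Python's standard library compute the per-user site-packages directory for you. This is a small sketch using the standard &amp;lt;tt&amp;gt;site&amp;lt;/tt&amp;gt; module; the exact path it returns depends on the interpreter and kernel you are running:&lt;br /&gt;

```python
import site
import sys

# site.getusersitepackages() returns the per-user site-packages
# directory for the *running* interpreter (on Linux, something like
# ~/.local/lib/pythonX.Y/site-packages), so it tracks the kernel's
# Python version automatically.
user_site = site.getusersitepackages()
if user_site not in sys.path:
    sys.path.append(user_site)
print(user_site)
```

This avoids the path going stale if you switch to a kernel with a different Python version.&lt;br /&gt;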
&lt;br /&gt;
====== With python virtual environments ======&lt;br /&gt;
Sometimes it is useful to have separation between your various projects, for instance being able to use multiple versions of a python library in different projects.&lt;br /&gt;
&lt;br /&gt;
You can set up a virtual environment (or many) for use with our Jupyter environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# First we'll activate the ability to use the ondemand modules:&lt;br /&gt;
module use /opt/beocat/ondemand_modules&lt;br /&gt;
&lt;br /&gt;
# Then we can list the available jupyter_python modules (so we can use one to create the virtual environment)&lt;br /&gt;
module avail jupyter_python&lt;br /&gt;
&lt;br /&gt;
# Load the version you would like (ideally a jupyter_python module; otherwise the virtual environment itself will have to have a decent number of Jupyter libraries installed into it)&lt;br /&gt;
module load jupyter_python/3.8.6-TensorFlow-2.4.1&lt;br /&gt;
&lt;br /&gt;
# If you'd like to see what libraries this actually loaded, you can check it with the following:&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
# Create a virtual environment, activate it, and install any libraries you need.&lt;br /&gt;
python -m venv --system-site-packages /homes/mozes/virtualenvs/testing_ondemand_jupyter&lt;br /&gt;
. /homes/mozes/virtualenvs/testing_ondemand_jupyter/bin/activate&lt;br /&gt;
pip install # insert needed libraries here&lt;br /&gt;
&lt;br /&gt;
# here we create a directory to hold the configuration files for telling our Jupyter environment about your virtual environment&lt;br /&gt;
mkdir -p ~/ondemand/jupyter_kernel_configs&lt;br /&gt;
&lt;br /&gt;
# Now we need to create a configuration file to instruct Jupyter to find this virtual environment&lt;br /&gt;
nano ~/ondemand/jupyter_kernel_configs/my_environment_name.sh&lt;br /&gt;
&lt;br /&gt;
# in that file should be lines like the following:&lt;br /&gt;
NAME=&amp;quot;testing_ondemand_virtualenv&amp;quot;&lt;br /&gt;
VIRTUAL_ENV=&amp;quot;/homes/mozes/virtualenvs/testing_ondemand_jupyter&amp;quot;&lt;br /&gt;
MODULES=&amp;quot;jupyter_python/3.8.6-TensorFlow-2.4.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# of course, you should provide your own name and path to the virtual environment. Please don't put spaces in the name.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Once you start a new Jupyter session, it should list a new kernel option that uses your virtual environment.&lt;br /&gt;
&lt;br /&gt;
====== With conda ======&lt;br /&gt;
Conda environments should automatically show up if ''conda'' is in the PATH. For them to function properly, they will need at least the ipykernel library installed into them.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;tt&amp;gt;conda install -c conda-forge ipykernel&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Beocat Desktop ====&lt;br /&gt;
Sometimes, you just need a Desktop somewhere to run your graphical applications. This can be done through the Beocat Desktop option in the Interactive Apps dropdown on the dashboard.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD DESKTOP LAUNCH.png|Screenshot of the options to launch a graphical desktop through OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
Once launched, you'll be able to connect to the desktop through VNC from the &amp;quot;My Interactive Sessions&amp;quot; tab in OpenOnDemand.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD INTERACTIVE APPS DESKTOP.png|Screenshot of VNC options for Desktop in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
Once you've launched the Beocat Desktop, you can interact with it like a normal desktop through the browser.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD DESKTOP VNC.png|Screenshot of VNC Beocat Desktop]]&lt;br /&gt;
&lt;br /&gt;
=== Shell Access ===&lt;br /&gt;
Some things, no matter how hard we try, are easier to do via the command line. OpenOnDemand also gives you a way to handle those cases via the Clusters dropdown.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Clusters Dropdown.png|A screenshot showing the clusters dropdown from the OpenOnDemand dashboard]]&lt;br /&gt;
&lt;br /&gt;
You can choose an individual headnode if need be, or you can choose &amp;quot;Beocat Shell Access&amp;quot; to be given one of the headnodes at random. Once chosen, you should have a familiar command-line experience.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Clusters Launch.png|A Screenshot showing shell access through OpenOnDemand]]&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=982</id>
		<title>AdvancedSlurm</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=982"/>
		<updated>2024-05-02T20:22:18Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* CUDA */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SlurmBasics]] page, we have a couple other requestable resources:&lt;br /&gt;
 Valid gres options are:&lt;br /&gt;
 gpu[[:type]:count]&lt;br /&gt;
 fabric[[:type]:count]&lt;br /&gt;
Generally, if you don't know whether you need a particular resource, you should use the default. A list of valid options can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;srun --gres=help&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Fabric ===&lt;br /&gt;
We currently offer 3 &amp;quot;fabrics&amp;quot; as request-able resources in Slurm. The &amp;quot;count&amp;quot; specified is the line-rate (in Gigabits-per-second) of the connection on the node.&lt;br /&gt;
==== Infiniband ====&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. InfiniBand does absolutely no good if running on a single machine. InfiniBand is a high-speed host-to-host communication fabric. It is (most-often) used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could run just fine, except that the submitter requested InfiniBand, and all the nodes with InfiniBand were currently busy. In fact, some of our fastest nodes do not have InfiniBand, so by requesting it when you don't need it, you are actually slowing down your job. To request Infiniband, add &amp;lt;tt&amp;gt;--gres=fabric:ib:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
==== ROCE ====&lt;br /&gt;
ROCE, like InfiniBand is a high-speed host-to-host communication layer. Again, used most often with MPI. Most of our nodes are ROCE enabled, but this will let you guarantee the nodes allocated to your job will be able to communicate with ROCE. To request ROCE, add &amp;lt;tt&amp;gt;--gres=fabric:roce:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
&lt;br /&gt;
==== Ethernet ====&lt;br /&gt;
Ethernet is another communication fabric. All of our nodes are connected by ethernet; this option simply allows you to specify the interconnect speed. Speeds are selected in units of Gbps, with all nodes supporting 1Gbps or above. The currently available speeds for ethernet are: &amp;lt;tt&amp;gt;1, 10, 40, and 100&amp;lt;/tt&amp;gt;. To select nodes with 40Gbps and above, you could specify &amp;lt;tt&amp;gt;--gres=fabric:eth:40&amp;lt;/tt&amp;gt; on your sbatch command-line. Since ethernet is used to connect to the file server, this can be used to select nodes that have fast access for applications doing heavy IO. The Dwarves and Heroes have 40 Gbps ethernet and we measure single-stream performance as high as 20 Gbps, but if your application&lt;br /&gt;
requires heavy IO then you'd want to avoid the Moles, which are connected to the file server with only 1 Gbps ethernet.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. 'kstat -g' will show you the GPU nodes and the jobs running on them.  To request a GPU, add &amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt;, for example, to request 1 GPU for your job; if your job uses multiple nodes, the number of GPUs requested is per-node.  You can also request a given type of GPU ('kstat -g -l' shows the types) by using &amp;lt;tt&amp;gt;--gres=gpu:geforce_gtx_1080_ti:1&amp;lt;/tt&amp;gt; for a 1080Ti GPU on the Wizards or Dwarves, or &amp;lt;tt&amp;gt;--gres=gpu:quadro_gp100:1&amp;lt;/tt&amp;gt; for the P100 GPUs on Wizard20-21 that are best for 64-bit codes like Vasp.  Most of these GPU nodes are owned by various groups.  If you want access to GPU nodes and your group does not own any, we can add you to the &amp;lt;tt&amp;gt;ksu-gen-gpu.q&amp;lt;/tt&amp;gt; partition (&amp;lt;tt&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/tt&amp;gt;), which has priority on Dwarf36-39.  For more information on compiling CUDA code, see the [[CUDA]] page.&lt;br /&gt;
&lt;br /&gt;
A listing of the current types of gpus can be gathered with this command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
scontrol show nodes | grep CfgTRES | tr ',' '\n' | awk -F '[:=]' '/gres\/gpu:/ { print $2 }' | sort -u&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
At the time of this writing, that command produces this list:&lt;br /&gt;
* geforce_gtx_1080_ti&lt;br /&gt;
* geforce_rtx_2080_ti&lt;br /&gt;
* geforce_rtx_3090&lt;br /&gt;
* l40s&lt;br /&gt;
* quadro_gp100&lt;br /&gt;
* rtx_a4000&lt;br /&gt;
* rtx_a6000&lt;br /&gt;
&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misperception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
''Intra''node jobs run on many cores in the same node. These jobs can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or any programming language that has the concept of ''threads''. Often, your program will need to know how many cores you want it to use, and many will use all available cores if not told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the sbatch directives '&amp;lt;tt&amp;gt;--nodes=1 --cpus-per-task=n&amp;lt;/tt&amp;gt;' or '&amp;lt;tt&amp;gt;--nodes=1 --ntasks-per-node=n&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $SLURM_CPUS_ON_NODE to tell how many cores you've been allocated.&lt;br /&gt;
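As a sketch of how a job script might consume that core count, the snippet below reads &amp;lt;tt&amp;gt;$SLURM_CPUS_ON_NODE&amp;lt;/tt&amp;gt; and exports it as &amp;lt;tt&amp;gt;OMP_NUM_THREADS&amp;lt;/tt&amp;gt; (the variable OpenMP programs honor); the fallback to 1 is an assumption for running the script outside a job, where Slurm hasn't set the variable:&lt;br /&gt;

```shell
#!/bin/bash
# Use the core count Slurm allocated, falling back to 1 if the
# variable is unset (e.g. when testing the script outside a job).
NCORES="${SLURM_CPUS_ON_NODE:-1}"

# Exporting OMP_NUM_THREADS keeps an OpenMP program from grabbing
# every core on a shared node.
export OMP_NUM_THREADS="${NCORES}"
echo "running with ${OMP_NUM_THREADS} thread(s)"
```

Pairing this with '&amp;lt;tt&amp;gt;--nodes=1 --cpus-per-task=n&amp;lt;/tt&amp;gt;' means your program's thread count always matches what the scheduler actually gave you.&lt;br /&gt;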
&lt;br /&gt;
=== Internode (MPI) jobs ===&lt;br /&gt;
''Inter''node jobs can utilize many cores on one or more nodes. Communicating between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI, but MPI also allows an application to run on multiple cores within a node. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;--cpus-per-task=''n''&amp;lt;/tt&amp;gt;', you would use '&amp;lt;tt&amp;gt;--nodes=''n'' --tasks-per-node=''m''&amp;lt;/tt&amp;gt;' ''or'' '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks=''o''&amp;lt;/tt&amp;gt;' for your sbatch request, where ''n'' is the number of nodes you want, ''m'' is the number of cores per node you need, and ''o'' is the total number of cores you need.&lt;br /&gt;
&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=6 --ntasks-per-node=4&amp;lt;/tt&amp;gt; will give you 4 cores on each of 6 nodes for a total of 24 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=40&amp;lt;/tt&amp;gt; will give you 40 cores spread across any number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=10 --ntasks=100&amp;lt;/tt&amp;gt; will give you a total of 100 cores across 10 nodes.&lt;br /&gt;
&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
Memory requests are easiest when they are specified '''per core'''. For instance, if you specified '&amp;lt;tt&amp;gt;--ntasks=20 --mem-per-cpu=20G&amp;lt;/tt&amp;gt;', your job would have access to 400GB of memory total.&lt;br /&gt;
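That total is just the task count multiplied by the per-CPU memory, which a job script can compute with shell arithmetic (the variable names here are purely for illustration):&lt;br /&gt;

```shell
#!/bin/bash
# total memory = number of tasks x memory per CPU
NTASKS=20
MEM_PER_CPU_G=20
TOTAL_G=$(( NTASKS * MEM_PER_CPU_G ))
echo "total memory: ${TOTAL_G}G"   # prints "total memory: 400G"
```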
== Other Handy Slurm Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs, aside from resource requests, is having Slurm email you when a job changes its status. This may require two directives to sbatch:  &amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt;.&lt;br /&gt;
==== --mail-type ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; is used to tell Slurm to notify you about certain conditions. Options are comma separated and include the following&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Option!!Explanation&lt;br /&gt;
|-&lt;br /&gt;
| NONE || This disables event-based mail&lt;br /&gt;
|-&lt;br /&gt;
| BEGIN || Sends a notification when the job begins&lt;br /&gt;
|-&lt;br /&gt;
| END || Sends a notification when the job ends&lt;br /&gt;
|-&lt;br /&gt;
| FAIL || Sends a notification when the job fails.&lt;br /&gt;
|-&lt;br /&gt;
| REQUEUE || Sends a notification if the job is put back into the queue from a running state&lt;br /&gt;
|-&lt;br /&gt;
| STAGE_OUT || Burst buffer stage out and teardown completed&lt;br /&gt;
|-&lt;br /&gt;
| ALL || Equivalent to BEGIN,END,FAIL,REQUEUE,STAGE_OUT&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT || Notifies if the job ran out of time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_90 || Notifies when the job has used 90% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_80 || Notifies when the job has used 80% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_50 || Notifies when the job has used 50% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| ARRAY_TASKS || Modifies the BEGIN, END, and FAIL options to apply to each array task (instead of notifying for the entire job)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== --mail-user ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; is optional. It is only needed if you intend to send these job status updates to a different e-mail address than what you provided in the [https://acount.beocat.ksu.edu/user Account Request Page]. It is specified with the following arguments to sbatch: &amp;lt;tt&amp;gt;--mail-user=someone@somecompany.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-J ''JobName''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
&lt;br /&gt;
=== Separating Output Streams ===&lt;br /&gt;
Normally, Slurm will create one output file, containing both STDERR and STDOUT. If you want both of these to be separated into two files, you can use the sbatch directives '&amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt;' and '&amp;lt;tt&amp;gt;--error&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! option !! default !! example&lt;br /&gt;
|-&lt;br /&gt;
| --output || slurm-%j.out || slurm-206.out&lt;br /&gt;
|-&lt;br /&gt;
| --error || slurm-%j.out || slurm-206.out&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;tt&amp;gt;%j&amp;lt;/tt&amp;gt; above indicates that it should be replaced with the job id.&lt;br /&gt;
&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, Slurm jobs run from the directory in which you ran sbatch, so programs that assume they are running from the submission directory work as expected. If you need a different working directory, you can use the '&amp;lt;tt&amp;gt;--chdir&amp;lt;/tt&amp;gt;' directive to set it.&lt;br /&gt;
=== Running in a specific class of machine ===&lt;br /&gt;
If you want to run on a specific class of machines, e.g., the Dwarves, you can add the flag &amp;quot;--constraint=dwarves&amp;quot; to select any of those machines.&lt;br /&gt;
&lt;br /&gt;
=== Processor Constraints ===&lt;br /&gt;
Because Beocat is a heterogeneous cluster (we have machines from many years in the cluster), not all of our processors support every new and fancy feature. You might have some applications that require newer processor features, so we provide a mechanism to request those.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--constraint&amp;lt;/tt&amp;gt; tells the cluster to apply constraints to the types of nodes that the job can run on. For instance, we know of several applications that must be run on chips that have &amp;quot;AVX&amp;quot; processor extensions. To do that, you would specify &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; on your ''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt;'' '''or''' ''&amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;'' command line.&lt;br /&gt;
Using &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; will prohibit your job from running on the Mages, while &amp;lt;tt&amp;gt;--constraint=avx2&amp;lt;/tt&amp;gt; will eliminate the Elves as well as the Mages.&lt;br /&gt;
&lt;br /&gt;
=== Slurm Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to set up your scripts correctly. Here is a listing of environment variables that Slurm makes available to you. Of course, the values of these variables will differ based on many factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
CUDA_VISIBLE_DEVICES=NoDevFiles&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
GPU_DEVICE_ORDINAL=NoDevFiles&lt;br /&gt;
HOSTNAME=dwarf37&lt;br /&gt;
SLURM_CHECKPOINT_IMAGE_DIR=/var/slurm/checkpoint&lt;br /&gt;
SLURM_CLUSTER_NAME=beocat&lt;br /&gt;
SLURM_CPUS_ON_NODE=1&lt;br /&gt;
SLURM_DISTRIBUTION=cyclic&lt;br /&gt;
SLURMD_NODENAME=dwarf37&lt;br /&gt;
SLURM_GTIDS=0&lt;br /&gt;
SLURM_JOB_CPUS_PER_NODE=1&lt;br /&gt;
SLURM_JOB_GID=163587&lt;br /&gt;
SLURM_JOB_ID=202&lt;br /&gt;
SLURM_JOBID=202&lt;br /&gt;
SLURM_JOB_NAME=slurm_simple.sh&lt;br /&gt;
SLURM_JOB_NODELIST=dwarf37&lt;br /&gt;
SLURM_JOB_NUM_NODES=1&lt;br /&gt;
SLURM_JOB_PARTITION=batch.q,killable.q&lt;br /&gt;
SLURM_JOB_QOS=normal&lt;br /&gt;
SLURM_JOB_UID=163587&lt;br /&gt;
SLURM_JOB_USER=mozes&lt;br /&gt;
SLURM_LAUNCH_NODE_IPADDR=10.5.16.37&lt;br /&gt;
SLURM_LOCALID=0&lt;br /&gt;
SLURM_MEM_PER_NODE=1024&lt;br /&gt;
SLURM_NNODES=1&lt;br /&gt;
SLURM_NODEID=0&lt;br /&gt;
SLURM_NODELIST=dwarf37&lt;br /&gt;
SLURM_NPROCS=1&lt;br /&gt;
SLURM_NTASKS=1&lt;br /&gt;
SLURM_PRIO_PROCESS=0&lt;br /&gt;
SLURM_PROCID=0&lt;br /&gt;
SLURM_SRUN_COMM_HOST=10.5.16.37&lt;br /&gt;
SLURM_SRUN_COMM_PORT=37975&lt;br /&gt;
SLURM_STEP_ID=0&lt;br /&gt;
SLURM_STEPID=0&lt;br /&gt;
SLURM_STEP_LAUNCHER_PORT=37975&lt;br /&gt;
SLURM_STEP_NODELIST=dwarf37&lt;br /&gt;
SLURM_STEP_NUM_NODES=1&lt;br /&gt;
SLURM_STEP_NUM_TASKS=1&lt;br /&gt;
SLURM_STEP_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_SUBMIT_DIR=/homes/mozes&lt;br /&gt;
SLURM_SUBMIT_HOST=dwarf37&lt;br /&gt;
SLURM_TASK_PID=23408&lt;br /&gt;
SLURM_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_TOPOLOGY_ADDR=due1121-prod-core-40g-a1,due1121-prod-core-40g-c1.due1121-prod-sw-100g-a9.dwarf37&lt;br /&gt;
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node&lt;br /&gt;
SLURM_UMASK=0022&lt;br /&gt;
SRUN_DEBUG=3&lt;br /&gt;
TERM=screen-256color&lt;br /&gt;
TMPDIR=/tmp&lt;br /&gt;
USER=mozes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is nice to know what hosts you have access to during a job; check SLURM_JOB_NODELIST for that. There are lots of useful environment variables here, and I will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly-used variables we see used are $SLURM_CPUS_ON_NODE, $HOSTNAME, and $SLURM_JOB_ID.&lt;br /&gt;
&lt;br /&gt;
== Running from a sbatch Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'sbatch --mem-per-cpu=2G --time=10:00 --cpus-per-task=8 -J MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which outlines all of these for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample sbatch script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of the line is ignored. However, in&lt;br /&gt;
## the case of sbatch, lines beginning with #SBATCH are commands for sbatch&lt;br /&gt;
## itself, so I have taken the convention here of starting *every* line with a&lt;br /&gt;
## '#'. Just delete the first '#' if you want to use that line, and then modify&lt;br /&gt;
## it to your own purposes. The only exception here is the first line, which&lt;br /&gt;
## *must* be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## There is one strict rule for guaranteeing Slurm reads all of your options:&lt;br /&gt;
## Do not put *any* lines above your resource requests that aren't either:&lt;br /&gt;
##    1) blank. (no other characters)&lt;br /&gt;
##    2) comments (lines must begin with '#')&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##SBATCH --mem-per-cpu=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime in DD-HH:MM:SS form. Default is 1 hour (1:00:00)&lt;br /&gt;
##SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it.&lt;br /&gt;
##SBATCH --gres=fabric:ib:1&lt;br /&gt;
&lt;br /&gt;
## GPU directive. If You don't know what this is, you probably don't need it&lt;br /&gt;
##SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
## number of cores/nodes:&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from emailing us at beocat@cs.ksu.edu to see how we can assist in&lt;br /&gt;
## getting your job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
##SBATCH --cpus-per-task=1&lt;br /&gt;
##SBATCH --cpus-per-task=12&lt;br /&gt;
##SBATCH --nodes=2 --tasks-per-node=1&lt;br /&gt;
##SBATCH --tasks=20&lt;br /&gt;
&lt;br /&gt;
## Constraints for this job. Maybe you need to run on the elves&lt;br /&gt;
##SBATCH --constraint=elves&lt;br /&gt;
## or perhaps you just need avx processor extensions&lt;br /&gt;
##SBATCH --constraint=avx&lt;br /&gt;
&lt;br /&gt;
## Output file name. Default is slurm-%j.out where %j is the job id.&lt;br /&gt;
##SBATCH --output=MyJobTitle.o%j&lt;br /&gt;
&lt;br /&gt;
## Split the errors into a separate file. Default is the same as output&lt;br /&gt;
##SBATCH --error=MyJobTitle.e%j&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##SBATCH -J MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## Send email when certain criteria are met.&lt;br /&gt;
## Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to&lt;br /&gt;
## BEGIN, END, FAIL, REQUEUE,  and  STAGE_OUT),  STAGE_OUT  (burst buffer stage&lt;br /&gt;
## out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent&lt;br /&gt;
## of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),&lt;br /&gt;
## TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send&lt;br /&gt;
## emails for each array task). Multiple type values may be specified in a&lt;br /&gt;
## comma separated list. Unless the  ARRAY_TASKS  option  is specified, mail&lt;br /&gt;
## notifications on job BEGIN, END and FAIL apply to a job array as a whole&lt;br /&gt;
## rather than generating individual email messages for each task in the job&lt;br /&gt;
## array.&lt;br /&gt;
##SBATCH --mail-type=ALL&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
## Default is to send the mail to the e-mail address entered on the account&lt;br /&gt;
## request form.&lt;br /&gt;
##SBATCH --mail-user=myemail@ksu.edu&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== File Access ==&lt;br /&gt;
Beocat has a variety of options for storing and accessing your files.  &lt;br /&gt;
Every user has a home directory for general use, which is limited in size but has decent file access performance.  Those needing more storage may purchase /bulk subdirectories, which have the same decent performance&lt;br /&gt;
but are not backed up.  The /fastscratch file system is a ZFS server with many NVMe drives that provides much faster&lt;br /&gt;
temporary file access.  When fast IO is critical to application performance, /fastscratch, the local disk on each node, or a&lt;br /&gt;
RAM disk are the best options.&lt;br /&gt;
&lt;br /&gt;
===Home directory===&lt;br /&gt;
&lt;br /&gt;
Every user has a &amp;lt;tt&amp;gt;/homes/''username''&amp;lt;/tt&amp;gt; directory that they drop into when they log into Beocat.  &lt;br /&gt;
The home directory is for general use and provides decent performance for most file IO.  &lt;br /&gt;
Disk space in each home directory is limited to 1 TB, so larger files should be kept in a purchased /bulk&lt;br /&gt;
directory, and there is a limit of 100,000 files in each subdirectory in your account.&lt;br /&gt;
This file system is fully redundant, so 3 specific hard disks would need to fail before any data was lost.&lt;br /&gt;
All files will soon be backed up nightly to a separate file server in Nichols Hall, so if you do accidentally &lt;br /&gt;
delete something it can be recovered.&lt;br /&gt;
&lt;br /&gt;
===Bulk directory===&lt;br /&gt;
&lt;br /&gt;
Bulk data storage may be provided at a cost of $45/TB/year billed monthly. Due to the cost, directories will be provided when we are contacted and provided with payment information.&lt;br /&gt;
&lt;br /&gt;
===Fast Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /fastscratch file system is faster than /bulk or /homes.&lt;br /&gt;
In order to use fastscratch, you first need to make a directory for yourself.  &lt;br /&gt;
Fast Scratch is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the fastscratch file system may get purged after 30 days.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /fastscratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Local disk===&lt;br /&gt;
&lt;br /&gt;
If you are running on a single node, it may also be faster to access your files from the local disk&lt;br /&gt;
on that node.  Each job creates a subdirectory /tmp/job# where '#' is the job ID number on the&lt;br /&gt;
local disk of each node the job uses.  This can be accessed simply by writing to /tmp rather than&lt;br /&gt;
needing to use /tmp/job#.  &lt;br /&gt;
&lt;br /&gt;
You may need to copy files to the&lt;br /&gt;
local disk at the start of your script, or set the output directory for your application to point&lt;br /&gt;
to the local disk.  You will then need to copy any files you want to keep off the local disk before&lt;br /&gt;
the job finishes, since Slurm will remove all files in your job's directory on /tmp when the job&lt;br /&gt;
completes or aborts.  Use 'kstat -l -h' to see how much /tmp space is available on each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files to the tmp directory if needed&lt;br /&gt;
cp $input_files /tmp&lt;br /&gt;
&lt;br /&gt;
# Make an 'out' directory to pass to the app if needed&lt;br /&gt;
mkdir /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Example of running an app and passing the tmp directory in/out&lt;br /&gt;
app -input_directory /tmp -output_directory /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Copy the 'out' directory back to the current working directory after the run&lt;br /&gt;
cp -rp /tmp/out .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAM disk===&lt;br /&gt;
&lt;br /&gt;
If you need ultrafast access to files, you can use a RAM disk which is a file system set up in the &lt;br /&gt;
memory of the compute node you are running on.  The RAM disk is limited to the requested memory on that node, so you should account for this usage when you request &lt;br /&gt;
memory for your job. Below is an example of how to use the RAM disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files over if necessary&lt;br /&gt;
cp $any_input_files /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Run the application, possibly giving it the path to the RAM disk to use for output files&lt;br /&gt;
app -output_directory /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Copy files from the RAM disk back to the current working directory&lt;br /&gt;
cp /dev/shm/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When you leave KSU===&lt;br /&gt;
&lt;br /&gt;
If you are done with your account and leaving KSU, please clean up your directory, move any files&lt;br /&gt;
to your supervisor's account that need to be kept after you leave, and notify us so that we can disable your&lt;br /&gt;
account.  The easiest way to move your files to your supervisor's account is for them to set up&lt;br /&gt;
a subdirectory for you with the appropriate write permissions.  The example below shows moving &lt;br /&gt;
just a user's 'data' subdirectory to their supervisor.  The 'nohup' command is used so that the move will &lt;br /&gt;
continue even if the window you are doing the move from gets disconnected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Supervisor:&lt;br /&gt;
mkdir /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&lt;br /&gt;
# Student:&lt;br /&gt;
nohup mv /homes/$USER/data /bulk/$SUPERVISOR_USERNAME/$USER &amp;amp;&lt;br /&gt;
&lt;br /&gt;
# Once the move is complete, the Supervisor should limit the permissions for the directory again by removing the student's access:&lt;br /&gt;
chown $USER: -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==File Sharing==&lt;br /&gt;
&lt;br /&gt;
This section will cover methods of sharing files with other users within Beocat and on remote systems.&lt;br /&gt;
In the past, Beocat users were allowed to keep their&lt;br /&gt;
/homes and /bulk directories open so that any other user could&lt;br /&gt;
access their files.  To bring Beocat into alignment with&lt;br /&gt;
State of Kansas regulations and industry norms, all users must now have their /homes, /bulk, /scratch, and /fastscratch directories&lt;br /&gt;
locked down from other users.  You can still share files and directories within your group or with individual users&lt;br /&gt;
using group and individual ACLs (Access Control Lists), as explained below.&lt;br /&gt;
Beocat staff are exempt from this&lt;br /&gt;
policy, as we need to work freely with all users, and we manage our own&lt;br /&gt;
subdirectories to minimize access.&lt;br /&gt;
&lt;br /&gt;
===Securing your home directory with the setacls script===&lt;br /&gt;
&lt;br /&gt;
If you do not wish to share files or directories with other users, you do not need to do anything&lt;br /&gt;
as rwx access to others has already been removed.&lt;br /&gt;
If you want to share files or directories, you can either use the '''setacls''' script or configure&lt;br /&gt;
the ACLs (Access Control Lists) manually.&lt;br /&gt;
&lt;br /&gt;
Running '''setacls -h''' will show how to use the script.&lt;br /&gt;
  &lt;br /&gt;
  Eos: setacls -h&lt;br /&gt;
  setacls [-r] [-w] [-g group] [-u user] -d /full/path/to/directory&lt;br /&gt;
  Execute permission will always be applied, you may also choose r or w&lt;br /&gt;
  Must specify at least one group or user&lt;br /&gt;
  Must specify at least one directory, and it must be the full path&lt;br /&gt;
  Example: setacls -r -g ksu-cis-hpc -u mozes -d /homes/daveturner/shared_dir&lt;br /&gt;
&lt;br /&gt;
You can specify the permissions to be either -r for read or -w for write or you can specify both.&lt;br /&gt;
You can provide a priority group to share with, which is the same as the group used in a --partition=&lt;br /&gt;
statement in a job submission script.  You can also specify users.&lt;br /&gt;
You can specify a file or a directory to share.  If a directory is specified, then all files in that&lt;br /&gt;
directory will also be shared, and all files created in the directory later will also be shared.&lt;br /&gt;
&lt;br /&gt;
The script will set everything up for you, telling you the commands it is executing along the way,&lt;br /&gt;
then show the resulting ACLs at the end with the '''getfacl''' command.&lt;br /&gt;
&lt;br /&gt;
====Manually configuring your ACLs====&lt;br /&gt;
&lt;br /&gt;
If you want to manually configure the ACLs, you can use the directions below to do what the '''setacls'''&lt;br /&gt;
script would do for you.&lt;br /&gt;
You first need to provide the minimum execute access to your /homes&lt;br /&gt;
or /bulk directory before sharing individual subdirectories.  Setting the ACL to execute only allows those&lt;br /&gt;
in your group to traverse into the subdirectories you share, while withholding read access means they cannot&lt;br /&gt;
list the other files and subdirectories in your main directory.  Keep in mind that they can still access those&lt;br /&gt;
files if they know the names, so you may want to lock them down individually.  Below is an example of how I would change my&lt;br /&gt;
/homes/daveturner directory to allow the ksu-cis-hpc group execute access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:X /homes/daveturner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your research group owns any nodes on Beocat, then you have a group name that can be used to securely share&lt;br /&gt;
files with others within your group.  Below is an example of creating a directory called 'share_hpc',&lt;br /&gt;
then providing access to my ksu-cis-hpc group&lt;br /&gt;
(my group is ksu-cis-hpc, so I submit jobs with --partition=ksu-cis-hpc.q).&lt;br /&gt;
Using -R applies these changes recursively to all files and directories in that subdirectory, while changing the defaults with the setfacl -d command ensures that files and directories created&lt;br /&gt;
later will get these same ACLs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the 'share_hpc' directory.  If you also want&lt;br /&gt;
them to be able to write or modify files in that directory then change the ':rX' to ':rwX' instead. e.g. 'setfacl -d -m g:ksu-cis-hpc:rwX -R share_hpc'&lt;br /&gt;
&lt;br /&gt;
If you want to know what groups you belong to use the line below.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
groups&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If your group does not own any nodes, you can still request a group name and manage the participants yourself&lt;br /&gt;
by emailing us at beocat@cs.ksu.edu.&lt;br /&gt;
If you want to share a directory with only a few people, you can manage your ACLs using individual usernames&lt;br /&gt;
instead of a group.&lt;br /&gt;
&lt;br /&gt;
You can use the '''getfacl''' command to see which groups and users have access to a given directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
getfacl share_hpc&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ACLs give you great flexibility in controlling file access at the&lt;br /&gt;
group level.  Below is a more advanced example where I set up a directory to be shared with&lt;br /&gt;
my ksu-cis-hpc group, Dan's ksu-cis-dan group, and an individual user 'mozes' who I also want&lt;br /&gt;
to have write access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc_dan_mozes&lt;br /&gt;
# acls are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
getfacl share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc_dan_mozes&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  user:mozes:rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  group:ksu-cis-dan:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:user:mozes:rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:group:ksu-cis-dan:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Openly sharing files on the web===&lt;br /&gt;
&lt;br /&gt;
If you create a 'public_html' directory in your home directory, then any files put there will be shared&lt;br /&gt;
openly on the web.  There is no way to restrict who has access to those files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
cd&lt;br /&gt;
mkdir public_html&lt;br /&gt;
# Opt-in to letting the webserver access your home directory:&lt;br /&gt;
setfacl -m g:public_html:x ~/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then access the data from a web browser using the URL:&lt;br /&gt;
&lt;br /&gt;
http://people.beocat.ksu.edu/~your_user_name&lt;br /&gt;
&lt;br /&gt;
This will show a list of the files you have in your public_html subdirectory.&lt;br /&gt;
&lt;br /&gt;
===Globus===&lt;br /&gt;
&lt;br /&gt;
We have a page here dedicated to [[Globus]]&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of Slurm's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to sbatch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  --array=n[-m[:s]]&lt;br /&gt;
     Submits a so-called Array Job, i.e. an array of identical tasks being differentiated only by an index number and being treated by Slurm&lt;br /&gt;
     almost like a series of jobs. The option argument to --array specifies the number of array job tasks and the index number which will be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SLURM_ARRAY_TASK_ID. The option&lt;br /&gt;
     arguments n, and m will be available through the environment variables SLURM_ARRAY_TASK_MIN and SLURM_ARRAY_TASK_MAX.&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or a range with a step size.&lt;br /&gt;
     Hence, the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SLURM_ARRAY_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array jobs are commonly used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks follow a slightly different naming convention (which can be controlled in the same way as mentioned above).&lt;br /&gt;
 &lt;br /&gt;
     slurm-%A_%a.out&lt;br /&gt;
&lt;br /&gt;
     %A is the SLURM_ARRAY_JOB_ID, and %a is the SLURM_ARRAY_TASK_ID&lt;br /&gt;
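To preview which task IDs a range with a step size will generate, you can mimic the n-m:s expansion locally with '''seq''' (this is just a sanity check on your own machine; Slurm does the real expansion):&lt;br /&gt;

```shell
# --array=2-10:2 expands to task IDs 2 4 6 8 10.
# seq FIRST STEP LAST prints the same sequence.
seq 2 2 10
```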
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses, one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1, that I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable, and submit each script. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=50-200:50&lt;br /&gt;
RUNSIZE=$SLURM_ARRAY_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and Slurm understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to take each line of my dataset and generate a new submit script and submit the job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
DATASET=dataset.txt&lt;br /&gt;
scriptnum=0&lt;br /&gt;
while read LINE&lt;br /&gt;
do&lt;br /&gt;
    echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
    sbatch ${scriptnum}.sh&lt;br /&gt;
    scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as sbatch has to verify each job as it is submitted. This can be done easily with array jobs, as long as you know the number of lines in the dataset. You can get this number with '''wc -l dataset.txt'''; in this case let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses command substitution via backticks (`), and has the sed command print only line number $SLURM_ARRAY_TASK_ID of the file dataset.txt.&lt;br /&gt;
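To see the sed selection on its own, here is a standalone sketch using a throwaway file (the filename and contents are made up for illustration):&lt;br /&gt;

```shell
# Build a small stand-in for dataset.txt, then print just line 2,
# the same way the submit script selects line $SLURM_ARRAY_TASK_ID.
printf 'alpha\nbeta\ngamma\n' > /tmp/demo_dataset.txt
sed -n "2p" /tmp/demo_dataset.txt   # prints: beta
```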
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so sbatch doesn't have to verify as many.&lt;br /&gt;
&lt;br /&gt;
To give you an idea of the time saved: submitting 1 job takes 1-2 seconds, so submitting 5000 jobs takes 5,000-10,000 seconds, or roughly 1.5-3 hours.&lt;br /&gt;
&lt;br /&gt;
== Checkpoint/Restart using DMTCP ==&lt;br /&gt;
&lt;br /&gt;
DMTCP is Distributed Multi-Threaded CheckPoint software that will checkpoint your application without modification, and&lt;br /&gt;
can be set up to automatically restart your job from the last checkpoint if for example the node you are running on fails.  &lt;br /&gt;
This has been tested successfully&lt;br /&gt;
on Beocat for some scalar and OpenMP codes, but has failed on all MPI tests so far.  We would like to encourage users to&lt;br /&gt;
try DMTCP out if their non-MPI jobs run longer than 24 hours.  If you want to try this, please contact us first since we are still&lt;br /&gt;
experimenting with DMTCP.&lt;br /&gt;
&lt;br /&gt;
The sample job submission script below shows how dmtcp_launch is used to start the application, then dmtcp_restart is used to start from a checkpoint if the job has failed and been rescheduled.&lt;br /&gt;
If you are putting this in an array script, then add the Slurm array task ID to the end of the checkpoint directory name&lt;br /&gt;
like &amp;lt;B&amp;gt;ckptdir=ckpt-$SLURM_ARRAY_TASK_ID&amp;lt;/B&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --mem=50G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS/2016.4-foss-2017beocatb-hybrid&lt;br /&gt;
  module load DMTCP&lt;br /&gt;
  module list&lt;br /&gt;
  &lt;br /&gt;
  ckptdir=ckpt&lt;br /&gt;
  mkdir -p $ckptdir&lt;br /&gt;
  export DMTCP_CHECKPOINT_DIR=$ckptdir&lt;br /&gt;
  &lt;br /&gt;
  if ! ls -1 $ckptdir | grep -c dmtcp_restart_script &amp;gt; /dev/null&lt;br /&gt;
  then&lt;br /&gt;
     echo &amp;quot;Using dmtcp_launch to start the app the first time&amp;quot;&lt;br /&gt;
     dmtcp_launch --no-coordinator mpirun -np 1 -x OMP_NUM_THREADS=4 gmx_mpi mdrun -nsteps 50000 -ntomp 4 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  else&lt;br /&gt;
     echo &amp;quot;Using dmtcp_restart from $ckptdir to continue from a checkpoint&amp;quot;&lt;br /&gt;
     dmtcp_restart $ckptdir/*.dmtcp&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
You will need to run several tests to verify that DMTCP is working properly with your application.&lt;br /&gt;
First, run a short test without DMTCP and another with DMTCP with the checkpoint interval set to 5 minutes&lt;br /&gt;
by adding the line &amp;lt;B&amp;gt;export DMTCP_CHECKPOINT_INTERVAL=300&amp;lt;/B&amp;gt; to your script.  Then use &amp;lt;B&amp;gt;kstat -d 1&amp;lt;/B&amp;gt; to&lt;br /&gt;
check that the memory in both runs is close to the same.  Also use this information to calculate the time &lt;br /&gt;
that each checkpoint takes.  In most cases I've seen times less than a minute for checkpointing that will normally&lt;br /&gt;
be done once each hour.  If your application is taking more time, let us know.  Sometimes this can be sped up&lt;br /&gt;
by simply turning off compression by adding the line &amp;lt;B&amp;gt;export DMTCP_GZIP=0&amp;lt;/B&amp;gt;.  Make sure to remove the&lt;br /&gt;
line where you set the checkpoint interval to 300 seconds so that the default time of once per hour will be used.&lt;br /&gt;
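For the test runs described above, the two tuning lines look like this; set the short interval only while testing and remove it afterwards so the hourly default applies:&lt;br /&gt;

```shell
export DMTCP_CHECKPOINT_INTERVAL=300   # testing only: checkpoint every 5 minutes
export DMTCP_GZIP=0                    # optional: skip compression if checkpoints are slow
```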
&lt;br /&gt;
After verifying that your code completes using DMTCP and does not take significantly more time or memory, you&lt;br /&gt;
will need to start a run then &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; it after the first checkpoint, then resubmit the same script to make &lt;br /&gt;
sure that it restarts and runs to completion.  If you are working with an array job script, the last step is to try a few&lt;br /&gt;
array tasks at once to make sure there is no conflict between the jobs.&lt;br /&gt;
&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'srun'. srun uses the exact same command-line arguments as sbatch, but you need to add the following arguments at the end: &amp;lt;tt&amp;gt;--pty bash&amp;lt;/tt&amp;gt;. If no node is available with your resource requirements, srun will tell you something like the following:&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
 srun: Force Terminated job 217&lt;br /&gt;
 srun: error: CPU count per node can not be satisfied&lt;br /&gt;
 srun: error: Unable to allocate resources: Requested node configuration is not available&lt;br /&gt;
Note that, like sbatch, your interactive job will time out after your allotted time has passed.&lt;br /&gt;
&lt;br /&gt;
== Connecting to an existing job ==&lt;br /&gt;
You can connect to an existing job using &amp;lt;B&amp;gt;srun&amp;lt;/B&amp;gt; in the same way that the &amp;lt;B&amp;gt;MonitorNode&amp;lt;/B&amp;gt; command&lt;br /&gt;
allowed us to in the old cluster.  This is essentially like using ssh to get into the node where your job is running which&lt;br /&gt;
can be very useful in allowing you to look at files in /tmp/job# or in running &amp;lt;B&amp;gt;htop&amp;lt;/B&amp;gt; to view the &lt;br /&gt;
activity level for your job.&lt;br /&gt;
&lt;br /&gt;
 srun --jobid=# --pty bash                        where '#' is the job ID number&lt;br /&gt;
&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not support users to modify job parameters once the job has been submitted. It can be done, but there are numerous catches, and all of the variations can be a bit problematic; it is normally easier to simply delete the job (using '''scancel ''jobid''''') and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
&lt;br /&gt;
As it is unsupported, this is an exercise left to the reader. A starting point is &amp;lt;tt&amp;gt;man scontrol&amp;lt;/tt&amp;gt;.&lt;br /&gt;
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;--gres=killable:1&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
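In a submit script this is just one more #SBATCH line; the script contents and resource values below are placeholders:&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --gres=killable:1
# ...the rest of your job commands...
```

On the command line, the equivalent is '''sbatch --gres=killable:1 MyScript.sh'''.&lt;br /&gt;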
&lt;br /&gt;
''Note: This is a submit-time only request, it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;scancel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
Some users are members of projects that have contributed to Beocat. When those users have contributed nodes, the group gets access to a &amp;quot;partition&amp;quot; giving its members priority on those nodes.&lt;br /&gt;
&lt;br /&gt;
In most situations, the scheduler will automatically add those priority partitions to the jobs as submitted. You should not need to include a partition list in your job submission.&lt;br /&gt;
&lt;br /&gt;
There are currently just a few exceptions that we will not automatically add:&lt;br /&gt;
* ksu-chem-mri.q&lt;br /&gt;
* ksu-gen-gpu.q&lt;br /&gt;
* ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
If you have access to any of the non-automatic partitions, and need the resources in one of them, you can alter your &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines to include the partition:&lt;br /&gt;
 #SBATCH --partition=ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
Otherwise, you shouldn't modify the partition line at all unless you really know what you're doing.&lt;br /&gt;
&lt;br /&gt;
== Graphical Applications ==&lt;br /&gt;
Some applications are graphical and need to have some graphical input/output. We currently accomplish this with X11 forwarding or [[OpenOnDemand]]&lt;br /&gt;
=== OpenOnDemand ===&lt;br /&gt;
[[OpenOnDemand]] is likely the easier and more performant way to run a graphical application on the cluster.&lt;br /&gt;
# Visit [https://ondemand.beocat.ksu.edu/ ondemand] and log in with your cluster credentials.&lt;br /&gt;
# Check the &amp;quot;Interactive Apps&amp;quot; dropdown. We may have a workflow ready for you; if not, choose the desktop.&lt;br /&gt;
# Select the resources you need.&lt;br /&gt;
# Select launch.&lt;br /&gt;
# A job is now submitted to the cluster, and once it starts you'll see a Connect button.&lt;br /&gt;
# Use the app as needed. If using the desktop, start your graphical application.&lt;br /&gt;
&lt;br /&gt;
=== X11 Forwarding ===&lt;br /&gt;
==== Connecting with an X11 client ====&lt;br /&gt;
===== Windows =====&lt;br /&gt;
If you are running Windows, we recommend MobaXTerm as your file/ssh manager, because it is one relatively simple tool that does everything. MobaXTerm also automatically connects with X11 forwarding enabled.&lt;br /&gt;
===== Linux/OSX =====&lt;br /&gt;
Both Linux and OSX can connect in an X11 forwarding mode. Linux will have all of the tools you need installed already, OSX will need [https://www.xquartz.org/ XQuartz] installed.&lt;br /&gt;
&lt;br /&gt;
Then you will need to change your 'ssh' command slightly:&lt;br /&gt;
&lt;br /&gt;
 ssh -Y eid@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
The '''-Y''' argument tells ssh to setup X11 forwarding.&lt;br /&gt;
==== Starting a Graphical Job ====&lt;br /&gt;
All graphical jobs, by design, must be interactive, so we'll use the srun command. On a headnode, we run the following:&lt;br /&gt;
 # load an X11 enabled application&lt;br /&gt;
 module load Octave&lt;br /&gt;
 # start an X11 job, sbatch arguments are accepted for srun as well, 1 node, 1 hour, 1 gb of memory&lt;br /&gt;
 srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 octave --gui&lt;br /&gt;
&lt;br /&gt;
Because these jobs are interactive, they may not be able to run at all times, depending on how busy the scheduler is at any point in time. '''--pty --x11''' are required arguments setting up the job, and '''octave --gui''' is the command to run inside the job.&lt;br /&gt;
&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The sacct tool will read Slurm's accounting database and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== sacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
sacct -j 1122334455 -l&lt;br /&gt;
# if you don't know the job id, you can look at your jobs started since some day:&lt;br /&gt;
sacct -S 2017-01-01&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
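The full &amp;lt;tt&amp;gt;-l&amp;lt;/tt&amp;gt; output is very wide. As a sketch, you can trim it to a few useful fields with &amp;lt;tt&amp;gt;--format&amp;lt;/tt&amp;gt; (the job id here is a placeholder):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# show only the fields commonly needed for debugging&lt;br /&gt;
sacct -j 1122334455 --format=JobID,JobName,Elapsed,State,ExitCode,ReqMem,MaxRSS&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;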
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|218||218||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||12||00:00:00||FAILED||2:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=12,mem=1G,node=1||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.batch||218.batch||batch||||137940K||dwarf37||0||137940K||1576K||dwarf37||0||1576K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||1.36G||0||0||0||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.0||218.0||qqqqstat||||204212K||dwarf37||0||204212K||1420K||dwarf37||0||1420K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||196.52M||Unknown||Unknown||Unknown||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the columns showing Elapsed and State, you can see that they show 00:00:00 and FAILED respectively. This means that the job started and then promptly ended. This points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|220||220||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:01:27||TIMEOUT||0:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=1,mem=1G,node=1||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.batch||220.batch||batch||||370716K||dwarf37||0||370716K||7060K||dwarf37||0||7060K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:28||CANCELLED||0:15||1.23G||0||0||0||1Gn||0||0.16M||dwarf37||0||0.16M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.0||220.0||sleep||||204212K||dwarf37||0||107916K||1000K||dwarf37||0||620K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:27||CANCELLED||0:15||1.54G||Unknown||Unknown||Unknown||1Gn||0||0.05M||dwarf37||0||0.05M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, you can see some pointers to the issue. The job ran out of time (TIMEOUT) and then was killed (CANCELLED).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|221||221||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:00:00||CANCELLED by 0||0:0||||Unknown||Unknown||Unknown||1Mn||||||||||||||||||||||||cpu=1,mem=1M,node=1||cpu=1,mem=1M,node=1&lt;br /&gt;
|-&lt;br /&gt;
|221.batch||221.batch||batch||||137940K||dwarf37||0||137940K||1144K||dwarf37||0||1144K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:00:01||CANCELLED||0:15||2.62G||0||0||0||1Mn||0||0||dwarf37||65534||0||0||dwarf37||65534||0||||||||cpu=1,mem=1M,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we see the job was &amp;quot;CANCELLED by 0&amp;quot;. Looking at the AllocTRES column for the allocated resources, we see that 1MB of memory was granted. Combining that with the &amp;quot;MaxRSS&amp;quot; column, we see that the memory granted was less than the memory the job tried to use, so the job was &amp;quot;CANCELLED&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=981</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=981"/>
		<updated>2024-04-29T22:22:29Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Interested in Web-Based computational biology research? Check out Galaxy! */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and their collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people, is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of CentOS Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
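&lt;br /&gt;
For example, from a terminal on Linux or macOS (replace &amp;lt;tt&amp;gt;eid&amp;lt;/tt&amp;gt; with your K-State eID; the file names are placeholders):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# log in to a headnode&lt;br /&gt;
ssh eid@headnode.beocat.ksu.edu&lt;br /&gt;
# copy a file to your Beocat home directory&lt;br /&gt;
scp input.dat eid@headnode.beocat.ksu.edu:~/&lt;br /&gt;
# copy results back to the current directory on your local machine&lt;br /&gt;
scp eid@headnode.beocat.ksu.edu:~/results.dat .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;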
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering the fine-tuning. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
==== Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] ====&lt;br /&gt;
==== Read about  [[Installed software]] and languages ====&lt;br /&gt;
==== Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] ====&lt;br /&gt;
==== Run Interactive Jobs! [[OpenOnDemand]] ====&lt;br /&gt;
==== [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] ====&lt;br /&gt;
&lt;br /&gt;
==== Big Data course on Beocat! [[BigDataOnBeocat]] ====&lt;br /&gt;
==== Interested in Web-Based computational biology research? Check out [[GalaxyDocs|Galaxy!]] ====&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which uses the information in that script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
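&lt;br /&gt;
Whatever the language, the pattern is the same. A minimal sketch of a job script (the module name, script name, and resource values are placeholders; see [[SlurmBasics]] for details):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=1G&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
# load the software your code needs, then run it&lt;br /&gt;
module load Python&lt;br /&gt;
python my_script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Submit the script with &amp;lt;tt&amp;gt;sbatch myjob.sh&amp;lt;/tt&amp;gt;; cancel it with &amp;lt;tt&amp;gt;scancel jobid&amp;lt;/tt&amp;gt;.&lt;br /&gt;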
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details of your problem (e.g. job ids, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [https://libera.chat/guides/connect Libera chat servers] in the channel #beocat. This is useful ''especially'' if you have a quick question; you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' in your message, and it should grab our attention. [https://web.libera.chat/#beocat Available from a web browser here.]&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session as mentioned in our calendar down below. Alternatively, we can often schedule a time to meet with you individually. You just need to send us an e-mail and provide us with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you email us at beocat@cs.ksu.edu please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributions to Beocat will prioritize your access to Beocat. In general, users contribute nodes to Beocat (aka the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority for the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will automatically halt and reschedule those jobs immediately to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources can allow users to run jobs if they are not able to get enough&lt;br /&gt;
access on Beocat, but they are especially useful for when we don't have the needed&lt;br /&gt;
resources on Beocat like access to 4 TB nodes on Bridges2, or more 64-bit&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[ACCESS|here]] to see what resources &lt;br /&gt;
are available and for directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems that runs outside OSG jobs when our users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]]&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]]&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OpenOnDemand&amp;diff=971</id>
		<title>OpenOnDemand</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OpenOnDemand&amp;diff=971"/>
		<updated>2024-04-05T20:42:12Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* With virtualenv */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OpenOnDemand ==&lt;br /&gt;
OpenOnDemand is a platform for running computational tasks on a cluster from a web browser. If those&lt;br /&gt;
tasks are interactive, it provides the ability to interact with them once the task has started its execution.&lt;br /&gt;
OpenOnDemand has an &amp;quot;App&amp;quot; based plugin system for adding new types of computational tasks and&lt;br /&gt;
interactivity.&lt;br /&gt;
&lt;br /&gt;
One of the greatest benefits of this system is remote access to large machines for computational tasks, without needing to learn a difficult command-line interface.&lt;br /&gt;
&lt;br /&gt;
Our installation is available at https://ondemand.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
=== File Management ===&lt;br /&gt;
File management can be accessed through the Files dropdown in the dashboard.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Files Dropdown.png|Files Dropdown]]&lt;br /&gt;
&lt;br /&gt;
Once you click on Home Directory, you can manage your files: upload, download, rename, edit, and view them.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Files Launch.png|The OpenOnDemand File management application]]&lt;br /&gt;
&lt;br /&gt;
If you're looking for a way to get your files into and out of OneDrive, Google Drive, or other cloud providers, you may be interested in taking a look at our documentation for [[Onedrive Data Transfer]].&lt;br /&gt;
&lt;br /&gt;
=== Job Management ===&lt;br /&gt;
A cluster isn't much of a cluster if it can't run jobs for you to look up later. OpenOnDemand has a robust job management application built in.&lt;br /&gt;
&lt;br /&gt;
It is accessible from the Jobs dropdown in the dashboard.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS DROPDOWN ACTIVE.png|Screenshot of the Jobs dropdown in the openondemand dashboard]]&lt;br /&gt;
==== View Active Jobs ====&lt;br /&gt;
You can view your active jobs and get more information about them from the Active Jobs option in the Jobs dropdown&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS ACTIVE.png|Active jobs app in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
==== Compose Jobs ====&lt;br /&gt;
You can create new jobs through the &amp;quot;Job Composer&amp;quot; in the jobs dropdown.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS COMPOSER NEW.png|Screenshot showing the ability to create new jobs within the ood job composer]]&lt;br /&gt;
&lt;br /&gt;
If you create a new job from a template, you're given a list of templates to use:&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS COMPOSER NEW FROM TEMPLATE.png|Screenshot of some example templates for jobs within openondemand]]&lt;br /&gt;
&lt;br /&gt;
If you choose the default template, you can run it as-is, or edit the job script to make it do what you would like.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS COMPOSER SUBMIT.png|Screenshot showing the ability to submit or edit jobs within the Job composer in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
=== Interactive Applications ===&lt;br /&gt;
We have a number of interactive applications available through OpenOnDemand:&lt;br /&gt;
* Beocat Desktop&lt;br /&gt;
* [https://www.comsol.com/ COMSOL]&lt;br /&gt;
* [https://www.gnu.org/software/octave/ Octave]&lt;br /&gt;
* [https://www.ks.uiuc.edu/Research/vmd/ VMD]&lt;br /&gt;
* [https://www.wolfram.com/mathematica/ Mathematica] Please note, this is from a site license limited to KSU students, faculty and staff.&lt;br /&gt;
* [https://afni.nimh.nih.gov/ AFNI]&lt;br /&gt;
* [https://coder.com/ CodeServer] is a cloud-native version of VS Code that runs on the compute nodes. This is useful because VS Code's remote connections cannot be used with Beocat. VS Code is also known as VSCode or Visual Studio Code.&lt;br /&gt;
* [https://jupyter.org/ Jupyter]&lt;br /&gt;
* [https://www.rstudio.com/products/rstudio-server/ RStudio]&lt;br /&gt;
==== RStudio ====&lt;br /&gt;
RStudio is one of the interactive applications that we've enabled for use within OpenOnDemand&lt;br /&gt;
&lt;br /&gt;
You launch interactive apps through the &amp;quot;Interactive Apps&amp;quot; dropdown.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Interactive Apps Dropdown.png|Interactive apps dropdown in the dashboard]]&lt;br /&gt;
&lt;br /&gt;
Once you click on RStudio, you'll be brought to a page allowing you to specify requirements for your RStudio run, e.g. memory, cores, and runtime.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Interactive RStudio Launch.png|Screenshot showing the options for submitting an RStudio job in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
Once the job is submitted, the scheduler will take it and run it when space is available. Once the job is running, the &amp;quot;My Interactive Sessions&amp;quot; page will look like this:&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Interactive RStudio Connection.png|Screenshot showing the ability to connect to an RStudio job in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
From there, you can connect to RStudio and it will bring you to a familiar interface.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood RStudio.png|Screenshot showing RStudio through OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
==== Jupyter ====&lt;br /&gt;
Like RStudio above, click on Interactive Apps and then go to Jupyter. From there, you'll have a form that allows you to specify requirements for your Jupyter run.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JUPYTER LAUNCH.png|Screenshot of options to launch Jupyter]]&lt;br /&gt;
&lt;br /&gt;
Once the job is launched it will take you to a page where you can connect to your running Jupyter service&lt;br /&gt;
&lt;br /&gt;
[[File:Ood INTERACTIVE APPS JUPYTER.png|Screenshot of connection option for jupyter]]&lt;br /&gt;
&lt;br /&gt;
It will then take you to the interface you chose, below is the JupyterLab interface:&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JUPYTER LAB.png|Screenshot of JupyterLab through OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
Jupyter Kernels currently supported:&lt;br /&gt;
* Python 2&lt;br /&gt;
* Python 3&lt;br /&gt;
* R&lt;br /&gt;
* Octave&lt;br /&gt;
* Sage&lt;br /&gt;
&lt;br /&gt;
Julia support will come, but each user will need to set it up individually. There is currently a large bug centered around Julia, CentOS/RHEL, and our shared filesystem (Ceph).&lt;br /&gt;
&lt;br /&gt;
===== Extra Python libraries =====&lt;br /&gt;
====== Without extra setup ======&lt;br /&gt;
You may need to install extra Python libraries to use with your Jupyter Python kernels. For instance, this is how you'd install tobler:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
!pip install --user tobler&lt;br /&gt;
&lt;br /&gt;
# Sometimes Jupyter notebook needs to then be told how to find the libraries you've installed in that manner.&lt;br /&gt;
# Your username should be put in place of the &amp;lt;PUT_YOUR_USERNAME_HERE&amp;gt; text.&lt;br /&gt;
# this will need to change if you are not using a 3.7 kernel&lt;br /&gt;
import sys&lt;br /&gt;
sys.path.append(&amp;quot;/homes/&amp;lt;PUT_YOUR_USERNAME_HERE&amp;gt;/.local/lib/python3.7/site-packages&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====== With python virtual environments ======&lt;br /&gt;
Sometimes it is useful to have separation between your various projects, for instance being able to use multiple versions of a python library in different projects.&lt;br /&gt;
&lt;br /&gt;
You can setup a virtual environment (or many) for use with our Jupyter environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# First we'll activate the ability to use the ondemand modules:&lt;br /&gt;
module use /opt/beocat/ondemand_modules&lt;br /&gt;
&lt;br /&gt;
# Then we can list the available jupyter_python modules (so we can use one to create the virtual environment)&lt;br /&gt;
module avail jupyter_python&lt;br /&gt;
&lt;br /&gt;
# Load the version you would like. Ideally, use a jupyter_python module for this; otherwise the virtual environment itself will need a decent number of jupyter libraries installed into it.&lt;br /&gt;
module load jupyter_python/3.8.6-TensorFlow-2.4.1&lt;br /&gt;
&lt;br /&gt;
# If you'd like to see what libraries this actually loaded, you can check it with the following:&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
# Create a virtual environment, activate it, and install any libraries you need.&lt;br /&gt;
python -m venv --system-site-packages /homes/mozes/virtualenvs/testing_ondemand_jupyter&lt;br /&gt;
. /homes/mozes/virtualenvs/testing_ondemand_jupyter/bin/activate&lt;br /&gt;
pip install # insert needed libraries here&lt;br /&gt;
&lt;br /&gt;
# here we create a directory to hold the configuration files for telling our Jupyter environment about your virtual environment&lt;br /&gt;
mkdir -p ~/ondemand/jupyter_kernel_configs&lt;br /&gt;
&lt;br /&gt;
# Now we need to create a configuration file to instruct Jupyter to find this virtual environment&lt;br /&gt;
nano ~/ondemand/jupyter_kernel_configs/my_environment_name.sh&lt;br /&gt;
&lt;br /&gt;
# in that file should be lines like the following:&lt;br /&gt;
NAME=&amp;quot;testing_ondemand_virtualenv&amp;quot;&lt;br /&gt;
VIRTUAL_ENV=&amp;quot;/homes/mozes/virtualenvs/testing_ondemand_jupyter&amp;quot;&lt;br /&gt;
MODULES=&amp;quot;jupyter_python/3.8.6-TensorFlow-2.4.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# of course, you should provide your own name and path to the virtual environment. Please don't put spaces in the name.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Once you start a new Jupyter session, it should list a new kernel option that uses your virtual environment.&lt;br /&gt;
&lt;br /&gt;
====== With conda ======&lt;br /&gt;
Conda environments should automatically show up if ''conda'' is in the PATH.&lt;br /&gt;
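&lt;br /&gt;
As a sketch, assuming &amp;lt;tt&amp;gt;conda&amp;lt;/tt&amp;gt; is already installed and on your PATH (the environment name and package list below are placeholders), an environment that includes &amp;lt;tt&amp;gt;ipykernel&amp;lt;/tt&amp;gt; should be usable as a Jupyter kernel:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# create an environment with ipykernel plus whatever libraries you need&lt;br /&gt;
conda create -n my_project python ipykernel numpy&lt;br /&gt;
conda activate my_project&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;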
&lt;br /&gt;
==== Beocat Desktop ====&lt;br /&gt;
Sometimes, you just need a Desktop somewhere to run your graphical applications. This can be done through the Beocat Desktop option in the Interactive Apps dropdown on the dashboard.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD DESKTOP LAUNCH.png|Screenshot of the options to launch a graphical desktop through OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
Once launched, you'll be able to connect to the desktop through VNC from the &amp;quot;My Interactive Sessions&amp;quot; tab in OpenOnDemand.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD INTERACTIVE APPS DESKTOP.png|Screenshot of VNC options for Desktop in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
Once you've launched the Beocat Desktop, you can interact with it like a normal desktop through the browser.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD DESKTOP VNC.png|Screenshot of VNC Beocat Desktop]]&lt;br /&gt;
&lt;br /&gt;
=== Shell Access ===&lt;br /&gt;
Some things, no matter how hard we try, are easier to do via the command line. OpenOnDemand also gives you a way to handle those cases via the Clusters dropdown.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Clusters Dropdown.png|A screenshot showing the clusters dropdown from the OpenOnDemand dashboard]]&lt;br /&gt;
&lt;br /&gt;
You can choose an individual headnode, if need be, or you can choose &amp;quot;Beocat Shell Access&amp;quot; to be given one of the headnodes at random. Once chosen, you should have a familiar command-line experience.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Clusters Launch.png|A Screenshot showing shell access through OpenOnDemand]]&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=970</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=970"/>
		<updated>2024-04-05T20:40:47Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Setting up your virtual environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains, and multiple versions of each, to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; fosscuda:    GNU Compiler Collection (GCC) based compiler toolchain based on FOSS with CUDA support.&lt;br /&gt;
; gmvapich2:    GNU Compiler Collection (GCC) based compiler toolchain, including MVAPICH2 for MPI support. '''DEPRECATED'''&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; goolfc:    GCC based compiler toolchain ''with CUDA support'', and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. '''DEPRECATED'''&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Recently made free by Intel; note that we have less experience with Intel MPI than with OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with the 'module list' command:&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore, which depends on GCC. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
With software we provide, the toolchain used to compile is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, inconsistent things may happen.&lt;br /&gt;
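&lt;br /&gt;
If you do need to switch toolchains within a session or a job script, a safe pattern is to reset your module environment first; a minimal sketch (the toolchain name here is just an example):&lt;br /&gt;

```shell
# Clear everything currently loaded so modules from the old toolchain can't linger
module reset
# Load the toolchain you actually want
module load foss
# Confirm that only the expected modules are loaded
module list
```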
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide many versions. You are most likely better off loading a toolchain or application directly to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'.&lt;br /&gt;
&lt;br /&gt;
The first step to run an MPI application is to load one of the compiler toolchains that include OpenMPI.  You normally will just need to load the default version as below.  If your code needs access to NVIDIA GPUs, you'll need the CUDA-enabled toolchain (fosscuda) instead.  Some codes are also picky about which versions of the underlying GNU or Intel compilers are used.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but those may depend on which compiler toolchain you choose.  Some are based on the Intel compilers, so you'd need to use optimizations for the underlying icc or ifort compilers they call, and some are GNU based, so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
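&lt;br /&gt;
As a sketch of what that looks like in practice (the flags are illustrative, not recommendations for any particular code):&lt;br /&gt;

```shell
# GNU-based toolchain (e.g. foss): mpicc wraps gcc, so gcc flags pass through
mpicc -O3 -o my_code.x my_code.c
# Intel-based toolchain (e.g. iomkl): mpicc wraps icc, so use icc flags instead
mpicc -O2 -xHost -o my_code.x my_code.c
```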
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running GROMACS, an MPI-based code for classically simulating large numbers of atoms.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the GROMACS module and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete GROMACS simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all cores requested, which in this case is 4 cores on a single node.  You will often just need to guess at the memory size for your code, then check on the memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts, then manually load the modules needed, to ensure each run is using the modules it needs.  If you don't do this, the job will simply use the modules you have loaded when you submit the script, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use significant amounts of time.  This way I can track the amount of time used in each section of the job script.  This can prove very useful if your job script copies large data files around at the start, for example, allowing you to see how much time was used for each stage of the job if it runs longer than expected.&lt;br /&gt;
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; where the 7 # signs are the 7 digit job ID number.  If you need to cancel your job, use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the 7 digit job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
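&lt;br /&gt;
If you would rather not go through the interactive prompts (for example, when setting things up from a script), the same install and test-load can be done from the shell with Rscript; the mirror URL below is just an example:&lt;br /&gt;

```shell
module load R
# Install the package non-interactively, then test-load it; replace PACKAGENAME as needed
Rscript -e 'install.packages("PACKAGENAME", repos = "https://cloud.r-project.org")'
Rscript -e 'library("PACKAGENAME")'
```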
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; file, where the # signs are the 7 digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java/'&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command Python is loaded.  Once you log off and log back on, Python will no longer be loaded, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that their [https://docs.python.org/3/library/venv.html documentation] has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# or you could use 'python -m venv test'&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# if you don't use '--system-site-packages' then the virtual environment is completely isolated from our other provided packages and everything it needs it will have to build and install within itself.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command your virtual environment is activated.  Once you log off and log back on, the virtual environment will no longer be active, so you must rerun this command every time you log on.)&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
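&lt;br /&gt;
After activating, it's worth confirming the shell is really using the venv's interpreter before you install anything; a quick sanity check (assuming the venv named 'test' from above):&lt;br /&gt;

```shell
# Both of these should point inside ~/virtualenvs/test while the venv is active
which python
python -c "import sys; print(sys.prefix)"
```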
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment 'test'&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
You can check the available versions and load one that uses the Python version you would like:&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
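&lt;br /&gt;
For reference, a minimal mpi4py script to run with a job script like the one above might look like this (the filename is an example; mpi4py itself comes from the SciPy-bundle module):&lt;br /&gt;

```shell
# Write a tiny MPI hello-world; each rank reports itself
cat > mpi_hello.py <<'EOF'
from mpi4py import MPI

comm = MPI.COMM_WORLD
print(f"Hello from rank {comm.Get_rank()} of {comm.Get_size()}")
EOF
# Inside a job, mpirun picks up the number of tasks from the Slurm allocation
mpirun python mpi_hello.py
```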
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow provided by pip is often completely broken on any system that is not running a recent version of Ubuntu. Beocat (like most HPC systems) does not use Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other python libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the Python module, as loading TensorFlow will load the correct version of Python behind the scenes. The only change you need to make is to use '--system-site-packages' when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
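&lt;br /&gt;
Once the environment exists, a quick way to confirm it can see our TensorFlow install (a sanity check, assuming a venv named 'test' as above):&lt;br /&gt;

```shell
module load TensorFlow
source ~/virtualenvs/test/bin/activate
# Should print the version of the TensorFlow module you loaded
python -c "import tensorflow as tf; print(tf.__version__)"
```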
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default. Normally, it is meant to be used in an interactive manner. Interactive codes can be limiting and/or problematic when used in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
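&lt;br /&gt;
As a rough sketch of what such a non-interactive job can look like, the script below executes every cell of a notebook with nbconvert; the module and file names are examples, and the linked template above is the authoritative version:&lt;br /&gt;

```shell
#!/bin/bash -l
#SBATCH --mem=8G
#SBATCH --time=1:00:00
#SBATCH --job-name=jupyter-batch

module reset
module load jupyter_python
# Run the notebook top to bottom and save the executed copy under a new name
jupyter nbconvert --to notebook --execute --output executed_notebook.ipynb my_notebook.ipynb
```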
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large-scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core for 24 hours and, if available, will drop&lt;br /&gt;
you into the bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample python-based Spark code you can try out that came from the &lt;br /&gt;
exercises and homework from the PSC Spark workshop.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and install the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package &lt;br /&gt;
before you run the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
python -m venv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Spark module,&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark, &lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark and activate the Python virtual environment&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt;.&lt;br /&gt;
You will need to do this manually as in the sample python code when you want&lt;br /&gt;
to submit jobs through the Slurm queue.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl tracks the stable releases of perl. Unfortunately there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or a newer version, you can load one with the module command. To see what versions of perl we provide, you can use 'module avail Perl/'&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some perl modules fail to realize they shouldn't be installed globally. Usually, you'll notice this when they try to run something with 'sudo'. Unfortunately we do not grant sudo access to anyone other than Beocat system administrators. Usually, this can be worked around by putting the following in your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file (at the bottom). Once this is in place, you should log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (D-H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now lets do some actual work. &lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
'module avail Octave/'&lt;br /&gt;
&lt;br /&gt;
The command above shows the available versions of the 64-bit build of Octave, which you can then load&lt;br /&gt;
with 'module load'.  Octave can then be used to work with MatLab codes on the head node and to submit&lt;br /&gt;
jobs to the compute nodes through the sbatch scheduler.  Octave is made to run MatLab code, but it does&lt;br /&gt;
have limitations and does not support everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, you will be expected to develop your MatLab code&lt;br /&gt;
with Octave or elsewhere on a laptop or departmental server.  Once you're ready to do large runs, you&lt;br /&gt;
move your code to Beocat, compile the MatLab code into an executable, and submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MatLab compiler, you need to load the MATLAB module to compile code and&lt;br /&gt;
load the mcr module to run the resulting MatLab executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks out the mcc compiler for a minimum of 30 minutes, so if another user compiles a code,&lt;br /&gt;
you unfortunately may need to wait up to 30 minutes to compile your own code.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files - compiled C and C++ code with MatLab bindings - you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the MatLab path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and help document the steps to compile your MatLab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab, so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere or using Octave on Beocat; then you can compile the MatLab code into an executable and run it without&lt;br /&gt;
limits on Beocat.&lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, when loaded, will stay loaded for the duration of your session or until they are unloaded.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'' you have likely tried to load two pieces of software that have not been tested together. There may be serious issues with using either piece of software while you're in this state: missing libraries, non-functional applications. If you encounter issues, unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Often associated with Docker or Kubernetes, containers allow you to package an entire software runtime platform and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.2/index.html Apptainer] is a container runtime that is designed for HPC environments. It can convert docker containers to its own format, and can be used within a job on Beocat. It is a very broad topic and we've made the decision to point you to the upstream documentation, as it is much more likely that they'll have up to date and functional instructions to help you utilize containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=969</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=969"/>
		<updated>2024-04-05T20:39:37Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Spark */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains and versions of toolchains to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; fosscuda:    GNU Compiler Collection (GCC) based compiler toolchain based on FOSS with CUDA support.&lt;br /&gt;
; gmvapich2:    GNU Compiler Collection (GCC) based compiler toolchain, including MVAPICH2 for MPI support. '''DEPRECATED'''&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; goolfc:    GCC based compiler toolchain ''with CUDA support'', and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. '''DEPRECATED'''&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Recently made free by Intel, we have less experience with Intel MPI than OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with the 'module list':&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore, which depends on GCC. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
With the software we provide, the toolchain used to compile it is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, inconsistent things may happen.&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide many versions. You are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'.&lt;br /&gt;
&lt;br /&gt;
The first step to run an MPI application is to load one of the compiler toolchains that include OpenMPI.  You will normally just need to load the default version as below.  If your code needs access to nVidia GPUs, you'll need the cuda version mentioned above.  Some codes are also picky about which versions of the underlying GNU or Intel compilers are used.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but those may depend on which compiler toolchain you choose.  Some toolchains are based on the Intel compilers, so you'd use optimizations for the underlying icc or ifort compilers they call, and some are GNU based, so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running Gromacs, which is an MPI-based code to simulate large numbers of atoms classically.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the Gromacs modules and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete Gromacs simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all cores requested, which in this case is 4 cores on a single node.  You will often just need to guess at the memory size for your code, then check on the memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts and then manually load the modules needed to ensure each run is using the modules it needs.  If you don't do this, when you submit a job script it will simply use the modules you currently have loaded, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use significant amounts of time.  This way I can track the amount of time used in each section of the job script.  This can prove very useful if your job script copies large data files around at the start, for example, allowing you to see how much time was used for each stage of the job if it runs longer than expected.&lt;br /&gt;
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
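As a small sketch of the mechanism: an exported shell variable is inherited by child processes, which is also what the -x flag to mpirun arranges for tasks launched on other nodes.

```shell
# Exported variables are visible in child processes; this is how each
# MPI task launched by mpirun ends up seeing OMP_NUM_THREADS=1.
export OMP_NUM_THREADS=1
sh -c 'echo "child sees OMP_NUM_THREADS=$OMP_NUM_THREADS"'
# prints: child sees OMP_NUM_THREADS=1
```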
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; where the 7 # signs are the 7 digit job ID number.  If you need to cancel your job, use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the 7 digit job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a slurm-#.out file where '#' is the 7 digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java/'&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command Python is loaded for your current session only.  After you log off and log on again Python will not be loaded, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that their [https://docs.python.org/3/library/venv.html documentation] has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# or you could use 'virtualenv test'&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# if you don't use '--system-site-packages', the virtual environment is completely isolated from our other provided packages and will have to build and install everything it needs within itself.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command your virtual environment is activated for your current session only.  After you log off and log on again it will not be active, so you must rerun this command every time you log on.)&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
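To confirm that an environment is active, you can ask Python where its prefix is; after activation it should point inside the virtual environment. A minimal sketch using a throwaway environment (the /tmp path is hypothetical; on Beocat you would use ~/virtualenvs as above):

```shell
# Create and activate a throwaway virtual environment (hypothetical path)
python3 -m venv /tmp/venv-demo
. /tmp/venv-demo/bin/activate
# sys.prefix now points at the virtual environment, not the system Python
python -c 'import sys; print(sys.prefix)'
deactivate
```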
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment test&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
You can check the available versions and load one that uses the Python version you would like.&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow provided by pip is often completely broken on any system that is not running a recent version of Ubuntu. Beocat (and most HPC systems) does not use Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other python libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the Python module, as loading TensorFlow will load the correct version of the Python module behind the scenes. The only change you need to make is to use the '--system-site-packages' flag when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default. Normally, it is meant to be used in an interactive manner. Interactive codes can be limiting and/or problematic when used in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large-scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node, 1 core, and 10G of memory for 24 hours and, if available, will drop&lt;br /&gt;
you into the bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample python based Spark code you can try out that came from the &lt;br /&gt;
exercises and homework from the PSC Spark workshop.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and install the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package &lt;br /&gt;
before you run for the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
python -m venv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Spark module,&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark, &lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark and activate the python virtual environment&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt; for you.&lt;br /&gt;
When you submit jobs through the Slurm queue instead, you will need to create it manually,&lt;br /&gt;
as in the sample python code below.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl tracks the stable releases of perl. Unfortunately, there are some features that we do not include in the system distribution, most notably threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or to try out a newer version, you can load one with the module command. To see which versions of perl we provide, run 'module avail Perl/'&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some perl modules fail to realize they shouldn't be installed globally. Usually, you'll notice this when they try to run something with 'sudo'. Unfortunately, we do not grant sudo access to anyone other than Beocat system administrators. This can usually be worked around by putting the following at the bottom of your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file. Once it is in place, log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
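The &amp;lt;tt&amp;gt;${VAR:+:${VAR}}&amp;lt;/tt&amp;gt; idiom in the snippet above appends the previous value only when the variable is already set, which avoids leaving a stray leading or trailing colon. A standalone demonstration (/homes/alice is a placeholder path, not a real account):&lt;br /&gt;

```shell
# first assignment: PERL5LIB is unset, so nothing extra is appended
unset PERL5LIB
PERL5LIB="/homes/alice/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}"
echo "$PERL5LIB"   # prints: /homes/alice/perl5/lib/perl5
# second assignment: the old value is kept, separated by a colon
PERL5LIB="/extra/lib${PERL5LIB:+:${PERL5LIB}}"
echo "$PERL5LIB"   # prints: /extra/lib:/homes/alice/perl5/lib/perl5
```

&lt;br /&gt;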
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now lets do some actual work. &lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
You can see what versions of Octave we provide with 'module avail Octave/'.&lt;br /&gt;
&lt;br /&gt;
The 64-bit version of Octave can be loaded as in the submit script below.  Octave can then be used&lt;br /&gt;
to work with MatLab codes on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler.  Octave is made to run MatLab code, but it has limitations and does not support&lt;br /&gt;
everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, you will be expected to develop your MatLab code&lt;br /&gt;
with Octave or elsewhere on a laptop or departmental server.  Once you're ready to do large runs, you&lt;br /&gt;
move your code to Beocat, compile the MatLab code into an executable, and submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MatLab compiler, load the MATLAB module to compile code and&lt;br /&gt;
load the mcr module to run the resulting MatLab executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks the mcc compiler out for a minimum of 30 minutes, so if another user compiles a code,&lt;br /&gt;
you unfortunately may need to wait up to 30 minutes to compile your own code.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files - compiled C and C++ code with MatLab bindings - you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the Matlab path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and help document the steps to compile your Matlab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab, so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere or with Octave on Beocat; you can then compile the MatLab code into an executable and run it without&lt;br /&gt;
limits on Beocat.  &lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, when loaded, will stay loaded for the duration of your session until they are unloaded.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'' you have likely tried to load two pieces of software that have not been tested together. There may be serious issues with using either piece of software while you're in this state: missing libraries, non-functional applications. If you encounter issues, unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Often associated with Docker or Kubernetes, containers allow you to package an entire software runtime platform and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.2/index.html Apptainer] is a container runtime that is designed for HPC environments. It can convert docker containers to its own format, and can be used within a job on Beocat. It is a very broad topic and we've made the decision to point you to the upstream documentation, as it is much more likely that they'll have up to date and functional instructions to help you utilize containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=968</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=968"/>
		<updated>2024-04-05T20:38:41Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* TensorFlow */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains and versions of toolchains to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; fosscuda:    GNU Compiler Collection (GCC) based compiler toolchain based on FOSS with CUDA support.&lt;br /&gt;
; gmvapich2:    GNU Compiler Collection (GCC) based compiler toolchain, including MVAPICH2 for MPI support. '''DEPRECATED'''&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; goolfc:    GCC based compiler toolchain __with CUDA support__, and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. '''DEPRECATED'''&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Intel recently made this toolchain free; we have less experience with Intel MPI than with OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with the 'module list':&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore, which depends on GCC. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
With software we provide, the toolchain used to compile is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, inconsistent things may happen.&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide lots of versions; you are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'&lt;br /&gt;
&lt;br /&gt;
The first step to run an MPI application is to load one of the compiler toolchains that include OpenMPI.  You normally will just need to load the default version as below.  If your code needs access to nVidia GPUs, you'll need the cuda version mentioned above.  Some codes are also picky about which versions of the underlying GNU or Intel compilers they need.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but those may depend on which compiler toolchain you choose.  Some are based on the Intel compilers, so you'd need to use optimizations for the underlying icc or ifort compilers they call, and some are GNU based, so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running Gromacs, which is an MPI-based code to simulate large numbers of atoms classically.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the Gromacs modules and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete Gromacs simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all cores requested which in this case is 4 cores on a single node.  You will often just need to guess at the memory size for your code, then check on the memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts and then manually load the modules needed to ensure each run is using the modules it needs.  If you don't do this, the job will simply use the modules you currently have loaded when you submit the job script, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use a significant amount of time.  This lets me track how long each section of the job script takes, which can prove very useful if the job runs longer than expected, for example when the script copies large data files around at the start.&lt;br /&gt;
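For instance, a stand-alone sketch of timing one stage (a short sleep stands in here for a real step such as copying data files):&lt;br /&gt;

```shell
# Prefixing a stage with 'time' reports how long that stage took.
# The sleep below is a placeholder for a real step such as staging data.
time sleep 1
```

The real, user, and sys times are printed when the stage finishes and show up in the job's output file.&lt;br /&gt;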
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
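As a stand-alone sketch, you can confirm what a child process will see for this variable (on the cluster the value is propagated to the MPI tasks by the &amp;lt;B&amp;gt;mpirun -x&amp;lt;/B&amp;gt; option shown above):&lt;br /&gt;

```shell
# Export the thread limit, then print it from a child process
# to confirm it will be inherited by the tasks the job launches.
export OMP_NUM_THREADS=1
python3 -c "import os; print(os.environ['OMP_NUM_THREADS'])"
```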
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; where the 7 # signs are the 7 digit job ID number.  If you need to cancel your job, use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing, you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a slurm-#.out file where '#' is the 7 digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java/'&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module; it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to set up a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command Python is loaded.  Loaded modules do not persist between sessions, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that their [https://docs.python.org/3/library/venv.html documentation] has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# or you could use 'virtualenv test'&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# if you don't use '--system-site-packages' then the virtual environment is completely isolated from our other provided packages, and it will have to build and install everything it needs within itself.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command your virtual environment is activated.  The activation does not persist between sessions, so you must rerun this command every time you log on.)&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
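As an optional sanity check after activating an environment and installing packages, you can confirm that the Python you are running really lives inside the virtual environment (with an environment active, the path printed should be under ~/virtualenvs):&lt;br /&gt;

```shell
# With a venv active, sys.prefix points inside the environment
# rather than at the system-wide Python installation.
python3 -c "import sys; print(sys.prefix)"
```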
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment test&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
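The &amp;lt;I&amp;gt;script.py&amp;lt;/I&amp;gt; path above stands for whatever program you want to run. As a stand-alone illustration, a minimal script that reports which interpreter ran it (useful for confirming the virtual environment was activated) could look like this:&lt;br /&gt;

```python
# Minimal illustrative script: prints which interpreter ran it,
# so you can confirm the virtual environment's python was used.
import platform
import sys

def main():
    print(f"Python {platform.python_version()} at {sys.executable}")

if __name__ == "__main__":
    main()
```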
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
Check the available versions and load one that uses the Python version you would like.&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow provided by pip is often completely broken on any system that is not running a recent version of Ubuntu. Beocat (and most HPC systems) does not use Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module; it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other python libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to set up a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the Python module, as loading TensorFlow will load the correct Python module behind the scenes. The only change you need to make is to use the '--system-site-packages' flag when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default. Normally, it is meant to be used in an interactive manner. Interactive codes can be limiting and/or problematic when used in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core for 24 hours and if available will drop&lt;br /&gt;
you into the bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample python based Spark code you can try out that came from the &lt;br /&gt;
exercises and homework from the PSC Spark workshop.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and load the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package &lt;br /&gt;
before you run the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
virtualenv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Spark module,&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark, &lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark and activate the python virtual environment&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt;.&lt;br /&gt;
You will need to do this manually as in the sample python code when you want&lt;br /&gt;
to submit jobs through the Slurm queue.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl tracks the stable releases of perl. Unfortunately, there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or to use a newer version, you can load it with the module command. To see what versions of perl we provide, you can use 'module avail Perl/'&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some perl modules fail to realize they shouldn't be installed globally. Usually, you'll notice this when they try to run something with 'sudo'. Unfortunately, we do not grant sudo access to anyone other than Beocat system administrators. Usually, this can be worked around by putting the following in your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file (at the bottom). Once this is in place, you should log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (D-H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now lets do some actual work. &lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
You can see what versions of Octave we provide with 'module avail Octave/'&lt;br /&gt;
&lt;br /&gt;
The 64-bit version of Octave can be loaded using the command above.  Octave can then be used&lt;br /&gt;
to work with MatLab codes on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler.  Octave is made to run MatLab code, but it does have limitations and does not support&lt;br /&gt;
everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, you will be expected to develop your MatLab code&lt;br /&gt;
with Octave or elsewhere on a laptop or departmental server.  Once you're ready to do large runs, you&lt;br /&gt;
move your code to Beocat, compile the MatLab code into an executable, and submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MatLab compiler, you need to load the MATLAB module to compile code and&lt;br /&gt;
load the mcr module to run the resulting MatLab executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks the mcc compiler out for a minimum of 30 minutes, so if another user compiles a code&lt;br /&gt;
you unfortunately may need to wait for up to 30 minutes to compile your own code.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files - compiled C and C++ code with matlab bindings - you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the Matlab path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and help document the steps to compile your Matlab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab, so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere or using Octave on Beocat; then you can compile the MatLab code into an executable and run it without&lt;br /&gt;
limits on Beocat.  &lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
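As a rough sketch of what a home-directory install of an autotools-style package often looks like (the paths and package name here are hypothetical, not a specific Beocat recipe):&lt;br /&gt;

```bash
# Pick an install prefix inside your home directory (hypothetical path)
PREFIX="$HOME/software/mytool"
mkdir -p "$PREFIX"

# A typical autotools-style build installs into that prefix instead of
# system-wide directories, so no 'sudo' is needed:
#   ./configure --prefix="$PREFIX"
#   make
#   make install

# Make the installed programs visible to your shell
export PATH="$PREFIX/bin:$PATH"
```

Adding the export line to your ~/.bashrc makes it persist across logins.&lt;br /&gt;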
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, once loaded, stay loaded for the duration of your session or until they are unloaded.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'', you have likely tried to load two pieces of software that have not been tested together. There may be serious issues with using either piece of software while you're in this state: missing libraries or non-functional applications. If you encounter issues, unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Often associated with Docker or Kubernetes, containers allow you to package an entire software runtime platform and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.2/index.html Apptainer] is a container runtime that is designed for HPC environments. It can convert docker containers to its own format, and can be used within a job on Beocat. It is a very broad topic and we've made the decision to point you to the upstream documentation, as it is much more likely that they'll have up to date and functional instructions to help you utilize containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=967</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=967"/>
		<updated>2024-04-05T20:37:31Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Python */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains and toolchain versions to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; fosscuda:    GNU Compiler Collection (GCC) based compiler toolchain based on FOSS with CUDA support.&lt;br /&gt;
; gmvapich2:    GNU Compiler Collection (GCC) based compiler toolchain, including MVAPICH2 for MPI support. '''DEPRECATED'''&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; goolfc:    GCC based compiler toolchain __with CUDA support__, and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. '''DEPRECATED'''&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Recently made free by Intel; we have less experience with Intel MPI than with OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with the 'module list' command:&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore and GCC. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
With software we provide, the toolchain used to compile is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, inconsistent things may happen.&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide many versions; you are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'&lt;br /&gt;
&lt;br /&gt;
The first step to running an MPI application is to load one of the compiler toolchains that include OpenMPI.  You will normally just need to load the default version as below.  If your code needs access to NVIDIA GPUs, you'll need a CUDA-enabled toolchain such as fosscuda.  Otherwise, note that some codes are picky about which versions of the underlying GNU or Intel compilers they need.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but those depend on which compiler toolchain you choose.  Some are based on the Intel compilers, so you'd use optimizations for the underlying icc or ifort compilers they call; others are GNU based, so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running Gromacs, which is an MPI-based code to simulate large numbers of atoms classically.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the Gromacs modules and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete Gromacs simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all requested cores, which in this case is 4 cores on a single node.  You will often just need to guess at the memory size for your code at first, then check the actual memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory request in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts and then manually load the modules needed to ensure each run is using the modules it needs.  If you don't do this, when you submit a job script it will simply use the modules you currently have loaded, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use significant amounts of time.  This way I can track the amount of time used in each section of the job script.  This can prove very useful if your job script copies large data files around at the start, for example, allowing you to see how much time was used for each stage of the job if it runs longer than expected.&lt;br /&gt;
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; where the 7 # signs are the 7 digit job ID number.  If you need to cancel your job, use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the 7 digit job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; file where the # signs are the 7 digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java/'&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to set up a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command Python is loaded.  Module loads do not persist between sessions, so after you log off and log on again you must rerun this command.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that their [https://docs.python.org/3/library/venv.html documentation] has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# or you could use 'virtualenv test'&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# if you don't use '--system-site-packages' then the virtual environment is completely isolated from our other provided packages and everything it needs it will have to build and install within itself.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command your virtual environment is active.  Activation does not persist between sessions, so after you log off and log on again you must rerun this command.)&lt;br /&gt;
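You can sanity-check that the environment is really active: with the virtualenv activated, 'python' resolves inside it and sys.prefix points at the environment directory.&lt;br /&gt;

```bash
# While the 'test' environment is activated, both of these should point
# into ~/virtualenvs/test:
which python
python -c "import sys; print(sys.prefix)"
# 'deactivate' returns you to the python you had before activation
```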
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment test&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
Check the available versions and load one that uses the Python version you would like.&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow provided by pip is often completely broken on any system that is not running a recent version of Ubuntu, and Beocat (like most HPC systems) does not run Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other python libraries that we do not have installed, you should use [https://virtualenv.pypa.io/en/stable/userguide/ virtualenv] to set up a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the Python module, as loading TensorFlow will load the correct version of Python behind the scenes. The only change you need to make is to use the '--system-site-packages' option when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
virtualenv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default. Normally, it is meant to be used in an interactive manner. Interactive codes can be limiting and/or problematic when used in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large-scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core for 24 hours and if available will drop&lt;br /&gt;
you into the bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample python based Spark code you can try out that came from the &lt;br /&gt;
exercises and homework from the PSC Spark workshop.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and load the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package &lt;br /&gt;
before you run the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
virtualenv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Spark module,&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark, &lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark, then activate the python virtual environment&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt;.&lt;br /&gt;
You will need to do this manually as in the sample python code when you want&lt;br /&gt;
to submit jobs through the Slurm queue.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl is tracking the stable releases of perl. Unfortunately there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or a newer version, you can load it with the module command. To see what versions of perl we provide, you can use 'module avail Perl/'&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some Perl modules fail to realize they shouldn't be installed globally. Usually you'll notice this when they try to run something with 'sudo'. Unfortunately, we do not grant sudo access to anyone other than Beocat system administrators. This can usually be worked around by putting the following at the bottom of your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file. Once it is in place, log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
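&lt;br /&gt;
After logging back in, you can check that the environment took effect (a quick sanity check; the exact paths depend on your username):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# should print /homes/&amp;lt;username&amp;gt;/perl5/lib/perl5&lt;br /&gt;
echo &amp;quot;$PERL5LIB&amp;quot;&lt;br /&gt;
# your perl5 directory should appear in perl's module search path&lt;br /&gt;
perl -e 'print join(&amp;quot;\n&amp;quot;, @INC), &amp;quot;\n&amp;quot;'&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;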
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now let's do some actual work.&lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
 module avail Octave/&lt;br /&gt;
&lt;br /&gt;
The command above lists the available 64-bit Octave modules; load one with 'module load Octave'. Octave can then be used&lt;br /&gt;
to work with MatLab codes on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler. Octave is made to run MatLab code, but it has limitations and does not support&lt;br /&gt;
everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, this means that you will be expected to develop your MatLab code&lt;br /&gt;
with Octave or elsewhere on a laptop or departmental server.  Once you're ready to do large runs, then you&lt;br /&gt;
move your code to Beocat, compile the MatLab code into an executable, and you can submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MatLab compiler, you need to load the MATLAB module to compile code and&lt;br /&gt;
load the mcr module to run the resulting MatLab executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE: The license manager checks out the mcc compiler license for a minimum of 30 minutes, so if another user has recently&lt;br /&gt;
compiled code, you may unfortunately need to wait up to 30 minutes to compile your own.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files (compiled C and C++ code with MATLAB bindings), you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the Matlab path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and help document the steps to compile your Matlab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab, so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere (or using Octave on Beocat), then compile it into an executable and run it without&lt;br /&gt;
limits on Beocat.&lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster system-wide. Instead, we ask that you install it into your home directory.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
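&lt;br /&gt;
As a sketch, many packages that use the common configure/make build system can be installed into your home directory by pointing the install prefix there (the package name below is hypothetical):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
tar xzf somepackage-1.0.tar.gz&lt;br /&gt;
cd somepackage-1.0&lt;br /&gt;
# install under ~/local instead of the system-wide /usr/local&lt;br /&gt;
./configure --prefix=&amp;quot;$HOME/local&amp;quot;&lt;br /&gt;
make&lt;br /&gt;
make install&lt;br /&gt;
# add ~/local/bin to your PATH (put this in ~/.bashrc to make it permanent)&lt;br /&gt;
export PATH=&amp;quot;$HOME/local/bin:$PATH&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;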
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, when loaded, stay loaded for the duration of your session or until they are unloaded.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'', you have likely tried to load two pieces of software that have not been tested together. There may be serious issues with using either piece of software while you're in this state: missing libraries, non-functional applications. If you encounter issues, unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
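&lt;br /&gt;
For example, to switch cleanly from one set of modules to another, reset first and then load (the module name here is illustrative; use 'module avail' to see what is installed):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# unload everything currently loaded&lt;br /&gt;
module reset&lt;br /&gt;
# load the new toolchain&lt;br /&gt;
module load foss&lt;br /&gt;
# confirm what is now loaded&lt;br /&gt;
module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;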
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Often associated with Docker or Kubernetes, containers allow you to package an entire software runtime platform and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.2/index.html Apptainer] is a container runtime that is designed for HPC environments. It can convert docker containers to its own format, and can be used within a job on Beocat. It is a very broad topic and we've made the decision to point you to the upstream documentation, as it is much more likely that they'll have up to date and functional instructions to help you utilize containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=966</id>
		<title>LinuxBasics</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=966"/>
		<updated>2024-04-04T16:09:25Z</updated>

		<summary type="html">&lt;p&gt;Mozes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Disclaimer:''' This is a ''very'' large topic, and much too broad to be covered on a single support page. There are many other sites (yes, entire sites) which cover the topic in more detail. We'll link to some of them below. This page is meant to cover just the essentials.&lt;br /&gt;
&lt;br /&gt;
== Logging in for the first time ==&lt;br /&gt;
To login to Beocat, you first need an &amp;quot;SSH Client&amp;quot;. [[wikipedia:Secure_Shell|SSH]] (short for &amp;quot;secure shell&amp;quot;) is a protocol that allows secure communication between two computers. We recommend the following.&lt;br /&gt;
* Windows&lt;br /&gt;
** [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY] is by far the most common SSH client, both for Beocat and in the world.&lt;br /&gt;
** [http://mobaxterm.mobatek.net/ MobaXterm] is a fairly new client with some nice features, such as being able to SCP/SFTP (see below), and running X (which isn't terribly useful on Beocat, but might be if you connect to other Linux hosts).&lt;br /&gt;
** [http://www.cygwin.com/ Cygwin] is for those that would rather be running Linux but are stuck on Windows. It's purely a text interface.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** OS-X has a built-in SSH-capable application called &amp;quot;Terminal&amp;quot;. It's not great, but it will work for most Beocat users.&lt;br /&gt;
** [http://www.iterm2.com/#/section/home iTerm2] is the terminal application we prefer.&lt;br /&gt;
* Others&lt;br /&gt;
** There are [[wikipedia:Comparison_of_SSH_clients|many SSH clients]] for many different platforms available. While we don't have experience with many of these, any should be sufficient for access to Beocat.&lt;br /&gt;
&lt;br /&gt;
You'll need to connect your client (via the SSH protocol, if your client allows multiple protocols) to headnode.beocat.ksu.edu.&lt;br /&gt;
&lt;br /&gt;
For command-line tools, the command to connect is&lt;br /&gt;
 ssh ''username''@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
Your username is your [http://eid.ksu.edu K-State eID] name and the password is your eID password.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' When you type your password, nothing shows up on the screen, not even asterisks.&lt;br /&gt;
&lt;br /&gt;
You'll know you are successfully logged in when you see a prompt that says&lt;br /&gt;
 [''username''@''machinename'' ~]$&lt;br /&gt;
where ''machinename'' is the name of the machine you've logged into (currently either 'clymene' or 'helios') and ''username'' is your eID username.&lt;br /&gt;
&lt;br /&gt;
== Transferring files (SCP or SFTP) ==&lt;br /&gt;
Usually, one of the first things people want to do is to transfer files into or out of Beocat. To do so, you need to use [[wikipedia:Secure_copy|SCP]] (secure copy) or [[wikipedia:SSH_File_Transfer_Protocol|SFTP]] (SSH FTP or Secure FTP). Again, there are multiple programs that do this.&lt;br /&gt;
* Windows&lt;br /&gt;
** Putty (see above) has PSCP and PSFTP programs (both are included if you run the installer). It is a command-line interface (CLI) rather than a graphical user interface (GUI).&lt;br /&gt;
** MobaXterm (see above) has a built-in GUI SFTP client that automatically changes the directories as you change them in your SSH session.&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] (client) has an easy-to-use GUI. Be sure to use 'SFTP' mode rather than 'FTP' mode.&lt;br /&gt;
** [http://winscp.net/eng/index.php WinSCP] is another easy-to-use GUI.&lt;br /&gt;
** Cygwin (see above) has CLI scp and sftp programs.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] is also available for OS-X.&lt;br /&gt;
** Within terminal or iTerm, you can use the 'scp' or 'sftp' programs.&lt;br /&gt;
* Linux&lt;br /&gt;
** FileZilla also has a GUI linux version, in addition to the CLI tools.&lt;br /&gt;
&lt;br /&gt;
=== Using a Command-Line Interface (CLI) ===&lt;br /&gt;
You can safely ignore this section if you're using a graphical interface (GUI). We highly recommend using a GUI when first learning how to use Beocat.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;First test case&amp;lt;/u&amp;gt;: transfer a file called myfile.txt in your current folder to your home directory on Beocat. For these examples, I use bold text to show what you type and plain text to show Beocat's response&lt;br /&gt;
&lt;br /&gt;
Using SCP:&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Note the colon at the end of the 'scp' line.&lt;br /&gt;
&lt;br /&gt;
Using SFTP&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/kylehutson/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
SFTP is interactive, so this is a two-step process. First, you connect to Beocat, then you transfer the file. As long as the system gives the &amp;lt;code&amp;gt;sftp&amp;gt; &amp;lt;/code&amp;gt; prompt, you are in the sftp program, and you will remain there until you type 'exit'.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Second test case:&amp;lt;/u&amp;gt; transfer a file called myfile.txt in your current folder to a directory named 'mydirectory' under your home directory on Beocat.&lt;br /&gt;
&lt;br /&gt;
Here we run into one of the limitations of scp: there is no easy way to create 'mydirectory' if it doesn't already exist. In that case, you must log in via ssh (as seen above) and create the directory using the 'mkdir' command (see Basic Linux Commands below).&lt;br /&gt;
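&lt;br /&gt;
If your client provides an ssh command, you can also create the remote directory in one step before copying (substitute your own username and directory name):&lt;br /&gt;
 '''ssh ''username''@headnode.beocat.ksu.edu mkdir -p mydirectory'''&lt;br /&gt;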
&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 &lt;br /&gt;
An alternative version: if the colon is immediately followed by a slash, the path is interpreted from the filesystem root rather than relative to your home directory. So, given that your home directory on Beocat is /homes/''username'', we could instead type&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:/homes/''username''/mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''mkdir mydirectory'''&lt;br /&gt;
 [Note, if this directory already exists, you will get the response &amp;quot;Couldn't create directory: Failure&amp;quot;]&lt;br /&gt;
 sftp&amp;gt; '''cd mydirectory'''&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/''username''/mydirectory/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''quit'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Third test case:&amp;lt;/u&amp;gt; copy myfile.txt from your home directory on Beocat to your current folder.&lt;br /&gt;
&lt;br /&gt;
Using scp:&lt;br /&gt;
 '''scp ''username''@headnode.beocat.ksu.edu:myfile.txt .'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''get myfile.txt'''&lt;br /&gt;
 Fetching /homes/''username''/myfile.txt to myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
== Basic Linux Commands ==&lt;br /&gt;
Again, this guide is very limited, covering mostly directory navigation and basic file commands. [http://www.ee.surrey.ac.uk/Teaching/Unix/ Here] is a pretty decent tutorial if you want to dig deeper. If you want more, entire books have been written on the subject.&lt;br /&gt;
&lt;br /&gt;
=== The Lingo ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!''Term''&lt;br /&gt;
!''Definition''&lt;br /&gt;
|-&lt;br /&gt;
|Directory&lt;br /&gt;
|A &amp;quot;Folder&amp;quot; in Windows or OS-X terms. A location where files or other directories are stored. The current directory is sometimes represented as `.` and the parent directory can be referenced as `..`&lt;br /&gt;
|-&lt;br /&gt;
|Shell&lt;br /&gt;
|The interface or environment under which you can run commands. There is a section below on shells&lt;br /&gt;
|-&lt;br /&gt;
|SSH&lt;br /&gt;
|Secure Shell. A protocol that encrypts data and can give access to another system, usually by a username and password&lt;br /&gt;
|-&lt;br /&gt;
|SCP&lt;br /&gt;
|Secure Copy. Copying to or from a remote system using part of SSH&lt;br /&gt;
|-&lt;br /&gt;
|path&lt;br /&gt;
|The list of directories which are searched when you type the name of a program. There is a section below on this&lt;br /&gt;
|-&lt;br /&gt;
|ownership&lt;br /&gt;
|Every file and directory has a user and a group attached to it, called its owners. These affect permissions.&lt;br /&gt;
|-&lt;br /&gt;
|permissions&lt;br /&gt;
|The ability to read, write, and/or execute a file. Permissions are based on ownership&lt;br /&gt;
|-&lt;br /&gt;
|switches&lt;br /&gt;
|Modifiers to a command-line program, usually in the form of -(letter) or --(word). Several examples are given below, such as the '-a' on the 'ls' command&lt;br /&gt;
|-&lt;br /&gt;
|pipes and redirects&lt;br /&gt;
|Changes the input (often called 'stdin') and/or output (often called 'stdout') of a program, connecting it to another program or a file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Linux Command Line Cheat Sheet ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+File System Navigation&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|pwd&lt;br /&gt;
|&amp;quot;Print working directory&amp;quot;, Where am I now?&lt;br /&gt;
|&amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;/homes/mozes&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls&lt;br /&gt;
|Lists files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -lh&lt;br /&gt;
|Lists files and folders with perms size and ownership&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -lh ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;-rw-r--r--  1 mozes    mozes_users   1    Jul 13  2011 NewFile&lt;br /&gt;
drwxr-xr-x  9 mozes    mozes_users   9.0K Apr 12  2010 NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -a&lt;br /&gt;
|Lists all files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -a ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;. .. .bashrc .bash_profile .tcshrc NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cd&lt;br /&gt;
|Changes directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ..&lt;br /&gt;
|Changes to parent directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ..&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd -&lt;br /&gt;
|Changes to the previous directory you were in&lt;br /&gt;
|&amp;lt;code&amp;gt;cd -&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ~&lt;br /&gt;
|Changes to your home directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ~&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Working with files&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|file&lt;br /&gt;
|Identifies the type of object a file is&lt;br /&gt;
|&amp;lt;code&amp;gt;file NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile: a /usr/bin/python script, ASCII text executable&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cat&lt;br /&gt;
|Prints the contents of one or more files&lt;br /&gt;
|&amp;lt;code&amp;gt;cat NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;This is line one&lt;br /&gt;
This is line two&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp&lt;br /&gt;
|copy a file&lt;br /&gt;
|&amp;lt;code&amp;gt;cp OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp -i&lt;br /&gt;
|copy a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp -r&lt;br /&gt;
|copy a directory, including contents&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -r OldFolder NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv&lt;br /&gt;
|move, or rename, a file&lt;br /&gt;
|&amp;lt;code&amp;gt;mv OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv -i&lt;br /&gt;
|move, or rename, a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;mv -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm&lt;br /&gt;
|remove a file&lt;br /&gt;
|&amp;lt;code&amp;gt;rm NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rm -i&lt;br /&gt;
|remove a file, ask to be sure (useful with -r)&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -i NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;remove NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm -r&lt;br /&gt;
|remove a directory and its contents&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -r NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mkdir&lt;br /&gt;
|creates a directory&lt;br /&gt;
|&amp;lt;code&amp;gt;mkdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rmdir&lt;br /&gt;
|removes an empty directory&lt;br /&gt;
|&amp;lt;code&amp;gt;rmdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|touch&lt;br /&gt;
|creates an empty file&lt;br /&gt;
|&amp;lt;code&amp;gt;touch TempFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Finding files and directories with [http://linux.die.net/man/1/find find]&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt;&lt;br /&gt;
| finds all files and folders within &amp;lt;directory&amp;gt;&lt;br /&gt;
| find ~/&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '&amp;lt;filename&amp;gt;'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that match &amp;lt;filename&amp;gt;&lt;br /&gt;
| find ~/ -iname 'hello.qsub'&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '*&amp;lt;partialmatch&amp;gt;*'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that partially match &amp;lt;partialmatch&amp;gt;&lt;br /&gt;
| find ~/ -iname '*.qsub*'&lt;br /&gt;
|}&lt;br /&gt;
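As a small extension of the table above (the directory and file names here are only examples), &amp;lt;code&amp;gt;find&amp;lt;/code&amp;gt; can be restricted to plain files with -type f, which skips the directories themselves:&lt;br /&gt;

```shell
# Sketch: match only plain files (-type f), not directories.
# /tmp/find-demo and the file names are hypothetical examples.
mkdir -p /tmp/find-demo/sub
touch /tmp/find-demo/hello.qsub /tmp/find-demo/sub/world.qsub /tmp/find-demo/notes.txt
find /tmp/find-demo -type f -iname '*.qsub'
```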
&lt;br /&gt;
Other useful commands include &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt; followed by the name of any command above will show the manual page for that command, which lists many more useful options. &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt; gives you an overview of the processes currently running on the host you are connected to. &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt; lets you page through a file's contents using &amp;lt;PgUp&amp;gt; and &amp;lt;PgDn&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Editing Text Files ===&lt;br /&gt;
If you're new to Linux, the editor you will probably want to use is 'nano'. It works much the same as 'Notepad' on Windows or 'TextEdit' on macOS. Note that you cannot use your mouse to change position within the document as you can on your local computer. You must use the arrow keys instead.&lt;br /&gt;
&lt;br /&gt;
So, if I wanted to edit my .bashrc (as shown below), and I was already in my home directory (see above), I would type&lt;br /&gt;
 nano .bashrc&lt;br /&gt;
&lt;br /&gt;
While in nano, there is a list of actions you can take at the bottom of the screen. &amp;lt;Ctrl&amp;gt; is represented by a caret (^), so to exit (labeled as ^X at the bottom of the screen), I would type &amp;lt;ctrl&amp;gt;-x. This prompts you to choose whether to save and exit (Y), discard your changes and exit (N), or cancel and go back to editing (&amp;lt;ctrl&amp;gt;-c).&lt;br /&gt;
&lt;br /&gt;
If you do a significant amount of text editing in Linux, you'll probably want to switch to a more powerful editor, such as vim. The usage of vim is beyond the scope of this document. It is not at all intuitive to the beginning user, but with a little practice it becomes a much faster way of editing text files. If you're interested in using vim, [http://www.openvim.com/tutorial.html there is a nice tutorial here].&lt;br /&gt;
&lt;br /&gt;
=== Shells ===&lt;br /&gt;
==== What is a Shell? ====&lt;br /&gt;
In this case, I don't believe I can do a better job explaining shells than [[wikipedia:Shell_(computing)|this]].&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
Elsewhere at Kansas State University, the default Shell is set to tcsh. tcsh stands for &amp;quot;TENEX C SHell.&amp;quot; It is considered a replacement for csh and uses many of the same features. If you have experience with either csh or tcsh you'll probably feel right at home. This was the default shell until July of 2013. If you had an account before then, it is probably still tcsh.&lt;br /&gt;
&lt;br /&gt;
But what if you don't want or like tcsh? Well, we have other shells available on Beocat as well.&lt;br /&gt;
==== bash ====&lt;br /&gt;
[http://www.gnu.org/software/bash/ Bash] seems to be the de facto standard shell in most Linux installs today. Bash is common and probably what most of you are used to. As of July 2013, bash is our default shell. All new users will be set to bash initially. [https://software-carpentry.org/ Software Carpentry] teaches classes on several subjects specifically targeting researchers, including the bash shell. Their documentation is all freely available. [http://swcarpentry.github.io/shell-novice/ Here is a link to their excellent tutorial on using BASH.] Most of our documentation assumes you are using BASH.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;bash configuration files:&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This section gets into some minutiae with the way our job scheduler interacts with bash. If you're trying to solve a problem, read on, otherwise you can probably skip this section.&lt;br /&gt;
&lt;br /&gt;
Bash has three user-configurable configuration files: &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;~/.bash_logout&amp;lt;/code&amp;gt;. We'll look at the two more relevant ones: &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Bash classifies each session in one of three ways: '''login''', '''interactive''', or '''none'''.&lt;br /&gt;
&lt;br /&gt;
Normally, shells that are '''login''' read &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, while shells that are '''interactive''' but not login read &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. '''none''' shells read neither.&lt;br /&gt;
&lt;br /&gt;
sbatch jobs are '''login''', srun jobs are '''login+interactive''', and logging into Beocat in a way that you can enter commands is '''login+interactive'''. There are very few cases where you will get '''none'''. For any session that isn't '''interactive''', your sourced files must not output anything to the screen, or else they can break scp or sftp file transfers.&lt;br /&gt;
&lt;br /&gt;
If your configuration statements are ''quiet'' and you want them in all shells, you can put them in your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. If they are not ''quiet'', i.e. they output ''anything'' to the screen, you must put them in your &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
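A common way to honor this split is to have &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt; source &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;, so the quiet settings live in one place and apply everywhere. A minimal sketch (the echo line is just a placeholder for anything noisy):&lt;br /&gt;

```shell
# Sketch of a ~/.bash_profile: pull in the quiet settings from ~/.bashrc,
# then do anything that writes to the screen afterwards.
if [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"
fi
# Noisy statements belong here, never in ~/.bashrc:
echo "Welcome back, $USER"
```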
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
[http://zsh.sourceforge.net/ zsh] is an alternative to bash and tcsh. It tends to support more complex features than either of the other two while using a syntax remarkably similar to bash. Unless specifically noted, when we specify '''Change your shell to bash''', &amp;lt;tt&amp;gt;zsh&amp;lt;/tt&amp;gt; should work as well.&lt;br /&gt;
&lt;br /&gt;
==== Changing Shells ====&lt;br /&gt;
Previously, we gave you the option of using a &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; to modify your shell. This is no longer supported; if you have issues with your shell/paths/environment variables we will ask you to delete your &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; file and change your shell via the method below.&lt;br /&gt;
&lt;br /&gt;
You can change your shell via &amp;lt;code&amp;gt;chsh&amp;lt;/code&amp;gt; on either of the headnodes (clymene/helios). This does not need to be re-done if you've already changed it to your preferred shell in the past.&lt;br /&gt;
&lt;br /&gt;
Use whichever of the following three lines is appropriate:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
/usr/local/bin/chsh -s bash &amp;amp;&amp;amp; bash -l&lt;br /&gt;
/usr/local/bin/chsh -s tcsh &amp;amp;&amp;amp; tcsh -l&lt;br /&gt;
/usr/local/bin/chsh -s zsh &amp;amp;&amp;amp; zsh -l&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Changing your PATH ===&lt;br /&gt;
Typically, you don't have to change your PATH, but it is useful to know what your PATH is and what it does. The PATH is the list of directories which are searched when you type the name of a program. Note that by default the current directory is NOT included in the PATH, so if you wanted to run a program called MyProgram in the current directory, you could NOT simply type 'MyProgram'; you would instead type &amp;lt;code&amp;gt;'./MyProgram'&amp;lt;/code&amp;gt; (where the '.' represents the current directory).&lt;br /&gt;
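To see what is currently on your PATH, one quick sketch that works in any of the shells below:&lt;br /&gt;

```shell
# Print each PATH entry on its own line, in the order they are searched.
echo "$PATH" | tr ':' '\n'
```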
&lt;br /&gt;
To find your PATH, we need to identify which shell you are using. If you do not know, run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
ps | awk '/sh/ {print $4}'&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
You'll need to edit a file in your home directory called .tcshrc, replacing /usr/local/bin with the directory that you want added to your PATH using a text editor as shown above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
setenv PATH /usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== bash ====&lt;br /&gt;
You'll need to edit a file in your home directory called .bashrc, replacing /usr/local/bin with the directory that you want added to your PATH using a text editor as shown above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=/usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
You'll need to edit a file in your home directory called .zshrc, replacing /usr/local/bin with the directory that you want added to your PATH using a text editor as shown above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=&amp;quot;/usr/local/bin:$PATH&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Ownership and Permissions ===&lt;br /&gt;
Every file and directory has a user and group associated with it. You can view ownership information by using the '-l' switch on ls. By default on Beocat, files you create have a user ownership of your username (i.e., your eID) and a group ownership of your username_users. So, if I were logged in as 'myusername' and I had a single file in my home directory called MyProgram, the result of typing 'ls -l' would be something like this:&lt;br /&gt;
 total 0&lt;br /&gt;
 -rwxr-x--- 1 myusername myusername_users 79 May 31  2011 MyProgram&lt;br /&gt;
This tells us several things.&lt;br /&gt;
* The first column ('-rwxr-x---') is permissions (covered below)&lt;br /&gt;
* The second column ('1') is the number of links to this file. You can safely ignore this (unless you're both masochistic and interested in filesystem details)&lt;br /&gt;
* The third column ('myusername') shows the user ownership&lt;br /&gt;
* The fourth column ('myusername_users') shows the group ownership&lt;br /&gt;
* The fifth column ('79') gives the size of the file in bytes&lt;br /&gt;
* The next columns ('May 31  2011'), as you have probably guessed, give the date the file was last changed&lt;br /&gt;
* The final column ('MyProgram') is the name of the file&lt;br /&gt;
&lt;br /&gt;
So why is this interesting to us? Because whenever things ''don't'' work, it's usually because of file ownership or permissions. Looking at these often gives us some useful diagnostic information.&lt;br /&gt;
&lt;br /&gt;
The permissions field shows us who has permissions to do what with this file. It is always 10 characters. The first character (-) is usually either a '-' for a regular file or a 'd' for a directory. The next 9 characters are broken into three groups of three, with each group showing read (r), write (w), and execute (x) permissions for the owner, group, and world, in that order.&lt;br /&gt;
* The first group (rwx) shows permissions for the owner (myusername). The owner here has read, write, and execute permissions&lt;br /&gt;
* The next group (r-x) shows permissions for the group (myusername_users). The group here has read and execute permissions, but cannot write.&lt;br /&gt;
* The last group (---) shows permissions for the rest of the world. The world has no permissions to read, write, or execute.&lt;br /&gt;
&lt;br /&gt;
When you create a shell script with a text editor, and sometimes when you copy programs to Beocat via SCP, the execute flag is not set. The permissions string may look more like (-rw-r--r--). To change this, you need to give yourself permission to execute this program. This is done with the 'chmod' (change mode) command. 'chmod' can have a long and confusing syntax, but since by far the most common problem is to give yourself execute permissions, here is the command to change that:&lt;br /&gt;
 chmod u+x MyProgram&lt;br /&gt;
This changes the permissions so that the user ('u', i.e., the owner) adds ('+') execute permission ('x').&lt;br /&gt;
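Put together, a short sketch (MyProgram here is a hypothetical shell script, created just for illustration):&lt;br /&gt;

```shell
# Create a tiny script, then grant the owner execute permission with chmod.
printf '#!/bin/sh\necho hello\n' > MyProgram
chmod u+x MyProgram    # user ('u') adds ('+') execute ('x')
./MyProgram            # now runs; prints "hello"
```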
&lt;br /&gt;
For more complex ownership or permissions changes, please feel free to contact the Beocat staff.&lt;br /&gt;
&lt;br /&gt;
=== Access Control Lists ===&lt;br /&gt;
Access Control Lists build on our knowledge and use of basic Linux permissions, so we'll cover those again:&lt;br /&gt;
&lt;br /&gt;
Linux permissions are typically broken down to ('''r''')ead, ('''w''')rite, and e('''x''')ecute split across 3 classes of accessors.&lt;br /&gt;
&lt;br /&gt;
'''Files'''&lt;br /&gt;
; read&lt;br /&gt;
: Read the file; pretty straightforward&lt;br /&gt;
; write&lt;br /&gt;
: Write to the file, including overwrite, truncation, etc.&lt;br /&gt;
; execute&lt;br /&gt;
: Execute the file; this permission allows you to run it.&lt;br /&gt;
&lt;br /&gt;
'''Folders'''&lt;br /&gt;
; read&lt;br /&gt;
: List the directory, (ls)&lt;br /&gt;
; write&lt;br /&gt;
: Create new files and folders in the directory.&lt;br /&gt;
; execute&lt;br /&gt;
: Pass through the directory (cd into and through).&lt;br /&gt;
&lt;br /&gt;
Those accessors are ('''u''')ser, ('''g''')roup, and ('''o''')ther.&lt;br /&gt;
; user&lt;br /&gt;
: The user would typically be the user account that created the file or folder&lt;br /&gt;
; group&lt;br /&gt;
: The group would be that account's primary group by default; it can be changed by the user to any group that they are a member of&lt;br /&gt;
; other&lt;br /&gt;
: Other is special: it matches anything that doesn't meet either of the other two criteria. We typically refer to these as world permissions, as they match ''everyone'' else.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, it is that &amp;quot;Other&amp;quot; permission that is a frequent problem. You may want to share some data with a colleague, but, from a security standpoint, you also may need to make sure that only that colleague has access to the data. If you aren't in the same group as the colleague, then, under standard Linux permissions, you have no other option except making the file &amp;quot;world&amp;quot; accessible.&lt;br /&gt;
&lt;br /&gt;
This is where &amp;lt;abbr title=&amp;quot;Access Control Lists&amp;quot;&amp;gt;ACLs&amp;lt;/abbr&amp;gt; come into play. ACLs are like the standard Linux permissions, except you can apply many of them, and you can allow individual users and groups to access alongside your own.&lt;br /&gt;
&lt;br /&gt;
ACLs can also do things that standard Linux permissions can't, like setting up &amp;quot;default&amp;quot; permissions for newly created files/folders within a directory.&lt;br /&gt;
&lt;br /&gt;
One big thing to be aware of for any permissions scheme is that permissions are checked at every level in a directory hierarchy.&lt;br /&gt;
&lt;br /&gt;
# /&lt;br /&gt;
# /homes&lt;br /&gt;
# /homes/$USER&lt;br /&gt;
# /homes/$USER/$SHARE&lt;br /&gt;
&lt;br /&gt;
If at any point the accessing user is denied permission, the traversal and access attempt will stop.&lt;br /&gt;
&lt;br /&gt;
==== Example 1 ====&lt;br /&gt;
Let's say I have a file in a directory that I want the user billy to be able to read. This file is &amp;lt;tt&amp;gt;/homes/mozes/example/input.file&amp;lt;/tt&amp;gt;. We'll look at the current permissions of the directory tree like so:&lt;br /&gt;
&lt;br /&gt;
We'll assume everyone has requisite permissions for &amp;lt;tt&amp;gt;/&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;/homes&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
First we'll check my home directory&lt;br /&gt;
 $ getfacl -e /homes/mozes&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x                      #effective:r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 mask::r-x&lt;br /&gt;
 other::---&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
 default:other::---&lt;br /&gt;
&lt;br /&gt;
If we make it past that permissions check, we'd go one level deeper.&lt;br /&gt;
 $ getfacl -e /homes/mozes/example&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 other::r-x&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
&lt;br /&gt;
Finally we'd check if we had permission to access the file itself:&lt;br /&gt;
 $ getfacl -e /homes/mozes/example/input.file&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rw-&lt;br /&gt;
 group::r--&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 other::r--&lt;br /&gt;
&lt;br /&gt;
There is quite a lot of information contained in the above output, so let's look at it and attempt to understand the contents.&lt;br /&gt;
&lt;br /&gt;
First, in each section, we see the POSIX owner and group as comments prefixed by '#' characters. These are what the respective user:: and group:: lines refer to when viewing the permissions.&lt;br /&gt;
&lt;br /&gt;
Second, we have lines related to the permissions of each accessor. These do what they say: show the permissions that an accessor would be granted. Note there is a catch here: the most specific permission wins. This can come into play when granting a certain group access and then granting a specific member of that group a differing level of access.&lt;br /&gt;
&lt;br /&gt;
Third, on many of them there are lines prefixed with default: and then a permissions set. Default permissions are only set on directories, and they define the starting set of ACLs applied when new files or folders are created within that directory.&lt;br /&gt;
&lt;br /&gt;
Finally, there is a mask; we won't cover it here because there are very few cases where you would need to use one.&lt;br /&gt;
&lt;br /&gt;
Back to the task at hand: we want billy to be able to read &amp;lt;tt&amp;gt;/homes/mozes/example/input.file&amp;lt;/tt&amp;gt;. Checking &amp;lt;tt&amp;gt;/homes/mozes&amp;lt;/tt&amp;gt;, we see that 'other' has no permissions, and billy has not been granted any special access.&lt;br /&gt;
&lt;br /&gt;
So we grant billy access &amp;quot;through&amp;quot; &amp;lt;tt&amp;gt;/homes/mozes&amp;lt;/tt&amp;gt;; the smallest set of permissions that works would be this:&lt;br /&gt;
 $ setfacl -m u:billy:x /homes/mozes&lt;br /&gt;
&lt;br /&gt;
Note, since I didn't give billy read access to my home directory, they wouldn't be able to &amp;lt;tt&amp;gt;ls /homes/mozes&amp;lt;/tt&amp;gt;; they can simply cd into it and through it.&lt;br /&gt;
&lt;br /&gt;
Then we check the rest of the permissions, &amp;lt;tt&amp;gt;/homes/mozes/example&amp;lt;/tt&amp;gt; has an 'other' permission granting (r)ead and e(x)ecute, so that shouldn't be an issue. &amp;lt;tt&amp;gt;/homes/mozes/example/input.file&amp;lt;/tt&amp;gt; allows 'other' to read it, so our job is done. Billy has access to read my file.&lt;br /&gt;
&lt;br /&gt;
If we decide later that billy needs to write to my file, we can grant them specific read/write permissions to just that file with:&lt;br /&gt;
 $ setfacl -m u:billy:rw /homes/mozes/example/input.file&lt;br /&gt;
&lt;br /&gt;
==== Example 2 ====&lt;br /&gt;
That's all well and good, but let's say we want all of my grad students to have read/write access to my example directory.&lt;br /&gt;
&lt;br /&gt;
Looking at the acls that have been set so far:&lt;br /&gt;
 $ getfacl -e /homes/mozes&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 user:billy:--x&lt;br /&gt;
 group::r-x                      #effective:r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 mask::r-x&lt;br /&gt;
 other::---&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
 default:other::---&lt;br /&gt;
&lt;br /&gt;
 $ getfacl -e /homes/mozes/example&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 other::r-x&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
&lt;br /&gt;
We now want to grant my group of grad students the correct permissions to &amp;lt;tt&amp;gt;/homes/mozes/example&amp;lt;/tt&amp;gt;&lt;br /&gt;
 $ setfacl -R -m g:my_grad_students:rw -m d:g:my_grad_students:rw -m d:u:mozes:rw /homes/mozes/example&lt;br /&gt;
&lt;br /&gt;
There are a few things to note there:&lt;br /&gt;
* We're setting multiple acls at once (note the multiple -m arguments)&lt;br /&gt;
* We're setting those permissions recursively (on all files/folders nested anywhere in that directory hierarchy). The &amp;lt;tt&amp;gt;-R&amp;lt;/tt&amp;gt; option does this.&lt;br /&gt;
* We're setting some default permissions. Default permissions are prefixed with d:. Here we're saying that the (g)roup my_grad_students should be granted read/write permissions, and we also set a default permission for ourselves. d:u:mozes:rw grants me read/write access to those files as if I were the owner. This is nice in the event that you're not a member of the my_grad_students group, since it makes sure that you still retain a reasonable baseline of access.&lt;br /&gt;
&lt;br /&gt;
That all looks good, right? Except my grad students are complaining that they can't access &amp;lt;tt&amp;gt;/homes/mozes/example&amp;lt;/tt&amp;gt;. What did we forget?&lt;br /&gt;
&lt;br /&gt;
Permissions are checked at every level of the directory hierarchy, and we forgot to grant my_grad_students access through my home directory.&lt;br /&gt;
 $ setfacl -m g:my_grad_students:x /homes/mozes&lt;br /&gt;
&lt;br /&gt;
=== Manual (man) pages ===&lt;br /&gt;
Most commands have a complex set of switches that will modify the amount or type of information they display. To find out what switches are available, or how a program expects data, you can use the manual pages by typing &amp;quot;`man` ''command''&amp;quot;. Using one of the most common Linux commands, take a look at the output of 'man ls'. It shows that it has over 50 switches available, ranging from which files to include, to how to display file sizes, to sort order and more. (I'm not pasting it here, because it's over 200 lines long!) To navigate a 'manpage', use the up-arrow and down-arrow keys. Press 'q' to quit.&lt;br /&gt;
&lt;br /&gt;
=== Pipes and Redirects ===&lt;br /&gt;
Typically a Linux program takes data from the keyboard and outputs data to the screen. In Unix and Linux terminology, the keyboard is the default 'stdin' (pronounced &amp;quot;standard in&amp;quot;) and the screen is the default 'stdout' (pronounced &amp;quot;standard out&amp;quot;). Many times, we want to take data from somewhere else (like a file, or the output of another program) and send it to yet another location. These redirectors are:&lt;br /&gt;
{|&lt;br /&gt;
|cmd &amp;gt; filename&lt;br /&gt;
|Redirect output from cmd to filename ||&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;gt;&amp;gt; filename&lt;br /&gt;
|Redirect output from cmd and append to filename&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;lt; filename&lt;br /&gt;
|Redirect input to cmd from filename&lt;br /&gt;
|-&lt;br /&gt;
| cmd1 &amp;amp;#124; cmd2&lt;br /&gt;
| Use the output from cmd1 as the input to cmd2&lt;br /&gt;
|}&lt;br /&gt;
Here is a quick example. Let's say I have thousands of files in a directory, and I want a list of those that end in '.sh'.&lt;br /&gt;
'ls' by itself scrolls so far I can't see even a fraction of them. So, I redirect the output to a file&lt;br /&gt;
 ls &amp;gt; ~/filelist.txt&lt;br /&gt;
That gives me all the files in the current folder and saves them in my home directory in 'filelist.txt'.&lt;br /&gt;
A quick look through the file in my favorite editor tells me this is still going to take too long, so I need another step. The 'grep' program is a commonly-used program to perform pattern matching. The syntax of 'grep' is beyond the scope of this document, but take my word for it that&lt;br /&gt;
 grep '\.sh$'&lt;br /&gt;
will return all lines that end in .sh.&lt;br /&gt;
&lt;br /&gt;
We can now redirect grep's input from the file we just created:&lt;br /&gt;
 grep '\.sh$' &amp;lt; ~/filelist.txt&lt;br /&gt;
Great! We now have our list. However, we wanted to save this as filelist.txt, and instead we have another list that we have to copy-and-paste. Instead of redirecting to a file, we'll use the vertical bar '|' (which we often term a &amp;quot;pipe&amp;quot;) to send the output of one command to another.&lt;br /&gt;
 ls | grep '\.sh$' &amp;gt; ~/filelist.txt&lt;br /&gt;
This time the output of 'ls' is ''not'' redirected to a file, but is redirected to the next command (grep).  The output of grep (which is all our .sh files) instead of being sent to the screen is redirected to the file ~/filelist.txt.&lt;br /&gt;
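As a final variation on the example above, pipes can be chained to skip the intermediate file entirely; here we simply count the matches instead of saving them (the directory and file names are hypothetical):&lt;br /&gt;

```shell
# Count the .sh files in the current directory by piping ls into grep -c.
mkdir -p /tmp/pipe-demo && cd /tmp/pipe-demo
touch one.sh two.sh notes.txt
ls | grep -c '\.sh$'   # prints 2
```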
&lt;br /&gt;
This example is a very simple demonstration of how pipes and redirects work. Many more examples with complex structures can be found at http://www.ibm.com/developerworks/linux/library/l-lpic1-v3-103-4/index.html&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=965</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=965"/>
		<updated>2024-03-12T22:57:55Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* How do I get help? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and their collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of CentOS Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|OpenStack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different than running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page where we can adjust the fine-tuning. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
==== Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] ====&lt;br /&gt;
==== Read about  [[Installed software]] and languages ====&lt;br /&gt;
==== Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] ====&lt;br /&gt;
==== Run Interactive Jobs! [[OpenOnDemand]] ====&lt;br /&gt;
==== [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] ====&lt;br /&gt;
&lt;br /&gt;
==== Big Data course on Beocat! [[BigDataOnBeocat]] ====&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which will use the information in that job script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details about your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [https://libera.chat/guides/connect Libera chat servers] in the channel #beocat. This is useful ''especially'' if you have a quick question, as you'd be surprised the times when at least one of us is around. If you do have a question be sure to mention '''m0zes''' in your message, and it should grab our attention. [https://web.libera.chat/#beocat Available from a web browser here.]&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session, listed on the calendar below. Alternatively, we can often schedule a time to meet with you individually; just send us an e-mail with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you email us at beocat@cs.ksu.edu please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing to Beocat earns you priority access. In general, users contribute nodes to Beocat (aka the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority on the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will automatically halt and reschedule those jobs immediately to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources can allow users to run jobs if they are not able to get enough&lt;br /&gt;
access on Beocat, but they are especially useful for when we don't have the needed&lt;br /&gt;
resources on Beocat like access to 4 TB nodes on Bridges2, or more 64-bit&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[ACCESS|here]] to see what resources&lt;br /&gt;
are available to us and for directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems: it runs OSG jobs from elsewhere when our users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=964</id>
		<title>OS Change</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=964"/>
		<updated>2024-03-04T21:12:52Z</updated>

		<summary type="html">&lt;p&gt;Mozes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OS Change ==&lt;br /&gt;
Soon we'll be switching our Operating System from CentOS 7 to Rocky Linux 9. If you are utilizing software that we provide via the &amp;lt;tt&amp;gt;module&amp;lt;/tt&amp;gt; command, it would be a good idea to access the testbed and verify that we have the tools which you need. If they are not there, you should let us know what you need.&lt;br /&gt;
&lt;br /&gt;
One thing we may not have made sufficiently clear: if you have compiled your own software (with or without our modules), you will likely need to recompile it for the new operating system.&lt;br /&gt;
&lt;br /&gt;
=== Accessing the testbed ===&lt;br /&gt;
We have provided two new head nodes &amp;lt;tt&amp;gt;helios&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;clymene&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you have logged into beocat, you can access them with ssh. e.g. &amp;lt;tt&amp;gt;ssh helios&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They will be set up much the same way as our previous head nodes were, with access to the module command. The modules we have available under Rocky Linux 9 are available as a searchable list [https://modules.beocat.ksu.edu/rocky9/ here].&lt;br /&gt;
&lt;br /&gt;
To submit jobs to the new operating system, we have provided a new constraint so that you are able to request the OS variant. CentOS 7 hosts have the feature &amp;lt;tt&amp;gt;os_el7&amp;lt;/tt&amp;gt;, while the Rocky Linux 9 hosts provide the feature &amp;lt;tt&amp;gt;os_el9&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You would use the following in a job script to tell the scheduler that you would like the new OS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#SBATCH -C os_el9&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We have an OpenOnDemand version of the testbed available at https://ondemand-dev.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
=== Using old software ===&lt;br /&gt;
Below is a script that will execute a container with all of the public software we provide under CentOS 7 from the head nodes. There may be versions of GPU-related packages missing.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script is a wrapper for our CentOS 7 based container.&lt;br /&gt;
# You would use it something like this:&lt;br /&gt;
# sbatch -C os_el9 /opt/beocat/containers/beocat_centos-7.9.wrapper.sh ./R-hello_world.sh&lt;br /&gt;
&lt;br /&gt;
# Note that you would need to provide an appropriate path for the script to execute&lt;br /&gt;
# under the contained environment (either a full path or a relative path), and the script&lt;br /&gt;
# would need to be executable.&lt;br /&gt;
&lt;br /&gt;
# This is meant to be a stopgap measure for those that may be reliant on older software&lt;br /&gt;
# that we will not or cannot provide under our new operating system.&lt;br /&gt;
&lt;br /&gt;
singularity exec /opt/beocat/containers/beocat_centos-7.9.sif /bin/bash -l &amp;lt;&amp;lt;EOF&lt;br /&gt;
${@}&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you would prefer, you could put the &amp;lt;tt&amp;gt;singularity exec&amp;lt;/tt&amp;gt; lines in your script, with the commands you would like to run between the &amp;lt;tt&amp;gt;&amp;lt;&amp;lt;EOF&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;EOF&amp;lt;/tt&amp;gt; sections.&lt;br /&gt;
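If you embed the &amp;lt;tt&amp;gt;singularity exec&amp;lt;/tt&amp;gt; lines yourself, a minimal job script might look like the following. This is only a sketch: the module name and script name (&amp;lt;tt&amp;gt;R&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;my_analysis.R&amp;lt;/tt&amp;gt;) are placeholders for whatever CentOS 7 software you actually need.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -C os_el9&lt;br /&gt;
&lt;br /&gt;
# Everything between the EOF markers runs inside the CentOS 7 container.&lt;br /&gt;
singularity exec /opt/beocat/containers/beocat_centos-7.9.sif /bin/bash -l &amp;lt;&amp;lt;EOF&lt;br /&gt;
module load R&lt;br /&gt;
Rscript my_analysis.R&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;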
&lt;br /&gt;
There will be no good way to utilize these tools with multi-node jobs, so it would be a good idea to migrate away from the CentOS 7 tools as soon as possible.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=961</id>
		<title>OS Change</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=961"/>
		<updated>2024-02-19T00:38:50Z</updated>

		<summary type="html">&lt;p&gt;Mozes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OS Change ==&lt;br /&gt;
Soon we'll be switching our Operating System from CentOS 7 to Rocky Linux 9. If you are utilizing software that we provide via the &amp;lt;tt&amp;gt;module&amp;lt;/tt&amp;gt; command, it would be a good idea to access the testbed and verify that we have the tools which you need. If they are not there, you should let us know what you need.&lt;br /&gt;
&lt;br /&gt;
One thing we may not have made sufficiently clear: if you have compiled your own software (with or without our modules), you will likely need to recompile it for the new operating system.&lt;br /&gt;
&lt;br /&gt;
=== Accessing the testbed ===&lt;br /&gt;
We have provided two new head nodes &amp;lt;tt&amp;gt;helios&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;clymene&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you have logged into beocat, you can access them with ssh. e.g. &amp;lt;tt&amp;gt;ssh helios&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They will be set up much the same way as our previous head nodes were, with access to the module command. The modules we have available under Rocky Linux 9 are available as a searchable list [https://modules.beocat.ksu.edu/rocky9/ here].&lt;br /&gt;
&lt;br /&gt;
To submit jobs to the new operating system, we have provided a new constraint so that you are able to request the OS variant. CentOS 7 hosts have the feature &amp;lt;tt&amp;gt;os_el7&amp;lt;/tt&amp;gt;, while the Rocky Linux 9 hosts provide the feature &amp;lt;tt&amp;gt;os_el9&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You would use the following in a job script to tell the scheduler that you would like the new OS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#SBATCH -C os_el9&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We have an OpenOnDemand version of the testbed available at https://ondemand-dev.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
=== Using old software ===&lt;br /&gt;
Below is a script that will execute a container with all of the public software we provide under CentOS 7 from the head nodes. There may be versions of GPU-related packages missing.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script is a wrapper for our CentOS 7 based container.&lt;br /&gt;
# You would use it something like this:&lt;br /&gt;
# sbatch -C os_el9 /opt/beocat/containers/beocat_centos-7.9.wrapper.sh ./R-hello_world.sh&lt;br /&gt;
&lt;br /&gt;
# Note that you would need to provide an appropriate path for the script to execute&lt;br /&gt;
# under the contained environment (either a full path or a relative path), and the script&lt;br /&gt;
# would need to be executable.&lt;br /&gt;
&lt;br /&gt;
# This is meant to be a stopgap measure for those that may be reliant on older software&lt;br /&gt;
# that we will not or cannot provide under our new operating system.&lt;br /&gt;
&lt;br /&gt;
singularity exec /opt/beocat/containers/beocat_centos-7.9.sif /bin/bash -l &amp;lt;&amp;lt;EOF&lt;br /&gt;
${@}&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you would prefer, you could put the &amp;lt;tt&amp;gt;singularity exec&amp;lt;/tt&amp;gt; lines in your script, with the commands you would like to run between the &amp;lt;tt&amp;gt;&amp;lt;&amp;lt;EOF&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;EOF&amp;lt;/tt&amp;gt; sections.&lt;br /&gt;
&lt;br /&gt;
There will be no good way to utilize these tools with multi-node jobs, so it would be a good idea to migrate away from the CentOS 7 tools as soon as possible.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=960</id>
		<title>OS Change</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=960"/>
		<updated>2024-02-19T00:36:31Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* OS Change */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OS Change ==&lt;br /&gt;
Soon we'll be switching our Operating System from CentOS 7 to Rocky Linux 9. If you are utilizing software that we provide via the &amp;lt;tt&amp;gt;module&amp;lt;/tt&amp;gt; command, it would be a good idea to access the testbed and verify that we have the tools which you need. If they are not there, you should let us know what you need.&lt;br /&gt;
&lt;br /&gt;
=== Accessing the testbed ===&lt;br /&gt;
We have provided two new head nodes &amp;lt;tt&amp;gt;helios&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;clymene&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you have logged into beocat, you can access them with ssh. e.g. &amp;lt;tt&amp;gt;ssh helios&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They will be set up much the same way as our previous head nodes were, with access to the module command. The modules we have available under Rocky Linux 9 are available as a searchable list [https://modules.beocat.ksu.edu/rocky9/ here].&lt;br /&gt;
&lt;br /&gt;
To submit jobs to the new operating system, we have provided a new constraint so that you are able to request the OS variant. CentOS 7 hosts have the feature &amp;lt;tt&amp;gt;os_el7&amp;lt;/tt&amp;gt;, while the Rocky Linux 9 hosts provide the feature &amp;lt;tt&amp;gt;os_el9&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You would use the following in a job script to tell the scheduler that you would like the new OS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#SBATCH -C os_el9&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We have an OpenOnDemand version of the testbed available at https://ondemand-dev.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
=== Using old software ===&lt;br /&gt;
Below is a script that will execute a container with all of the public software we provide under CentOS 7 from the head nodes. There may be versions of GPU-related packages missing.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script is a wrapper for our CentOS 7 based container.&lt;br /&gt;
# You would use it something like this:&lt;br /&gt;
# sbatch -C os_el9 /opt/beocat/containers/beocat_centos-7.9.wrapper.sh ./R-hello_world.sh&lt;br /&gt;
&lt;br /&gt;
# Note that you would need to provide an appropriate path for the script to execute&lt;br /&gt;
# under the contained environment (either a full path or a relative path), and the script&lt;br /&gt;
# would need to be executable.&lt;br /&gt;
&lt;br /&gt;
# This is meant to be a stopgap measure for those that may be reliant on older software&lt;br /&gt;
# that we will not or cannot provide under our new operating system.&lt;br /&gt;
&lt;br /&gt;
singularity exec /opt/beocat/containers/beocat_centos-7.9.sif /bin/bash -l &amp;lt;&amp;lt;EOF&lt;br /&gt;
${@}&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you would prefer, you could put the &amp;lt;tt&amp;gt;singularity exec&amp;lt;/tt&amp;gt; lines in your script, with the commands you would like to run between the &amp;lt;tt&amp;gt;&amp;lt;&amp;lt;EOF&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;EOF&amp;lt;/tt&amp;gt; sections.&lt;br /&gt;
&lt;br /&gt;
There will be no good way to utilize these tools with multi-node jobs, so it would be a good idea to migrate away from the CentOS 7 tools as soon as possible.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=959</id>
		<title>OS Change</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=959"/>
		<updated>2024-02-11T15:04:28Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* OS Change */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OS Change ==&lt;br /&gt;
Soon we'll be switching our Operating System from CentOS 7 to Rocky Linux 9. If you are utilizing software that we provide via the &amp;lt;tt&amp;gt;module&amp;lt;/tt&amp;gt; command, it would be a good idea to access the testbed and verify that we have the tools which you need. If they are not there, you should let us know what you need.&lt;br /&gt;
&lt;br /&gt;
=== Accessing the testbed ===&lt;br /&gt;
We have provided two new head nodes &amp;lt;tt&amp;gt;helios&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;clymene&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you have logged into beocat, you can access them with ssh. e.g. &amp;lt;tt&amp;gt;ssh helios&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They will be set up much the same way as our previous head nodes were, with access to the module command. The modules we have available under Rocky Linux 9 are available as a searchable list [https://modules.beocat.ksu.edu/rocky9/ here].&lt;br /&gt;
&lt;br /&gt;
To submit jobs to the new operating system, we have provided a new constraint so that you are able to request the OS variant. CentOS 7 hosts have the feature &amp;lt;tt&amp;gt;os_el7&amp;lt;/tt&amp;gt;, while the Rocky Linux 9 hosts provide the feature &amp;lt;tt&amp;gt;os_el9&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You would use the following in a job script to tell the scheduler that you would like the new OS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#SBATCH -C os_el9&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using old software ===&lt;br /&gt;
Below is a script that will execute a container with all of the public software we provide under CentOS 7 from the head nodes. There may be versions of GPU-related packages missing.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script is a wrapper for our CentOS 7 based container.&lt;br /&gt;
# You would use it something like this:&lt;br /&gt;
# sbatch -C os_el9 /opt/beocat/containers/beocat_centos-7.9.wrapper.sh ./R-hello_world.sh&lt;br /&gt;
&lt;br /&gt;
# Note that you would need to provide an appropriate path for the script to execute&lt;br /&gt;
# under the contained environment (either a full path or a relative path), and the script&lt;br /&gt;
# would need to be executable.&lt;br /&gt;
&lt;br /&gt;
# This is meant to be a stopgap measure for those that may be reliant on older software&lt;br /&gt;
# that we will not or cannot provide under our new operating system.&lt;br /&gt;
&lt;br /&gt;
singularity exec /opt/beocat/containers/beocat_centos-7.9.sif /bin/bash -l &amp;lt;&amp;lt;EOF&lt;br /&gt;
${@}&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you would prefer, you could put the &amp;lt;tt&amp;gt;singularity exec&amp;lt;/tt&amp;gt; lines in your script, with the commands you would like to run between the &amp;lt;tt&amp;gt;&amp;lt;&amp;lt;EOF&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;EOF&amp;lt;/tt&amp;gt; sections.&lt;br /&gt;
&lt;br /&gt;
There will be no good way to utilize these tools with multi-node jobs, so it would be a good idea to migrate away from the CentOS 7 tools as soon as possible.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=958</id>
		<title>OS Change</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=958"/>
		<updated>2024-02-11T15:01:47Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Using old software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OS Change ==&lt;br /&gt;
Soon we'll be switching our Operating System from CentOS 7 to Rocky Linux 9.&lt;br /&gt;
&lt;br /&gt;
=== Accessing the testbed ===&lt;br /&gt;
We have provided two new head nodes &amp;lt;tt&amp;gt;helios&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;clymene&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you have logged into beocat, you can access them with ssh. e.g. &amp;lt;tt&amp;gt;ssh helios&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They will be set up much the same way as our previous head nodes were, with access to the module command. The modules we have available under Rocky Linux 9 are available as a searchable list [https://modules.beocat.ksu.edu/rocky9/ here].&lt;br /&gt;
&lt;br /&gt;
To submit jobs to the new operating system, we have provided a new constraint so that you are able to request the OS variant. CentOS 7 hosts have the feature &amp;lt;tt&amp;gt;os_el7&amp;lt;/tt&amp;gt;, while the Rocky Linux 9 hosts provide the feature &amp;lt;tt&amp;gt;os_el9&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You would use the following in a job script to tell the scheduler that you would like the new OS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#SBATCH -C os_el9&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using old software ===&lt;br /&gt;
Below is a script that will execute a container with all of the public software we provide under CentOS 7 from the head nodes. There may be versions of GPU-related packages missing.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script is a wrapper for our CentOS 7 based container.&lt;br /&gt;
# You would use it something like this:&lt;br /&gt;
# sbatch -C os_el9 /opt/beocat/containers/beocat_centos-7.9.wrapper.sh ./R-hello_world.sh&lt;br /&gt;
&lt;br /&gt;
# Note that you would need to provide an appropriate path for the script to execute&lt;br /&gt;
# under the contained environment (either a full path or a relative path), and the script&lt;br /&gt;
# would need to be executable.&lt;br /&gt;
&lt;br /&gt;
# This is meant to be a stopgap measure for those that may be reliant on older software&lt;br /&gt;
# that we will not or cannot provide under our new operating system.&lt;br /&gt;
&lt;br /&gt;
singularity exec /opt/beocat/containers/beocat_centos-7.9.sif /bin/bash -l &amp;lt;&amp;lt;EOF&lt;br /&gt;
${@}&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you would prefer, you could put the &amp;lt;tt&amp;gt;singularity exec&amp;lt;/tt&amp;gt; lines in your script, with the commands you would like to run between the &amp;lt;tt&amp;gt;&amp;lt;&amp;lt;EOF&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;EOF&amp;lt;/tt&amp;gt; sections.&lt;br /&gt;
&lt;br /&gt;
There will be no good way to utilize these tools with multi-node jobs, so it would be a good idea to migrate away from the CentOS 7 tools as soon as possible.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=957</id>
		<title>OS Change</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OS_Change&amp;diff=957"/>
		<updated>2024-02-11T15:00:03Z</updated>

		<summary type="html">&lt;p&gt;Mozes: Created page with &amp;quot;== OS Change == Soon we'll be switching our Operating System from CentOS 7 to Rocky Linux 9.  === Accessing the testbed === We have provided two new head nodes &amp;lt;tt&amp;gt;helios&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;clymene&amp;lt;/tt&amp;gt;.  Once you have logged into beocat, you can access them with ssh. e.g. &amp;lt;tt&amp;gt;ssh helios&amp;lt;/tt&amp;gt;  They will be setup much the same way as our previous head nodes were, with access to the module command. The modules we have available under Rocky Linux 9 are available as a searchable l...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OS Change ==&lt;br /&gt;
Soon we'll be switching our Operating System from CentOS 7 to Rocky Linux 9.&lt;br /&gt;
&lt;br /&gt;
=== Accessing the testbed ===&lt;br /&gt;
We have provided two new head nodes &amp;lt;tt&amp;gt;helios&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;clymene&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Once you have logged into beocat, you can access them with ssh. e.g. &amp;lt;tt&amp;gt;ssh helios&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
They will be set up much the same way as our previous head nodes were, with access to the module command. The modules we have available under Rocky Linux 9 are available as a searchable list [https://modules.beocat.ksu.edu/rocky9/ here].&lt;br /&gt;
&lt;br /&gt;
To submit jobs to the new operating system, we have provided a new constraint so that you are able to request the OS variant. CentOS 7 hosts have the feature &amp;lt;tt&amp;gt;os_el7&amp;lt;/tt&amp;gt;, while the Rocky Linux 9 hosts provide the feature &amp;lt;tt&amp;gt;os_el9&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
You would use the following in a job script to tell the scheduler that you would like the new OS:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#SBATCH -C os_el9&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Using old software ===&lt;br /&gt;
Below is a script that will execute a container with all of the public software we provide under CentOS 7 from the head nodes. There may be versions of GPU-related packages missing.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# This script is a wrapper for our CentOS 7 based container.&lt;br /&gt;
# You would use it something like this:&lt;br /&gt;
# sbatch -C os_el9 /opt/beocat/containers/beocat_centos-7.9.wrapper.sh ./R-hello_world.sh&lt;br /&gt;
&lt;br /&gt;
# Note that you would need to provide an appropriate path for the script to execute&lt;br /&gt;
# under the contained environment (either a full path or a relative path), and the script&lt;br /&gt;
# would need to be executable.&lt;br /&gt;
&lt;br /&gt;
# This is meant to be a stopgap measure for those that may be reliant on older software&lt;br /&gt;
# that we will not or cannot provide under our new operating system.&lt;br /&gt;
&lt;br /&gt;
singularity exec /opt/beocat/containers/beocat_centos-7.9.sif /bin/bash -l &amp;lt;&amp;lt;EOF&lt;br /&gt;
${@}&lt;br /&gt;
EOF&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you would prefer, you could put the &amp;lt;tt&amp;gt;singularity exec&amp;lt;/tt&amp;gt; lines in your script, with the commands you would like to run between the &amp;lt;tt&amp;gt;&amp;lt;&amp;lt;EOF&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;EOF&amp;lt;/tt&amp;gt; sections.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=956</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=956"/>
		<updated>2024-01-26T23:56:44Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Automating Duo Method */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo Enabled, you will be asked to approve ''each'' connection through Duo's push system to your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device is not currently able to be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You would need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value can be a currently valid Duo passcode, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will send the prompt to your smart device; &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have Duo call your phone number for approval.&lt;br /&gt;
&lt;br /&gt;
===== OpenSSH =====&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you add the following to your ~/.ssh/config file, the environment variable will be sent to Beocat upon connection whenever it is set:&lt;br /&gt;
 Host headnode.beocat.ksu.edu&lt;br /&gt;
     HostName headnode.beocat.ksu.edu&lt;br /&gt;
     User YOUR_EID_GOES_HERE&lt;br /&gt;
     SendEnv DUO_PASSCODE&lt;br /&gt;
&lt;br /&gt;
From there you would simply do the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
export DUO_PASSCODE=push&lt;br /&gt;
ssh headnode.beocat.ksu.edu&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== PuTTY =====&lt;br /&gt;
In PuTTY, to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under Environment variables, enter '''&amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;''' beside ''Variable'' and '''&amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;''' beside ''Value''. Click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; and save so PuTTY remembers this change.&lt;br /&gt;
&lt;br /&gt;
===== MobaXTerm =====&lt;br /&gt;
There doesn't seem to be a way to send an environment variable in MobaXTerm, so you won't be able to set DUO_PASSCODE to an actual valid temporary key. To get MobaXterm to push automatically, you can edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Common issues ====&lt;br /&gt;
; Duo Pushes sometimes don't show up in a timely manner. &lt;br /&gt;
: If you open the Duo MFA application on your smart device when you're expecting an authentication challenge, the prompts seem to show up faster.&lt;br /&gt;
; MobaXTerm has excessive prompts for managing files.&lt;br /&gt;
: MobaXTerm has a sidebar browser for managing your files. Unfortunately, that sidebar browser initiates another SSH connection for every file transfer, which triggers a Duo push that you need to approve. MobaXTerm's dedicated SFTP Session doesn't have this issue: it initiates one connection, keeps it open, and re-uses it as needed, so you will have far fewer Duo approvals to respond to. If you choose to use the dedicated SFTP Session, you might consider disabling the sidebar file browser: &amp;quot;Advanced SSH settings&amp;quot; -&amp;gt; &amp;quot;SSH-browser type&amp;quot; -&amp;gt; &amp;quot;None&amp;quot;&lt;br /&gt;
; WinSCP has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, WinSCP will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable [https://winscp.net/eng/docs/ui_pref_resume reconnections during idle periods] if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, FileZilla will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable timeouts and/or connection retries under the &amp;lt;tt&amp;gt;Edit -&amp;gt; Settings -&amp;gt; Connection&amp;lt;/tt&amp;gt; menu if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has excessive prompts for managing files.&lt;br /&gt;
: Filezilla opens one connection for browsing the system. Transferring files opens 1-4 additional connections when the transfers start. Once they finish, those connections disconnect. If you start additional transfers, new connections will be opened. Every one of those connections must be approved through Duo MFA on your smart device. You can adjust the number of connections that FileZilla opens for transfers if you like. &amp;lt;tt&amp;gt;File -&amp;gt; Site Manager -&amp;gt; (choose the site you're changing) -&amp;gt; Transfer Settings -&amp;gt; Limit number of simultaneous connections&amp;lt;/tt&amp;gt;.&lt;br /&gt;
: Another option is to disable processing of the transfer queue, add everything you want to transfer to it, and then re-enable the queue. FileZilla will then at least re-use its connections until the queue is empty.&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Do Beocat jobs have a maximum Time Limit ==&lt;br /&gt;
Yes. The scheduler will reject jobs requesting more than 28 days. On the other side, we reserve the right to hold a maintenance period every 14 days. Unless it is an emergency, we will give at least 2 weeks' notice before these maintenance periods actually occur. Jobs of 14 days or less that have already started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
With that being said, there is no guarantee that any physical piece of hardware and the software that runs on it will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we always recommend writing your jobs so that they can be resumed if they are interrupted.&lt;br /&gt;
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
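As a sketch of what &amp;quot;resumable&amp;quot; can mean, the hypothetical loop below records its progress in a checkpoint file after each step, so an interrupted job picks up where it left off when restarted:&lt;br /&gt;

```shell
#!/bin/bash
# All file names here are hypothetical; checkpoint.txt records the last
# completed step so a restarted job can skip work it already finished.
CKPT=checkpoint.txt
START=1
[ -f "$CKPT" ] && START=$(( $(cat "$CKPT") + 1 ))

i=$START
while [ "$i" -le 5 ]; do
    echo "working on step $i"    # stand-in for the real computation
    echo "$i" > "$CKPT"          # record progress after each completed step
    i=$((i + 1))
done
```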
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || nfs on top of ZFS || Faster than /homes or /bulk, built entirely with NVMe disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || XFS || Good for I/O-intensive jobs. Unique per job, culled when the job finishes.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory&lt;br /&gt;
is your home directory, and it is fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled onto these &amp;quot;owned&amp;quot; machines by users outside the owning group. If a &amp;quot;killable&amp;quot; job is running on an owned machine and the owner submits a job, the &amp;quot;killable&amp;quot; job is returned to the queue (killed off, as it were) and restarted at some future point. The job will still complete eventually, and if it uses a checkpointing algorithm it may even finish faster. The trade-off is that some applications need a significant amount of runtime and cannot resume from partial output, so a &amp;quot;killable&amp;quot; job may be restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes that the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking the job killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state so a later run can resume) itself, marking it killable could cause it to take longer to complete.&lt;br /&gt;
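Concretely, the choice is a single flag at submission time (longjob.sh is a hypothetical script name):&lt;br /&gt;

```shell
# Opt in: the job may run on owned nodes, but can be preempted and re-queued
# sbatch --gres=killable:1 longjob.sh

# Opt out: never mark the job killable, even if it is under 168 hours
# sbatch --gres=killable:0 longjob.sh

# The request can also live in the script itself as an #SBATCH directive:
cat > longjob.sh <<'EOF'
#!/bin/bash
#SBATCH --gres=killable:0
echo "long non-checkpointing computation"
EOF
bash longjob.sh   # runs as plain bash here; on Beocat you would sbatch it
```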
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts should begin with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; those jobs fail and we have to clean them up manually, so we enforce this rule. The warning is there to inform you that the job script should contain such a line, in most cases #!/bin/tcsh or #!/bin/bash, to indicate which program should be used to run the script. When the line is missing from a script, your default shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the warning above, this error means you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it does not point to an executable file, so the script cannot run. Most likely you wanted #!/bin/bash instead of whatever is there.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, see the documentation [[SlurmBasics|here]] for how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script name if you want your job to run for 10 hours.&lt;br /&gt;
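A minimal sketch of a job script requesting 10 hours and extra memory (the script name and the exact values are illustrative):&lt;br /&gt;

```shell
# Create a minimal job script; #SBATCH lines are read by Slurm but are
# plain comments to bash itself.
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH --time=0-10:00:00     # 10 hours instead of the 1-hour default
#SBATCH --mem-per-cpu=4G      # 4 GB per core instead of the 1 GB default
echo "job body runs here"
EOF

# sbatch myjob.sh             # submit on Beocat (shown as a comment here)
bash myjob.sh                 # the script is also valid bash, so it runs locally
```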
&lt;br /&gt;
== Help my error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to manually intervene. This process prevents other users from using the node(s) you are currently on, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would like to suppress this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Help! when I use sbatch I get an error about line breaks ==&lt;br /&gt;
Beocat is a Linux system. Operating systems use certain characters to indicate line breaks in their files. Linux and operating systems like it use '\n' as their line-break character; Windows uses '\r\n'.&lt;br /&gt;
&lt;br /&gt;
If you're getting an error that looks like this:&lt;br /&gt;
 sbatch: error: Batch script contains DOS line breaks (\r\n)&lt;br /&gt;
 sbatch: error: instead of expected UNIX line breaks (\n).&lt;br /&gt;
&lt;br /&gt;
It means that your script is using Windows line endings. You can convert it with the &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; command&lt;br /&gt;
 dos2unix myscript.sh&lt;br /&gt;
&lt;br /&gt;
It would be beneficial to configure your editor to save files with UNIX line breaks in the future.&lt;br /&gt;
* Visual Studio Code -- &amp;quot;Text Editor&amp;quot; &amp;gt; &amp;quot;Files&amp;quot; &amp;gt; &amp;quot;Eol&amp;quot;&lt;br /&gt;
* Notepad++ -- &amp;quot;Edit&amp;quot; &amp;gt; &amp;quot;EOL Conversion&amp;quot;&lt;br /&gt;
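If &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; is unavailable (on your local machine, for instance), you can reproduce and fix the problem with standard tools (demo.sh is a hypothetical script):&lt;br /&gt;

```shell
# Write a script with Windows (CRLF) line endings to demonstrate the problem
printf '#!/bin/bash\r\necho hello\r\n' > demo.sh

# cat -A marks line ends, so CRLF endings show up as ^M$
cat -A demo.sh

# dos2unix demo.sh            # the simple fix on Beocat
sed -i 's/\r$//' demo.sh      # an equivalent fix using GNU sed

cat -A demo.sh                # now each line ends with a plain $
```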
&lt;br /&gt;
== Help! when logging into OnDemand I get a '400 Bad request' message ==&lt;br /&gt;
Unfortunately, there are some known issues with OnDemand and how it handles some of the complexities behind the scenes. Browser cookies can occasionally grow too large, causing these messages upon login.&lt;br /&gt;
&lt;br /&gt;
The only workaround is to clear your browser cookies (although you can limit it to clearing just the ksu.edu ones).&lt;br /&gt;
&lt;br /&gt;
Details for specific browsers are below&lt;br /&gt;
&lt;br /&gt;
* [https://support.mozilla.org/en-US/kb/clear-cookies-and-site-data-firefox Firefox]&lt;br /&gt;
* [https://support.microsoft.com/en-us/microsoft-edge/delete-cookies-in-microsoft-edge-63947406-40ac-c3b8-57b9-2a946a29ae09 Edge]&lt;br /&gt;
* [https://support.google.com/chrome/answer/95647?sjid=1537101898131489753-NA#zippy=%2Cdelete-cookies-from-a-site Chrome]&lt;br /&gt;
* [https://support.apple.com/guide/safari/manage-cookies-sfri11471/mac Safari]&lt;br /&gt;
* If you are using some other browser, we recommend searching Google for &amp;lt;tt&amp;gt;$browsername clear site cookies&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed using our [[Group Management]] application.&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the shared directory. If you also want them to be able to write or modify files in that directory, change the ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'. As with other permissions, the individuals will need access through every level of the directory hierarchy. [[LinuxBasics#Access_Control_Lists|It may be best to review our more in-depth topic on Access Control Lists.]]&lt;br /&gt;
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to call. For example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=955</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=955"/>
		<updated>2023-10-31T20:56:00Z</updated>

		<summary type="html">&lt;p&gt;Mozes: web chat url no longer worked&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and their collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people, is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of CentOS Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different than running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
==== Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] ====&lt;br /&gt;
==== Read about  [[Installed software]] and languages ====&lt;br /&gt;
==== Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] ====&lt;br /&gt;
==== Run Interactive Jobs! [[OpenOnDemand]] ====&lt;br /&gt;
==== [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] ====&lt;br /&gt;
&lt;br /&gt;
==== Big Data course on Beocat! [[BigDataOnBeocat]] ====&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which will use the information in that job script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This will ensure your support request gets entered into our tracker and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details of your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [https://libera.chat/guides/connect Libera chat servers] in the channel #beocat. This is ''especially'' useful if you have a quick question; you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' in your message, and it should grab our attention. [https://web.libera.chat/#beocat Available from a web browser here.]&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session as mentioned in our calendar down below. Alternatively, we can often schedule a time to meet with you individually. You just need to send us an e-mail and provide us with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H4&amp;gt;&lt;br /&gt;
Again, when you email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu] please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/H4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributions to Beocat will prioritize your access to Beocat. In general, users contribute nodes to Beocat (aka the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority for the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will automatically halt and reschedule those jobs immediately to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources can allow users to run jobs if they are not able to get enough&lt;br /&gt;
access on Beocat, but they are especially useful for when we don't have the needed&lt;br /&gt;
resources on Beocat like access to 4 TB nodes on Bridges2, or more 64-bit&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[ACCESS|here]] to see what resources &lt;br /&gt;
are available and to find directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems: it runs OSG jobs from outside when our users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]]&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]]&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=954</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=954"/>
		<updated>2023-10-22T14:05:36Z</updated>

		<summary type="html">&lt;p&gt;Mozes: twitter widget broken&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and their collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people, is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of CentOS Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different than running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
==== Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] ====&lt;br /&gt;
==== Read about  [[Installed software]] and languages ====&lt;br /&gt;
==== Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] ====&lt;br /&gt;
==== Run Interactive Jobs! [[OpenOnDemand]] ====&lt;br /&gt;
==== [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] ====&lt;br /&gt;
&lt;br /&gt;
==== Big Data course on Beocat! [[BigDataOnBeocat]] ====&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which uses the information in that script to allocate the resources your job needs and then start your code. Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers. The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
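&lt;br /&gt;
As a minimal sketch (the resource values and program path here are placeholders, not recommendations), a job script and its submission look like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=1:00:00       # requested runtime&lt;br /&gt;
#SBATCH --mem-per-cpu=1G     # requested memory per core&lt;br /&gt;
&lt;br /&gt;
~/bin/my_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Saved as, say, &amp;lt;tt&amp;gt;myjob.sh&amp;lt;/tt&amp;gt;, it is submitted with &amp;lt;tt&amp;gt;sbatch myjob.sh&amp;lt;/tt&amp;gt;; see [[SlurmBasics]] for the full details.&lt;br /&gt;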
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details of your problem (e.g. job IDs, commands run, working directory, program versions). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [https://libera.chat/guides/connect Libera chat servers] in the channel #beocat. This is useful ''especially'' if you have a quick question; you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' in your message, and it should grab our attention. [https://kiwiirc.com/nextclient/irc.libera.chat/#beocat Available from a web browser here.]&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session, listed in our calendar below. Alternatively, we can often schedule a time to meet with you individually; just send us an e-mail with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H4&amp;gt;&lt;br /&gt;
Again, when you email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu] please give us the job ID number, the path and script name for the job, and a full description of the problem. It may also be useful to include the output of &amp;lt;tt&amp;gt;module list&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;/H4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing to Beocat gives you priority access. In general, users contribute nodes to Beocat (aka the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority for the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will automatically halt and reschedule those jobs immediately to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources can allow users to run jobs if they are not able to get enough&lt;br /&gt;
access on Beocat, but they are especially useful for when we don't have the needed&lt;br /&gt;
resources on Beocat like access to 4 TB nodes on Bridges2, or more 64-bit&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[ACCESS|here]] to see what resources &lt;br /&gt;
are available and for directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems that runs outside OSG jobs when our users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=953</id>
		<title>Globus</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=953"/>
		<updated>2023-10-19T16:49:31Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Transferring Data using Globus */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Transferring Data using Globus ==&lt;br /&gt;
&lt;br /&gt;
[https://www.globus.org/ Globus] is a high-speed data transfer service. It is primarily used to transfer data between research institutions, but can also be used to transfer data between Beocat and a laptop or desktop. We suggest using Globus over other file transfer options if you are transferring large data sets. Globus also allows you to share data with those who do not have Beocat accounts.&lt;br /&gt;
&lt;br /&gt;
'''Update''' The on-campus DTN has been shut down due to security issues. Please use the off-campus (FIONA) instructions; you can find the endpoint by searching for &amp;quot;Beocat filesystem&amp;quot;. Also, Globus has updated their web interface, so the video is out-of-date, but the basic process is unchanged.&lt;br /&gt;
&lt;br /&gt;
== Video Demonstration ==&lt;br /&gt;
Rather than give dozens of screenshots, here is a video demonstrating how to use Globus to transfer files to and from Beocat&lt;br /&gt;
{{#widget:YouTube|id=D0X7x7B_wQs|width=800|height=600}}&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=952</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=952"/>
		<updated>2023-10-12T00:05:27Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Help! when I use sbatch I get an error about line breaks */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo Enabled, you will be asked to approve ''each'' connection through Duo's push system to your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device is not currently able to be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You would need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value can be a currently valid passcode from Duo, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will push the prompt to your smart device. &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have Duo call your phone number to approve.&lt;br /&gt;
&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In PuTTY to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under Environment variables, enter '''&amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;''' beside ''Variable'' and '''&amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;''' beside ''Value''. Click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; to save this change for PuTTY to remember this change.&lt;br /&gt;
&lt;br /&gt;
There doesn't seem to be a way to send an environment variable in MobaXTerm, so you won't be able to set DUO_PASSCODE to an actual valid temporary key. To get MobaXterm to push automatically, you can edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Common issues ====&lt;br /&gt;
; Duo Pushes sometimes don't show up in a timely manner. &lt;br /&gt;
: If you open the Duo MFA application on your smart device when you're expecting an authentication challenge, the prompts seem to show up faster.&lt;br /&gt;
; MobaXTerm has excessive prompts for managing files.&lt;br /&gt;
: MobaXTerm has a sidebar browser for managing your files. Unfortunately, that sidebar browser initiates another SSH connection for every file transfer, which triggers a Duo push that you need to approve. MobaXTerm's dedicated SFTP Session doesn't have this issue: it initiates a connection, keeps it open, and re-uses it as needed, so you will have far fewer Duo approvals to respond to. If you choose to use the dedicated SFTP Session, you might consider disabling the sidebar file browser under &amp;quot;Advanced SSH settings&amp;quot; -&amp;gt; &amp;quot;SSH-browser type&amp;quot; -&amp;gt; &amp;quot;None&amp;quot;&lt;br /&gt;
; WinSCP has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, WinSCP will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable [https://winscp.net/eng/docs/ui_pref_resume reconnections during idle periods] if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, FileZilla will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable timeouts and/or connection retries under the &amp;lt;tt&amp;gt;Edit -&amp;gt; Settings -&amp;gt; Connection&amp;lt;/tt&amp;gt; menu if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has excessive prompts for managing files.&lt;br /&gt;
: Filezilla opens one connection for browsing the system. Transferring files opens 1-4 additional connections when the transfers start. Once they finish, those connections disconnect. If you start additional transfers, new connections will be opened. Every one of those connections must be approved through Duo MFA on your smart device. You can adjust the number of connections that FileZilla opens for transfers if you like. &amp;lt;tt&amp;gt;File -&amp;gt; Site Manager -&amp;gt; (choose the site you're changing) -&amp;gt; Transfer Settings -&amp;gt; Limit number of simultaneous connections&amp;lt;/tt&amp;gt;.&lt;br /&gt;
: Another option is to disable processing the transfer queue, add the things to it you want to transfer and then re-enable the transfer queue. Then at least it will re-use the connections until the queue is empty.&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Do Beocat jobs have a maximum Time Limit ==&lt;br /&gt;
Yes: the scheduler will reject jobs longer than 28 days. The other side of this is that we reserve the right to a maintenance period every 14 days. Unless it is an emergency, we will give at least 2 weeks' notice before these maintenance periods actually occur. Jobs of 14 days or less that have started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
With that being said, there is no guarantee that any physical piece of hardware and the software that runs on it will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we always recommend that you write your jobs so that they can be resumed if they get interrupted.&lt;br /&gt;
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || nfs on top of ZFS || Faster than /homes or /bulk, built with all NVME disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || XFS || Good for I/O intensive jobs. Unique per job, culled when the job finishes.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry, your default working directory&lt;br /&gt;
is your homedir and it will be fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines are sitting idle because the owners have no need for it at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled onto these &amp;quot;owned&amp;quot; machines by users outside the group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of that machine comes along and submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it makes use of a checkpointing algorithm it may even complete faster. The trade-off in marking a job &amp;quot;killable&amp;quot; is that some applications need a significant amount of runtime and cannot resume from partial output, meaning the job may get restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
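&lt;br /&gt;
For example (the job script name is a placeholder):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# never mark this job killable, even if it is short&lt;br /&gt;
sbatch --gres=killable:0 myjob.sh&lt;br /&gt;
&lt;br /&gt;
# explicitly mark this job killable so it can use owned nodes&lt;br /&gt;
sbatch --gres=killable:1 myjob.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;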
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes that the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking it killable, as it will probably start sooner. If your job is long-running and doesn't checkpoint itself (save its state so it can resume a previous session), marking it killable could cause it to take longer to complete.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines, so we enforce this rule; when it happens, the job fails and we have to clean it up manually. The warning message is there to inform you that the job script should have a line, in most cases #!/bin/tcsh or #!/bin/bash, indicating which program should be used to run the script. When the line is missing, the script is executed with your default shell (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it doesn't point to an executable file, so the script will not be able to run. Most likely you wanted #!/bin/bash instead of whatever is there.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more time, or need more than 1 GB of memory per core, see the documentation [[SlurmBasics|here]] for how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
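&lt;br /&gt;
For example (the script name and memory value are placeholders):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# request 10 hours of runtime and 4 GB of memory per core&lt;br /&gt;
sbatch --time=0-10:00:00 --mem-per-cpu=4G myjob.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;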
&lt;br /&gt;
== Help! My error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to manually intervene. This process prevents other users from using the node(s) you are currently using, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would like to not see this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Help! when I use sbatch I get an error about line breaks ==&lt;br /&gt;
Beocat is a Linux system. Operating Systems use certain patterns of characters to indicate line breaks in their files. Linux and operating systems like it use '\n' as their line break character. Windows uses '\r\n' for their line breaks.&lt;br /&gt;
&lt;br /&gt;
If you're getting an error that looks like this:&lt;br /&gt;
 sbatch: error: Batch script contains DOS line breaks (\r\n)&lt;br /&gt;
 sbatch: error: instead of expected UNIX line breaks (\n).&lt;br /&gt;
&lt;br /&gt;
It means that your script is using the windows line endings. You can convert it with the &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; command&lt;br /&gt;
 dos2unix myscript.sh&lt;br /&gt;
&lt;br /&gt;
It would be beneficial to configure your editor to save files with UNIX line breaks in the future.&lt;br /&gt;
* Visual Studio Code -- &amp;quot;Text Editor&amp;quot; &amp;gt; &amp;quot;Files&amp;quot; &amp;gt; &amp;quot;Eol&amp;quot;&lt;br /&gt;
* Notepad++ -- &amp;quot;Edit&amp;quot; &amp;gt; &amp;quot;EOL Conversion&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Help! when logging into OnDemand I get a '400 Bad request' message ==&lt;br /&gt;
Unfortunately, there are some known issues with OnDemand and how it handles some of the complexities behind the scenes. This involves browser cookies that (occasionally) get too large and make it so you get these messages upon login.&lt;br /&gt;
&lt;br /&gt;
The only workaround is to clear your browser cookies (although you can limit it to clearing only the ksu.edu ones).&lt;br /&gt;
&lt;br /&gt;
Details for specific browsers are below:&lt;br /&gt;
&lt;br /&gt;
* [https://support.mozilla.org/en-US/kb/clear-cookies-and-site-data-firefox Firefox]&lt;br /&gt;
* [https://support.microsoft.com/en-us/microsoft-edge/delete-cookies-in-microsoft-edge-63947406-40ac-c3b8-57b9-2a946a29ae09 Edge]&lt;br /&gt;
* [https://support.google.com/chrome/answer/95647?sjid=1537101898131489753-NA#zippy=%2Cdelete-cookies-from-a-site Chrome]&lt;br /&gt;
* [https://support.apple.com/guide/safari/manage-cookies-sfri11471/mac Safari]&lt;br /&gt;
* If you are using some other browser, we recommend searching Google for &amp;lt;tt&amp;gt;$browsername clear site cookies&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed using our [[Group Management]] application.&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the shared directory. If you also want them to be able to write or modify files in that directory, then change the ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'. As with other permissions, the individuals will need access through every level of the directory hierarchy. [[LinuxBasics#Access_Control_Lists|It may be best to review our more in-depth topic on Access Control Lists.]]&lt;br /&gt;
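&lt;br /&gt;
Putting the steps together, a sketch using hypothetical names (a project group 'my_project' and a directory 'share' in the owner's home directory):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/share&lt;br /&gt;
# new files and directories will default to group read access&lt;br /&gt;
setfacl -d -m g:my_project:rX -R ~/share&lt;br /&gt;
# existing files and directories get group read access&lt;br /&gt;
setfacl -m g:my_project:rX -R ~/share&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;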
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to use; for example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=951</id>
		<title>LinuxBasics</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=951"/>
		<updated>2023-09-14T14:41:48Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Example 1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Disclaimer:''' This is a ''very'' large topic, and much too broad to be covered on a single support page. There are many other sites (yes, entire sites) which cover the topic in more detail. We'll link to some of them below. This page is meant to be just the essentials.&lt;br /&gt;
&lt;br /&gt;
== Logging in for the first time ==&lt;br /&gt;
To login to Beocat, you first need an &amp;quot;SSH Client&amp;quot;. [[wikipedia:Secure_Shell|SSH]] (short for &amp;quot;secure shell&amp;quot;) is a protocol that allows secure communication between two computers. We recommend the following.&lt;br /&gt;
* Windows&lt;br /&gt;
** [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY] is by far the most common SSH client, both for Beocat and in the world.&lt;br /&gt;
** [http://mobaxterm.mobatek.net/ MobaXterm] is a fairly new client with some nice features, such as being able to SCP/SFTP (see below), and running X (which isn't terribly useful on Beocat, but might be if you connect to other Linux hosts).&lt;br /&gt;
** [http://www.cygwin.com/ Cygwin] is for those that would rather be running Linux but are stuck on Windows. It's purely a text interface.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** OS X has a built-in SSH client in an application called &amp;quot;Terminal&amp;quot;. It's not great, but it will work for most Beocat users.&lt;br /&gt;
** [http://www.iterm2.com/#/section/home iTerm2] is the terminal application we prefer.&lt;br /&gt;
* Others&lt;br /&gt;
** There are [[wikipedia:Comparison_of_SSH_clients|many SSH clients]] for many different platforms available. While we don't have experience with many of these, any should be sufficient for access to Beocat.&lt;br /&gt;
&lt;br /&gt;
You'll need to connect your client (via the SSH protocol, if your client allows multiple protocols) to headnode.beocat.ksu.edu.&lt;br /&gt;
&lt;br /&gt;
For command-line tools, the command to connect is&lt;br /&gt;
 ssh ''username''@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
Your username is your [http://eid.ksu.edu K-State eID] name and the password is your eID password.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' When you type your password, nothing shows up on the screen, not even asterisks.&lt;br /&gt;
&lt;br /&gt;
You'll know you are successfully logged in when you see a prompt that says&lt;br /&gt;
 [''username''@''machinename'' ~]$&lt;br /&gt;
where ''machinename'' is the name of the machine you've logged into (currently either 'eos' or 'selene') and ''username'' is your eID username.&lt;br /&gt;
&lt;br /&gt;
== Transferring files (SCP or SFTP) ==&lt;br /&gt;
Usually, one of the first things people want to do is to transfer files into or out of Beocat. To do so, you need to use [[wikipedia:Secure_copy|SCP]] (secure copy) or [[wikipedia:SSH_File_Transfer_Protocol|SFTP]] (SSH FTP or Secure FTP). Again, there are multiple programs that do this.&lt;br /&gt;
* Windows&lt;br /&gt;
** Putty (see above) has PSCP and PSFTP programs (both are included if you run the installer). It is a command-line interface (CLI) rather than a graphical user interface (GUI).&lt;br /&gt;
** MobaXterm (see above) has a built-in GUI SFTP client that automatically changes the directories as you change them in your SSH session.&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] (client) has an easy-to-use GUI. Be sure to use 'SFTP' mode rather than 'FTP' mode.&lt;br /&gt;
** [http://winscp.net/eng/index.php WinSCP] is another easy-to-use GUI.&lt;br /&gt;
** Cygwin (see above) has CLI scp and sftp programs.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] is also available for OS-X.&lt;br /&gt;
** Within terminal or iTerm, you can use the 'scp' or 'sftp' programs.&lt;br /&gt;
* Linux&lt;br /&gt;
** FileZilla also has a GUI linux version, in addition to the CLI tools.&lt;br /&gt;
&lt;br /&gt;
=== Using a Command-Line Interface (CLI) ===&lt;br /&gt;
You can safely ignore this section if you're using a graphical interface (GUI). We highly recommend using a GUI when first learning how to use Beocat.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;First test case&amp;lt;/u&amp;gt;: transfer a file called myfile.txt in your current folder to your home directory on Beocat. For these examples, I use bold text to show what you type and plain text to show Beocat's response&lt;br /&gt;
&lt;br /&gt;
Using SCP:&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Note the colon at the end of the 'scp' line.&lt;br /&gt;
&lt;br /&gt;
Using SFTP&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/kylehutson/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
SFTP is interactive, so this is a two-step process. First, you connect to Beocat, then you transfer the file. As long as the system gives the &amp;lt;code&amp;gt;sftp&amp;gt; &amp;lt;/code&amp;gt; prompt, you are in the sftp program, and you will remain there until you type 'exit'.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Second test case:&amp;lt;/u&amp;gt; transfer a file called myfile.txt in your current folder to a directory named 'mydirectory' under your home directory on Beocat.&lt;br /&gt;
&lt;br /&gt;
Here we run into one of the problems with scp - there is no easy way of creating 'mydirectory' if it doesn't already exist. In that case, you must log in via ssh (as shown above) and create the directory using the 'mkdir' command (see Basic Linux Commands below).&lt;br /&gt;
&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 &lt;br /&gt;
An alternative version: if the colon is immediately followed by a slash, the path is taken from the root of the filesystem rather than relative to your home directory. So, given that your home directory on Beocat is /homes/''username'', we could instead type&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:/homes/''username''/mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 sftp ''username''@headnode.beocat.ksu.edu&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''mkdir mydirectory'''&lt;br /&gt;
 [Note, if this directory already exists, you will get the response &amp;quot;Couldn't create directory: Failure&amp;quot;]&lt;br /&gt;
 sftp&amp;gt; '''cd mydirectory'''&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/''username''/mydirectory/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''quit'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Third test case:&amp;lt;/u&amp;gt; copy myfile.txt from your home directory on Beocat to your current folder.&lt;br /&gt;
&lt;br /&gt;
Using scp:&lt;br /&gt;
 scp ''username''@headnode.beocat.ksu.edu:myfile.txt .&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''get myfile.txt'''&lt;br /&gt;
 Fetching /homes/''username''/myfile.txt to myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
== Basic Linux Commands ==&lt;br /&gt;
Again, this guide is brief, covering mostly directory navigation and basic file commands. [http://www.ee.surrey.ac.uk/Teaching/Unix/ Here] is a pretty decent tutorial if you want to dig deeper. If you want more, entire books have been written on the subject.&lt;br /&gt;
&lt;br /&gt;
=== The Lingo ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!''Term''&lt;br /&gt;
!''Definition''&lt;br /&gt;
|-&lt;br /&gt;
|Directory&lt;br /&gt;
|A &amp;quot;Folder&amp;quot; in Windows or OS-X terms. A location where files or other directories are stored. The current directory is sometimes represented as `.` and the parent directory can be referenced as `..`&lt;br /&gt;
|-&lt;br /&gt;
|Shell&lt;br /&gt;
|The interface or environment under which you can run commands. There is a section below on shells&lt;br /&gt;
|-&lt;br /&gt;
|SSH&lt;br /&gt;
|Secure Shell. A protocol that encrypts data and can give access to another system, usually by a username and password&lt;br /&gt;
|-&lt;br /&gt;
|SCP&lt;br /&gt;
|Secure Copy. Copying to or from a remote system using part of SSH&lt;br /&gt;
|-&lt;br /&gt;
|path&lt;br /&gt;
|The list of directories which are searched when you type the name of a program. There is a section below on this&lt;br /&gt;
|-&lt;br /&gt;
|ownership&lt;br /&gt;
|Every file and directory has a user and a group attached to it, called its owners. These affect permissions.&lt;br /&gt;
|-&lt;br /&gt;
|permissions&lt;br /&gt;
|The ability to read, write, and/or execute a file. Permissions are based on ownership&lt;br /&gt;
|-&lt;br /&gt;
|switches&lt;br /&gt;
|Modifiers to a command-line program, usually in the form of -(letter) or --(word). Several examples are given below, such as the '-a' on the 'ls' command&lt;br /&gt;
|-&lt;br /&gt;
|pipes and redirects&lt;br /&gt;
|Changes the input (often called 'stdin') and/or output (often called stdout) to a program or a file&lt;br /&gt;
|}&lt;br /&gt;
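To make the last two rows concrete, here is a small sketch of a redirect and a pipe (the file name is made up for the example):

```shell
cd "$(mktemp -d)"                    # scratch directory for the example
printf 'beta\nalpha\n' > names.txt   # '>' redirects stdout into a file
sort names.txt | head -n 1           # '|' pipes sort's output into head
                                     # prints: alpha
```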
&lt;br /&gt;
=== Linux Command Line Cheat Sheet ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+File System Navigation&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|pwd&lt;br /&gt;
|&amp;quot;Print working directory&amp;quot;, Where am I now?&lt;br /&gt;
|&amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;/homes/mozes&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls&lt;br /&gt;
|Lists files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -lh&lt;br /&gt;
|Lists files and folders with permissions, size, and ownership&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -lh ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;-rw-r--r--  1 mozes    mozes_users   1    Jul 13  2011 NewFile&lt;br /&gt;
drwxr-xr-x  9 mozes    mozes_users   9.0K Apr 12  2010 NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -a&lt;br /&gt;
|Lists all files and folders, including hidden ones (names starting with '.')&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -a ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;. .. .bashrc .bash_profile .tcshrc NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cd&lt;br /&gt;
|Changes directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ..&lt;br /&gt;
|Changes to parent directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ..&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd -&lt;br /&gt;
|Changes to the previous directory you were in&lt;br /&gt;
|&amp;lt;code&amp;gt;cd -&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ~&lt;br /&gt;
|Changes to your home directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ~&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Working with files&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|file&lt;br /&gt;
|Identifies the type of object a file is&lt;br /&gt;
|&amp;lt;code&amp;gt;file NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile: a /usr/bin/python script, ASCII text executable&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cat&lt;br /&gt;
|Prints the contents of one or more files&lt;br /&gt;
|&amp;lt;code&amp;gt;cat NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;This is line one&lt;br /&gt;
This is line two&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp&lt;br /&gt;
|copy a file&lt;br /&gt;
|&amp;lt;code&amp;gt;cp OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp -i&lt;br /&gt;
|copy a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp -r&lt;br /&gt;
|copy a directory, including contents&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -r OldFolder NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv&lt;br /&gt;
|move, or rename, a file&lt;br /&gt;
|&amp;lt;code&amp;gt;mv OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv -i&lt;br /&gt;
|move, or rename, a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;mv -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm&lt;br /&gt;
|remove a file&lt;br /&gt;
|&amp;lt;code&amp;gt;rm NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rm -i&lt;br /&gt;
|remove a file, ask to be sure (useful with -r)&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -i NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;remove NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm -r&lt;br /&gt;
|remove a directory and its contents&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -r NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mkdir&lt;br /&gt;
|creates a directory&lt;br /&gt;
|&amp;lt;code&amp;gt;mkdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rmdir&lt;br /&gt;
|removes an empty directory&lt;br /&gt;
|&amp;lt;code&amp;gt;rmdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|touch&lt;br /&gt;
|creates an empty file&lt;br /&gt;
|&amp;lt;code&amp;gt;touch TempFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Finding files and directories with [http://linux.die.net/man/1/find find]&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt;&lt;br /&gt;
| finds all files and folders within &amp;lt;directory&amp;gt;&lt;br /&gt;
| find ~/&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '&amp;lt;filename&amp;gt;'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that match &amp;lt;filename&amp;gt;&lt;br /&gt;
| find ~/ -iname 'hello.qsub'&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '*&amp;lt;partialmatch&amp;gt;*'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that partially match &amp;lt;partialmatch&amp;gt;&lt;br /&gt;
| find ~/ -iname '*.qsub*'&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Other useful commands include &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt; followed by one of the command names above will give you the manual page for that command, full of many other useful options. &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt; will give you an overview of the processes currently being run on the host you are connected to. &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt; allows you to page through files and see their contents using &amp;lt;PgUp&amp;gt; and &amp;lt;PgDn&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Editing Text Files ===&lt;br /&gt;
If you're new to Linux, the editor you will probably want to use is 'nano'. It works much the same as 'Notepad' in Windows or 'TextEdit' on OS X. Note that you cannot use your mouse to change position within the document as you can with your local computer. You must use the arrow keys instead.&lt;br /&gt;
&lt;br /&gt;
So, if I wanted to edit my .bashrc (as shown below), and I was already in my home directory (see above), I would type&lt;br /&gt;
 nano .bashrc&lt;br /&gt;
&lt;br /&gt;
While in nano, there is a list of actions you can take at the bottom of the screen. &amp;lt;Ctrl&amp;gt; is represented by a caret (`^`), so to exit (labeled as `^X` at the bottom of the screen), I would type &amp;lt;ctrl&amp;gt;-x. This action prompts you whether you want to save and exit (Y), lose changes and exit (N), or cancel and go back to editing (&amp;lt;ctrl&amp;gt;-c).&lt;br /&gt;
&lt;br /&gt;
If you do a significant amount of text editing in Linux, you'll probably want to switch to a more powerful editor, such as vim. The usage of vim is beyond the scope of this document. It is not at all intuitive to the beginning user, but with a little practice it becomes a much faster way of editing text files. If you're interested in using vim, [http://www.openvim.com/tutorial.html there is a nice tutorial here].&lt;br /&gt;
&lt;br /&gt;
=== Shells ===&lt;br /&gt;
==== What is a Shell? ====&lt;br /&gt;
In this case, I don't believe I can do a better job explaining shells than [[wikipedia:Shell_(computing)|this]].&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
Elsewhere at Kansas State University, the default Shell is set to tcsh. tcsh stands for &amp;quot;TENEX C SHell.&amp;quot; It is considered a replacement for csh and uses many of the same features. If you have experience with either csh or tcsh you'll probably feel right at home. This was the default shell until July of 2013. If you had an account before then, it is probably still tcsh.&lt;br /&gt;
&lt;br /&gt;
But what if you don't want or like tcsh? We have other shells available on Beocat as well.&lt;br /&gt;
==== bash ====&lt;br /&gt;
[http://www.gnu.org/software/bash/ Bash] seems to be the de facto standard shell in most Linux installs today. Bash is common and probably what most of you are used to. As of July 2013, bash is our new default shell. All new users will be set to bash initially. [https://software-carpentry.org/ Software Carpentry] teaches classes on several subjects specifically targeting researchers, including the bash shell. Their documentation is all freely available. [http://swcarpentry.github.io/shell-novice/ Here is a link to their excellent tutorial on using BASH.] Most of our documentation assumes you are using BASH.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;bash configuration files:&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This section gets into some minutiae with the way our job scheduler interacts with bash. If you're trying to solve a problem, read on, otherwise you can probably skip this section.&lt;br /&gt;
&lt;br /&gt;
Bash has three user-configurable configuration files: &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;~/.bash_logout&amp;lt;/code&amp;gt;. We'll look at the two more relevant ones: &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Bash classifies shells in three ways: '''login''', '''interactive''', or '''none''' (a shell can be both login and interactive).&lt;br /&gt;
&lt;br /&gt;
Normally, shells that are '''login''' read &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, and shells that are '''interactive''' (but not login) read &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. '''none''' shells read neither.&lt;br /&gt;
&lt;br /&gt;
sbatch jobs are '''login''', srun jobs are '''login+interactive''', and logging into Beocat in a way that you can enter commands is '''login+interactive'''. There are very few cases where you will get '''none'''. For any session that isn't '''interactive''', your sourced files must not output anything to the screen, or they can break scp and sftp file transfers.&lt;br /&gt;
&lt;br /&gt;
If they are ''quiet'' statements, and you want them in all shells, you can put them in your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. If they are not ''quiet'' or they output ''anything'' to the screen, you must put them in your &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
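One common arrangement (a sketch, not a Beocat-specific requirement) is to have ~/.bash_profile source ~/.bashrc, keep the ''quiet'' statements in ~/.bashrc, and guard anything that prints behind an interactive check:

```shell
# Sketch of a ~/.bash_profile following the advice above.
if [ -f ~/.bashrc ]; then
    . ~/.bashrc              # quiet setup: PATH changes, variables, aliases
fi
case $- in
    *i*)                     # only interactive shells reach this branch,
        echo "Welcome back"  # so printing here cannot break scp/sftp
        ;;
esac
```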
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
[http://zsh.sourceforge.net/ zsh] is an alternative to bash and tcsh. It tends to support more complex features than either of the other two while using a syntax remarkably similar to bash. Unless specifically noted, when we specify '''Change your shell to bash''', &amp;lt;tt&amp;gt;zsh&amp;lt;/tt&amp;gt; should work as well.&lt;br /&gt;
&lt;br /&gt;
==== Changing Shells ====&lt;br /&gt;
Previously, we gave you the option of using a &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; to modify your shell. This is no longer supported. If you have issues with your shell/paths/environment variables, we will ask you to delete your &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; file and change your shell via the method below.&lt;br /&gt;
&lt;br /&gt;
You can change your shell via &amp;lt;code&amp;gt;chsh&amp;lt;/code&amp;gt; on either of the headnodes (eos/selene). This does not need to be re-done if you've already changed it to your preferred shell in the past.&lt;br /&gt;
&lt;br /&gt;
Use whichever of the following three lines is appropriate:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
/usr/local/bin/chsh -s bash &amp;amp;&amp;amp; bash -l&lt;br /&gt;
/usr/local/bin/chsh -s tcsh &amp;amp;&amp;amp; tcsh -l&lt;br /&gt;
/usr/local/bin/chsh -s zsh &amp;amp;&amp;amp; zsh -l&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Changing your PATH ===&lt;br /&gt;
Typically, you don't have to change your PATH, but it is useful to know what your PATH is and what it does. The PATH is the list of directories which are searched when you type the name of a program. Note that by default the current directory is NOT included in the path, so if you wanted to run a program called MyProgram in the current directory, you could NOT simply type 'MyProgram'; you would instead type &amp;lt;code&amp;gt;'./MyProgram'&amp;lt;/code&amp;gt; (where the '.' represents the current directory).&lt;br /&gt;
&lt;br /&gt;
To find your PATH, we need to identify which shell you are using. If you do not know, run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
ps | awk '/sh/ {print $4}'&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
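If the output of that one-liner is unclear, here are a couple of alternative sketches using standard tools:

```shell
# Name of the shell you are currently typing into ($$ is its process ID):
ps -p $$ -o comm=
# Login shell recorded for your account (last field of the passwd entry):
getent passwd "$(id -un)" | cut -d: -f7
```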
&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
You'll need to edit a file in your home directory called .tcshrc, replacing /usr/local/bin with the directory that you want added to your PATH using a text editor as shown above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
setenv PATH /usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== bash ====&lt;br /&gt;
You'll need to edit a file in your home directory called .bashrc, replacing /usr/local/bin with the directory that you want added to your PATH using a text editor as shown above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=/usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
You'll need to edit a file in your home directory called .zshrc, replacing /usr/local/bin with the directory that you want added to your PATH using a text editor as shown above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=&amp;quot;/usr/local/bin:$PATH&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Ownership and Permissions ===&lt;br /&gt;
Every file and directory has a user and group associated with it. You can view ownership information by using the '-l' switch on ls. By default on Beocat, files you create have a user ownership of your username (i.e., your eID) and a group ownership of your username_users. So, if I were logged in as 'myusername' and I had a single file in my home directory called MyProgram, the result of typing 'ls -l' would be something like this:&lt;br /&gt;
 total 0&lt;br /&gt;
 -rwxr-x--- 1 myusername myusername_users 79 May 31  2011 MyProgram&lt;br /&gt;
This tells us several things.&lt;br /&gt;
* The first column ('-rwxr-x---') is permissions (covered below)&lt;br /&gt;
* The second column ('1') is the number of links to this file. You can safely ignore this (unless you're both masochistic and interested in filesystem details)&lt;br /&gt;
* The third column ('myusername') shows the user ownership&lt;br /&gt;
* The fourth column ('myusername_users') shows the group ownership&lt;br /&gt;
* The fifth column ('79') gives the size of the file in bytes&lt;br /&gt;
* The next columns ('May 31  2011'), as you have probably guessed, gives the date the file was last changed&lt;br /&gt;
* The final column ('MyProgram') is the name of the file&lt;br /&gt;
&lt;br /&gt;
So why is this interesting to us? Because whenever things ''don't'' work, it's usually because of file ownership or permissions. Looking at these often gives us some useful diagnostic information.&lt;br /&gt;
&lt;br /&gt;
The permissions field shows us who has permissions to do what with this file. It is always 10 characters. The first character (-) is usually either a '-' for a regular file or a 'd' for a directory. The next 9 characters are broken into three groups of three, with each group showing read (r), write (w), and execute (x) permissions for the owner, group, and world, in that order.&lt;br /&gt;
* The first group (rwx) shows permissions for the owner (myusername). The owner here has read, write, and execute permissions&lt;br /&gt;
* The next group (r-x) shows permissions for the group (myusername_users). The group here has read and execute permissions, but cannot write.&lt;br /&gt;
* The last group (---) shows permissions for the rest of the world. The world has no permissions to read, write, or execute.&lt;br /&gt;
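A quick way to see the three groups in action is to set known permissions on a scratch file and read them back (a sketch using GNU stat):

```shell
cd "$(mktemp -d)"           # scratch directory for the example
touch demo
chmod 640 demo              # rw- for owner, r-- for group, --- for world
ls -l demo                  # first column shows -rw-r-----
stat -c '%A %U %G' demo     # GNU stat: permission string, owner, group
```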
&lt;br /&gt;
When you create a shell script with a text editor, and sometimes when you copy programs to Beocat via SCP, the execute flag is not set. The permissions string may look more like (-rw-r--r--). To change this, you need to give yourself permission to execute this program. This is done with the 'chmod' (change mode) command. 'chmod' can have a long and confusing syntax, but since by far the most common problem is to give yourself execute permissions, here is the command to change that:&lt;br /&gt;
 chmod u+x MyProgram&lt;br /&gt;
This changes the permissions so that the user ('u', i.e., the owner) adds ('+') execute permission ('x').&lt;br /&gt;
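Putting it together, with a throwaway script in a scratch directory:

```shell
cd "$(mktemp -d)"
printf '#!/bin/sh\necho hello\n' > MyProgram
chmod u+x MyProgram   # user (owner, 'u') adds ('+') execute permission ('x')
./MyProgram           # prints: hello
```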
&lt;br /&gt;
For more complex ownership or permissions changes, please feel free to contact the Beocat staff.&lt;br /&gt;
&lt;br /&gt;
=== Access Control Lists ===&lt;br /&gt;
Access Control Lists build on our knowledge and use of basic Linux permissions, so we'll cover those again:&lt;br /&gt;
&lt;br /&gt;
Linux permissions are typically broken down to ('''r''')ead, ('''w''')rite, and e('''x''')ecute split across 3 classes of accessors.&lt;br /&gt;
&lt;br /&gt;
'''Files'''&lt;br /&gt;
; read&lt;br /&gt;
: Read the contents of the file; pretty straightforward&lt;br /&gt;
; write&lt;br /&gt;
: Write to the file, including overwrite, truncation, etc.&lt;br /&gt;
; execute&lt;br /&gt;
: Execute the file, this permission allows you to run the file.&lt;br /&gt;
&lt;br /&gt;
'''Folders'''&lt;br /&gt;
; read&lt;br /&gt;
: List the directory, (ls)&lt;br /&gt;
; write&lt;br /&gt;
: Create new files and folders in the directory.&lt;br /&gt;
; execute&lt;br /&gt;
: Pass through the directory (cd into and through).&lt;br /&gt;
&lt;br /&gt;
Those accessors are ('''u''')ser, ('''g''')roup, and ('''o''')ther.&lt;br /&gt;
; user&lt;br /&gt;
: The user would typically be the user account that created the file or folder&lt;br /&gt;
; group&lt;br /&gt;
: The group is that account's primary group by default, but the user can change it to any group that they are a member of&lt;br /&gt;
; other&lt;br /&gt;
: Other is special: it matches anything that doesn't meet either of the other two criteria. We typically refer to these as world permissions, as they match ''everyone'' else.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, it is that &amp;quot;Other&amp;quot; permission that is a frequent problem. You may want to share some data with a colleague, but, from a security standpoint, you also may need to make sure that only that colleague has access to the data. If you aren't in the same group as the colleague, then, under standard Linux permissions, you have no other option except making the file &amp;quot;world&amp;quot; accessible.&lt;br /&gt;
&lt;br /&gt;
This is where &amp;lt;abbr title=&amp;quot;Access Control Lists&amp;quot;&amp;gt;ACLs&amp;lt;/abbr&amp;gt; come into play. ACLs are like the standard Linux permissions, except that you can apply many of them, granting individual users and groups access alongside the standard owner/group/other permissions.&lt;br /&gt;
&lt;br /&gt;
ACLs can also do things that standard Linux permissions can't, like setting up &amp;quot;default&amp;quot; permissions for newly created files/folders within a directory.&lt;br /&gt;
&lt;br /&gt;
One big thing to be aware of for any permissions scheme is that permissions are checked at every level in a directory hierarchy.&lt;br /&gt;
&lt;br /&gt;
# /&lt;br /&gt;
# /homes&lt;br /&gt;
# /homes/$USER&lt;br /&gt;
# /homes/$USER/$SHARE&lt;br /&gt;
&lt;br /&gt;
If at any point the accessing user is denied permission, the traversal and access attempt will stop.&lt;br /&gt;
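You can check each level by hand with 'ls -ld', or let util-linux's 'namei' walk the path for you (the paths here are illustrative):

```shell
# Every component of the path must grant you execute ('x') to pass through.
ls -ld / "$HOME"
# namei (from util-linux, if installed) walks the path one component at a
# time and shows each one's permissions and ownership:
if command -v namei >/dev/null; then
    namei -l "$HOME"
fi
```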
&lt;br /&gt;
==== Example 1 ====&lt;br /&gt;
Let's say I have a file in a directory that I want the user billy to be able to read. This file is &amp;lt;tt&amp;gt;/homes/mozes/example/input.file&amp;lt;/tt&amp;gt;. We'll look at the current permissions of the directory tree like so:&lt;br /&gt;
&lt;br /&gt;
We'll assume everyone has requisite permissions for &amp;lt;tt&amp;gt;/&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;/homes&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
First we'll check my home directory&lt;br /&gt;
 $ getfacl -e /homes/mozes&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x                      #effective:r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 mask::r-x&lt;br /&gt;
 other::---&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
 default:other::---&lt;br /&gt;
&lt;br /&gt;
If we make it past that permissions check, we'd go one level deeper.&lt;br /&gt;
 $ getfacl -e /homes/mozes/example&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 other::r-x&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
&lt;br /&gt;
Finally we'd check if we had permission to access the file itself:&lt;br /&gt;
 $ getfacl -e /homes/mozes/example/input.file&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rw-&lt;br /&gt;
 group::r--&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 other::r--&lt;br /&gt;
&lt;br /&gt;
There is quite a lot of information in the above output, so let's look at it piece by piece.&lt;br /&gt;
&lt;br /&gt;
First, in each section, we see the POSIX owner and group as comments prefixed by '#' characters. These are what the respective user:: and group:: lines refer to when viewing the permissions.&lt;br /&gt;
&lt;br /&gt;
Second, we have lines showing the permissions each accessor would be granted. Note one catch here: the most specific matching entry wins. This can come into play when you grant a group access and then grant a specific member of that group a differing level of access.&lt;br /&gt;
&lt;br /&gt;
Third, many entries have lines prefixed with default: and then a permission set. Default permissions are interesting. They can only be set on directories, and they define the starting set of ACLs applied when new files or folders are created within that directory.&lt;br /&gt;
&lt;br /&gt;
Finally, there is a mask. I'm not covering it here because there are very few cases where you would need to use it.&lt;br /&gt;
&lt;br /&gt;
Back to the task at hand: we want billy to be able to read &amp;lt;tt&amp;gt;/homes/mozes/example/input.file&amp;lt;/tt&amp;gt;. Checking &amp;lt;tt&amp;gt;/homes/mozes&amp;lt;/tt&amp;gt;, we see that 'other' has no permissions, and billy has not been granted any special access.&lt;br /&gt;
&lt;br /&gt;
So we grant billy access &amp;quot;through&amp;quot; &amp;lt;tt&amp;gt;/homes/mozes&amp;lt;/tt&amp;gt;; the smallest set of permissions that accomplishes this is:&lt;br /&gt;
 $ setfacl -m u:billy:x /homes/mozes&lt;br /&gt;
&lt;br /&gt;
Note that since I didn't give billy read access to my home directory, they won't be able to &amp;lt;tt&amp;gt;ls /homes/mozes&amp;lt;/tt&amp;gt;; they can only cd into it and traverse through it.&lt;br /&gt;
&lt;br /&gt;
Then we check the rest of the permissions, &amp;lt;tt&amp;gt;/homes/mozes/example&amp;lt;/tt&amp;gt; has an 'other' permission granting (r)ead and e(x)ecute, so that shouldn't be an issue. &amp;lt;tt&amp;gt;/homes/mozes/example/input.file&amp;lt;/tt&amp;gt; allows 'other' to read it, so our job is done. Billy has access to read my file.&lt;br /&gt;
&lt;br /&gt;
If we decide later that billy needs to write to my file, we can grant them specific read/write permissions to just that file with:&lt;br /&gt;
 $ setfacl -m u:billy:rw /homes/mozes/example/input.file&lt;br /&gt;
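The commands above need a second account (billy) to test against. The same mechanics can be tried on yourself in a scratch directory (a sketch assuming the acl utilities are installed and the filesystem supports ACLs; we grant an entry to our own user instead of billy so no second account is needed):

```shell
# Grant a named-user ACL entry (like u:billy:x above) and confirm
# that getfacl reports it.
dir=$(mktemp -d)
me=$(id -un)
setfacl -m "u:$me:rwx" "$dir"        # named-user entry for ourselves
getfacl -p "$dir" | grep "^user:$me:rwx"
```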
&lt;br /&gt;
==== Example 2 ====&lt;br /&gt;
That's all well and good, but let's say we want all of my grad students to have read/write access to my example directory.&lt;br /&gt;
&lt;br /&gt;
Looking at the ACLs that have been set so far:&lt;br /&gt;
 $ getfacl -e /homes/mozes&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 user:billy:--x&lt;br /&gt;
 group::r-x                      #effective:r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 mask::r-x&lt;br /&gt;
 other::---&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
 default:other::---&lt;br /&gt;
&lt;br /&gt;
 $ getfacl -e /homes/mozes/example&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 other::r-x&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
&lt;br /&gt;
We now want to grant my group of grad students the correct permissions to &amp;lt;tt&amp;gt;/homes/mozes/example&amp;lt;/tt&amp;gt;&lt;br /&gt;
 $ setfacl -R -m g:my_grad_students:rw -m d:g:my_grad_students:rw -m d:u:mozes:rw /homes/mozes/example&lt;br /&gt;
&lt;br /&gt;
There are a few things to note there:&lt;br /&gt;
* We're setting multiple ACLs at once (note the multiple -m arguments)&lt;br /&gt;
* We're setting those permissions recursively (on all files/folders nested anywhere in that directory hierarchy). The &amp;lt;tt&amp;gt;-R&amp;lt;/tt&amp;gt; option does this.&lt;br /&gt;
* We're setting some default permissions. Default permissions are prefixed with d:. Here we're saying that the (g)roup my_grad_students should be granted read/write permissions, and we also set a default permission for ourselves. d:u:mozes:rw grants me read/write access to those files as if I were the owner; this is useful in the event that I'm not a member of the my_grad_students group, since it makes sure I still retain a reasonable baseline of access.&lt;br /&gt;
&lt;br /&gt;
That all looks good, right? Except my grad students are complaining that they can't access &amp;lt;tt&amp;gt;/homes/mozes/example&amp;lt;/tt&amp;gt;. What did we forget?&lt;br /&gt;
&lt;br /&gt;
Permissions are checked at every level of the directory hierarchy, and we forgot to grant my_grad_students access through my home directory.&lt;br /&gt;
 $ setfacl -m g:my_grad_students:x /homes/mozes&lt;br /&gt;
&lt;br /&gt;
=== Manual (man) pages ===&lt;br /&gt;
Most commands have a complex set of switches that will modify the amount or type of information they display. To find out what switches are available, or how a program expects data, you can use the manual pages by typing &amp;quot;`man` ''command''&amp;quot;. Using one of the most common Linux commands, take a look at the output of 'man ls'. It shows that it has over 50 switches available, ranging from which files to include, to how to display file sizes, to sort order and more. (I'm not pasting it here, because it's over 200 lines long!) To navigate a 'manpage', use the up-arrow and down-arrow keys. Press 'q' to quit.&lt;br /&gt;
&lt;br /&gt;
=== Pipes and Redirects ===&lt;br /&gt;
Typically a Linux program takes data from the keyboard and outputs data to the screen. In Unix and Linux terminology, the keyboard is the default 'stdin' (pronounced &amp;quot;standard in&amp;quot;) and the screen is the default 'stdout' (pronounced &amp;quot;standard out&amp;quot;). Many times, we want to take data from somewhere else (like a file, or the output of another program) and send it to yet another location. These redirectors are:&lt;br /&gt;
{|&lt;br /&gt;
|cmd &amp;gt; filename&lt;br /&gt;
|Redirect output from cmd to filename&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;gt;&amp;gt; filename&lt;br /&gt;
|Redirect output from cmd and append to filename&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;lt; filename&lt;br /&gt;
|Redirect input to cmd from filename&lt;br /&gt;
|-&lt;br /&gt;
| cmd1 &amp;amp;#124; cmd2&lt;br /&gt;
| Use the output from cmd1 as the input to cmd2&lt;br /&gt;
|}&lt;br /&gt;
Here is a quick example. Let's say I have thousands of files in a directory, and I want a list of those that end in '.sh'&lt;br /&gt;
'ls' by itself scrolls so far I can't see even a fraction of them. So, I redirect the output to a file&lt;br /&gt;
 ls &amp;gt; ~/filelist.txt&lt;br /&gt;
That takes the names of all the files in the current folder and saves them in my home directory in 'filelist.txt'.&lt;br /&gt;
A quick look through the file in my favorite editor tells me this is still going to take too long, so I need another step. The 'grep' program is a commonly-used program to perform pattern matching. The syntax of 'grep' is beyond the scope of this document, but take my word for it that&lt;br /&gt;
 grep '\.sh$'&lt;br /&gt;
will return all lines that end in .sh.&lt;br /&gt;
&lt;br /&gt;
We can now redirect grep's input from the file we just created:&lt;br /&gt;
 grep '\.sh$' &amp;lt; ~/filelist.txt&lt;br /&gt;
Great! We now have our list. However, we wanted to save this as filelist.txt, and instead we have another list that we have to copy-and-paste. Instead of redirecting to a file, we'll use the vertical bar '|' (which we often term a &amp;quot;pipe&amp;quot;) to send the output of one command to another.&lt;br /&gt;
 ls | grep '\.sh$' &amp;gt; ~/filelist.txt&lt;br /&gt;
This time the output of 'ls' is ''not'' redirected to a file, but is redirected to the next command (grep).  The output of grep (which is all our .sh files) instead of being sent to the screen is redirected to the file ~/filelist.txt.&lt;br /&gt;
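The whole pipeline can be tried safely in a scratch directory (file names here are made up for illustration):

```shell
# Recreate the example: three files, two ending in .sh, then filter
# the ls output through grep and redirect the result to a file.
tmp=$(mktemp -d)
touch "$tmp/run.sh" "$tmp/clean.sh" "$tmp/notes.txt"
ls "$tmp" | grep '\.sh$' > "$tmp/filelist.txt"
cat "$tmp/filelist.txt"    # only the .sh files remain
```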
&lt;br /&gt;
This example is a very simple demonstration of how pipes and redirects work. Many more examples with complex structures can be found at http://www.ibm.com/developerworks/linux/library/l-lpic1-v3-103-4/index.html&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=950</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=950"/>
		<updated>2023-08-25T23:22:01Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* How are the filesystems on Beocat set up? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo Enabled, you will be asked to approve ''each'' connection through Duo's push system to your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device is not currently able to be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You would need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value can be a currently valid passcode from Duo, or it can be set to &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will push the prompt to your smart device. &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have Duo call your phone number to approve.&lt;br /&gt;
&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In PuTTY to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under Environment variables, enter '''&amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;''' beside ''Variable'' and '''&amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;''' beside ''Value''. Click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; to save this change for PuTTY to remember this change.&lt;br /&gt;
&lt;br /&gt;
There doesn't seem to be a way to send an environment variable in MobaXTerm, so you won't be able to set DUO_PASSCODE to an actual valid temporary key. To get MobaXterm to push automatically, you can edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Common issues ====&lt;br /&gt;
; Duo Pushes sometimes don't show up in a timely manner. &lt;br /&gt;
: If you open the Duo MFA application on your smart device when you're expecting an authentication challenge, the prompts seem to show up faster.&lt;br /&gt;
; MobaXTerm has excessive prompts for managing files.&lt;br /&gt;
: MobaXTerm has a sidebar browser for managing your files. Unfortunately, that sidebar browser initiates another SSH connection for every file transfer, which triggers a Duo push that you need to approve. MobaXTerm's dedicated SFTP Session doesn't have this same issue: it initiates a connection, keeps it open, and re-uses it as needed, so you will have far fewer Duo approvals to respond to. If you choose to use the dedicated SFTP Session, you might consider disabling the sidebar file browser. &amp;quot;Advanced SSH settings&amp;quot; -&amp;gt; &amp;quot;SSH-browser type&amp;quot; -&amp;gt; &amp;quot;None&amp;quot;&lt;br /&gt;
; WinSCP has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, WinSCP will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable [https://winscp.net/eng/docs/ui_pref_resume reconnections during idle periods] if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, FileZilla will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable timeouts and/or connection retries under the &amp;lt;tt&amp;gt;Edit -&amp;gt; Settings -&amp;gt; Connection&amp;lt;/tt&amp;gt; menu if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has excessive prompts for managing files.&lt;br /&gt;
: Filezilla opens one connection for browsing the system. Transferring files opens 1-4 additional connections when the transfers start. Once they finish, those connections disconnect. If you start additional transfers, new connections will be opened. Every one of those connections must be approved through Duo MFA on your smart device. You can adjust the number of connections that FileZilla opens for transfers if you like. &amp;lt;tt&amp;gt;File -&amp;gt; Site Manager -&amp;gt; (choose the site you're changing) -&amp;gt; Transfer Settings -&amp;gt; Limit number of simultaneous connections&amp;lt;/tt&amp;gt;.&lt;br /&gt;
: Another option is to disable processing the transfer queue, add the things to it you want to transfer and then re-enable the transfer queue. Then at least it will re-use the connections until the queue is empty.&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Do Beocat jobs have a maximum Time Limit ==&lt;br /&gt;
Yes, there is a time limit: the scheduler will reject jobs longer than 28 days. The other side of that is that we reserve the right to a maintenance period every 14 days. Unless it is an emergency, we will give at least 2 weeks' notice before these maintenance periods actually occur. Jobs of 14 days or less that have started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
With that being said, there is no guarantee that any physical piece of hardware and the software that runs on it will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we always recommend that you write your jobs so that they can be resumed if they get interrupted.&lt;br /&gt;
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || nfs on top of ZFS || Faster than /homes or /bulk, built with all NVME disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || XFS || Good for I/O intensive jobs. Unique per job, culled when the job finishes.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry, your default working directory&lt;br /&gt;
is your homedir and it will be fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
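The same pattern with some defensive additions (generic bash; outside a real job we simulate $TMPDIR and the program, so the directories and the tr command are stand-ins):

```shell
#!/bin/bash
set -euo pipefail                  # abort if any copy or the program fails

# In a real job, Slurm provides $TMPDIR; here we simulate both the
# scratch area and the home-directory side for illustration.
scratch=$(mktemp -d)
workdir=$(mktemp -d)               # stands in for ~/experiments

printf 'sample input\n' > "$workdir/input.data"

# Copy the input to fast local storage before processing.
cp "$workdir/input.data" "$scratch/"

# Stand-in for ~/bin/my_program --input-file=... --output-file=...
tr 'a-z' 'A-Z' < "$scratch/input.data" > "$scratch/output.data"

# Copy the results back before the scratch area is culled.
cp "$scratch/output.data" "$workdir/results.demo"
```

With `set -euo pipefail`, a failed copy stops the job immediately instead of letting the results silently vanish when the scratch directory is deleted.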
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines are sitting idle because the owners have no need for it at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled onto these &amp;quot;owned&amp;quot; machines by users outside the owning group. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of said machine comes along and submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it makes use of a checkpointing algorithm it may complete even faster. The trade-off in marking a job &amp;quot;killable&amp;quot; is that some applications need a significant amount of runtime and cannot resume from partial output, meaning the job may get restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there are a non-trivial amount of additional nodes that the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking the job killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state to restart a previous session) itself, it could cause your job to take longer to complete.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; when that happens the job fails and we have to clean it up manually, so we now enforce this rule. The warning message is there to inform you that the job script should have a line, in most cases #!/bin/tcsh or #!/bin/bash, indicating which program should be used to run the script. When the line is missing from a script, your default shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it doesn't point to an executable file, so the script will not be able to run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SlurmBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
&lt;br /&gt;
== Help my error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to manually intervene. This process prevents other users from using the node(s) you are currently using, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some but not all nodes the job is running on have infiniband cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message heavy your job is. If you would like to not see this warning, you may request infiniband as a resource when submitting your job. &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Help! when I use sbatch I get an error about line breaks ==&lt;br /&gt;
Beocat is a Linux system. Operating systems use certain patterns of characters to indicate line breaks in their files. Linux and operating systems like it use '\n' as their line break character. Windows uses '\r\n' for its line breaks.&lt;br /&gt;
&lt;br /&gt;
If you're getting an error that looks like this:&lt;br /&gt;
 sbatch: error: Batch script contains DOS line breaks (\r\n)&lt;br /&gt;
 sbatch: error: instead of expected UNIX line breaks (\n).&lt;br /&gt;
&lt;br /&gt;
It means that your script is using Windows line endings. You can convert it with the &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; command&lt;br /&gt;
 dos2unix myscript.sh&lt;br /&gt;
&lt;br /&gt;
It would probably be beneficial to configure your editor to save files with UNIX line breaks in the future.&lt;br /&gt;
* Visual Studio Code -- “Text Editor” &amp;gt; “Files” &amp;gt; “Eol”&lt;br /&gt;
* Notepad++ -- &amp;quot;Edit&amp;quot; &amp;gt; &amp;quot;EOL Conversion&amp;quot;&lt;br /&gt;
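If dos2unix isn't available, the same conversion can be done with sed (a generic GNU sed technique, not a Beocat-specific tool):

```shell
# Create a file with Windows (\r\n) line endings, then strip the
# trailing carriage returns in place.
tmp=$(mktemp)
printf '#!/bin/bash\r\necho hello\r\n' > "$tmp"
sed -i 's/\r$//' "$tmp"
head -n1 "$tmp"    # now a clean '#!/bin/bash' line
```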
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty and limited to a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Membership of projects can be managed using our [[Group Management]] application.&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the shared directory. If you also want them to be able to write or modify files in that directory, then change the ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'. As with other permissions, the individuals will need access through every level of the directory hierarchy. [[LinuxBasics#Access_Control_Lists|It may be best to review our more in-depth topic on Access Control Lists.]]&lt;br /&gt;
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple enough to call, for example: if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=949</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=949"/>
		<updated>2023-08-25T23:21:40Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* How are the filesystems on Beocat set up? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo Enabled, you will be asked to approve ''each'' connection through Duo's push system to your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device is not currently able to be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You will need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value can be a currently valid Duo passcode, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will send the prompt to your smart device; &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have Duo call your phone number to approve.&lt;br /&gt;
&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In PuTTY, to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under Environment variables, enter '''&amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;''' beside ''Variable'' and '''&amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;''' beside ''Value''. Click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; and save so that PuTTY remembers this change.&lt;br /&gt;
&lt;br /&gt;
There doesn't seem to be a way to send an environment variable in MobaXTerm, so you won't be able to set DUO_PASSCODE to an actual valid temporary key. To get MobaXterm to push automatically, you can edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Common issues ====&lt;br /&gt;
; Duo Pushes sometimes don't show up in a timely manner. &lt;br /&gt;
: If you open the Duo MFA application on your smart device when you're expecting an authentication challenge, the prompts seem to show up faster.&lt;br /&gt;
; MobaXTerm has excessive prompts for managing files.&lt;br /&gt;
: MobaXTerm has a sidebar browser for managing your files. Unfortunately, that sidebar browser initiates another SSH connection for every file transfer, which triggers a Duo push that you need to approve. MobaXTerm's dedicated SFTP Session doesn't have this issue: it initiates a connection, keeps it open, and re-uses it as needed, so you will have many fewer Duo approvals to respond to. If you choose to use the dedicated SFTP Session, you might consider disabling the sidebar file browser: &amp;quot;Advanced SSH settings&amp;quot; -&amp;gt; &amp;quot;SSH-browser type&amp;quot; -&amp;gt; &amp;quot;None&amp;quot;&lt;br /&gt;
; WinSCP has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, WinSCP will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable [https://winscp.net/eng/docs/ui_pref_resume reconnections during idle periods] if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, FileZilla will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable timeouts and/or connection retries under the &amp;lt;tt&amp;gt;Edit -&amp;gt; Settings -&amp;gt; Connection&amp;lt;/tt&amp;gt; menu if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has excessive prompts for managing files.&lt;br /&gt;
: Filezilla opens one connection for browsing the system. Transferring files opens 1-4 additional connections when the transfers start. Once they finish, those connections disconnect. If you start additional transfers, new connections will be opened. Every one of those connections must be approved through Duo MFA on your smart device. You can adjust the number of connections that FileZilla opens for transfers if you like. &amp;lt;tt&amp;gt;File -&amp;gt; Site Manager -&amp;gt; (choose the site you're changing) -&amp;gt; Transfer Settings -&amp;gt; Limit number of simultaneous connections&amp;lt;/tt&amp;gt;.&lt;br /&gt;
: Another option is to disable processing of the transfer queue, add the items you want to transfer, and then re-enable the queue. At least then FileZilla will re-use its connections until the queue is empty.&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Do Beocat jobs have a maximum Time Limit ==&lt;br /&gt;
Yes, there is a time limit: the scheduler will reject jobs longer than 28 days. The other side of that is that we reserve the right to schedule a maintenance period every 14 days. Unless it is an emergency, we will give at least 2 weeks' notice before these maintenance periods actually occur. Jobs of 14 days or less that have started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
With that being said, there is no guarantee that any physical piece of hardware, or the software that runs on it, will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we recommend writing your jobs so that they can be resumed if they get interrupted.&lt;br /&gt;
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes and /scratch || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk and /scratch || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || nfs on top of ZFS || Faster than /homes or /bulk, built with all NVME disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || XFS || Good for I/O-intensive jobs. Unique per job; culled when the job finishes.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry, your default working directory&lt;br /&gt;
is your homedir and it will be fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled to these &amp;quot;owned&amp;quot; machines by users outside of the true group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of that machine comes along and submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it uses a checkpointing algorithm it may complete even faster. The trade-off of marking a job &amp;quot;killable&amp;quot; is that some applications need a significant amount of runtime and cannot resume from partial output, meaning the job may get restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes that the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking the job killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state to resume a previous session) itself, marking it killable could cause it to take longer to complete.&lt;br /&gt;
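
For illustration, a job script that explicitly opts in to killable might look like the sketch below. Only the --gres=killable flags come from this page; the job name, time limit, and script body are hypothetical:

```shell
#!/bin/bash
# Hypothetical checkpointing job that opts in to killable so it can use owned nodes.
# The #SBATCH lines are comments to the shell but directives to Slurm.
#SBATCH --job-name=ckpt_job
#SBATCH --time=0-12:00:00
#SBATCH --gres=killable:1

msg="killable job running on $(hostname)"
echo "$msg"
```

Submitting with --gres=killable:0 instead would tell the scheduler not to mark the job killable.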
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; when this happens the job fails and we have to clean it up manually, so we enforce this rule. The warning message simply informs you that the job script should begin with a line, in most cases #!/bin/tcsh or #!/bin/bash, indicating which program should be used to run the script. When the line is missing from a script, your default shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says that your job script needs a #!/bin/bash or similar line. In this case the line exists, but the #! line does not point to an executable file, so the script cannot run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SlurmBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
&lt;br /&gt;
== Help! My error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to intervene manually. This prevents other users from using the node(s) you are currently using, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would prefer not to see this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Help! when I use sbatch I get an error about line breaks ==&lt;br /&gt;
Beocat is a Linux system. Operating systems use certain patterns of characters to indicate line breaks in their files. Linux and similar operating systems use '\n' as their line-break character; Windows uses '\r\n'.&lt;br /&gt;
&lt;br /&gt;
If you're getting an error that looks like this:&lt;br /&gt;
 sbatch: error: Batch script contains DOS line breaks (\r\n)&lt;br /&gt;
 sbatch: error: instead of expected UNIX line breaks (\n).&lt;br /&gt;
&lt;br /&gt;
It means that your script is using Windows line endings. You can convert it with the &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
 dos2unix myscript.sh&lt;br /&gt;
&lt;br /&gt;
It would probably be beneficial to configure your editor to save files with UNIX line breaks in the future.&lt;br /&gt;
* Visual Studio Code -- &amp;quot;Text Editor&amp;quot; &amp;gt; &amp;quot;Files&amp;quot; &amp;gt; &amp;quot;Eol&amp;quot;&lt;br /&gt;
* Notepad++ -- &amp;quot;Edit&amp;quot; &amp;gt; &amp;quot;EOL Conversion&amp;quot;&lt;br /&gt;
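
The conversion that dos2unix performs can also be sketched with tr; the file names below are hypothetical:

```shell
# Make a script with Windows (CRLF) line endings, as a Windows editor might
printf '#!/bin/bash\r\necho hello\r\n' > myscript_dos.sh
# Strip the carriage returns, leaving UNIX (LF) line endings
cat myscript_dos.sh | tr -d '\r' > myscript.sh
```

After the conversion, sbatch will accept myscript.sh without the DOS line-break error.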
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed using our [[Group Management]] application.&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the shared directory. If you also want them to be able to write or modify files in that directory, change the ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'. As with other permissions, the individuals will need access through every level of the directory hierarchy. [[LinuxBasics#Access_Control_Lists|It may be best to review our more in-depth topic on Access Control Lists.]]&lt;br /&gt;
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple enough to call, for example: if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=948</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=948"/>
		<updated>2023-08-16T17:23:44Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Common Storage For Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo Enabled, you will be asked to approve ''each'' connection through Duo's push system to your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device is not currently able to be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You will need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value can be a currently valid Duo passcode, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will send the prompt to your smart device; &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have Duo call your phone number to approve.&lt;br /&gt;
&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In PuTTY, to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under Environment variables, enter '''&amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;''' beside ''Variable'' and '''&amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;''' beside ''Value''. Click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; and save so that PuTTY remembers this change.&lt;br /&gt;
&lt;br /&gt;
There doesn't seem to be a way to send an environment variable in MobaXTerm, so you won't be able to set DUO_PASSCODE to an actual valid temporary key. To get MobaXterm to push automatically, you can edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Common issues ====&lt;br /&gt;
; Duo Pushes sometimes don't show up in a timely manner. &lt;br /&gt;
: If you open the Duo MFA application on your smart device when you're expecting an authentication challenge, the prompts seem to show up faster.&lt;br /&gt;
; MobaXTerm has excessive prompts for managing files.&lt;br /&gt;
: MobaXTerm has a sidebar browser for managing your files. Unfortunately, that sidebar browser initiates another SSH connection for every file transfer, which triggers a Duo push that you need to approve. MobaXTerm's dedicated SFTP Session doesn't have this issue: it initiates a connection, keeps it open, and re-uses it as needed, so you will have many fewer Duo approvals to respond to. If you choose to use the dedicated SFTP Session, you might consider disabling the sidebar file browser: &amp;quot;Advanced SSH settings&amp;quot; -&amp;gt; &amp;quot;SSH-browser type&amp;quot; -&amp;gt; &amp;quot;None&amp;quot;&lt;br /&gt;
; WinSCP has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, WinSCP will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable [https://winscp.net/eng/docs/ui_pref_resume reconnections during idle periods] if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, FileZilla will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable timeouts and/or connection retries under the &amp;lt;tt&amp;gt;Edit -&amp;gt; Settings -&amp;gt; Connection&amp;lt;/tt&amp;gt; menu if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has excessive prompts for managing files.&lt;br /&gt;
: Filezilla opens one connection for browsing the system. Transferring files opens 1-4 additional connections when the transfers start. Once they finish, those connections disconnect. If you start additional transfers, new connections will be opened. Every one of those connections must be approved through Duo MFA on your smart device. You can adjust the number of connections that FileZilla opens for transfers if you like. &amp;lt;tt&amp;gt;File -&amp;gt; Site Manager -&amp;gt; (choose the site you're changing) -&amp;gt; Transfer Settings -&amp;gt; Limit number of simultaneous connections&amp;lt;/tt&amp;gt;.&lt;br /&gt;
: Another option is to disable processing of the transfer queue, add the items you want to transfer, and then re-enable the queue. At least then FileZilla will re-use its connections until the queue is empty.&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Do Beocat jobs have a maximum Time Limit ==&lt;br /&gt;
Yes, there is a time limit: the scheduler will reject jobs longer than 28 days. The other side of that is that we reserve the right to schedule a maintenance period every 14 days. Unless it is an emergency, we will give at least 2 weeks' notice before these maintenance periods actually occur. Jobs of 14 days or less that have started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
With that being said, there is no guarantee that any physical piece of hardware, or the software that runs on it, will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we recommend writing your jobs so that they can be resumed if they get interrupted.&lt;br /&gt;
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes and /scratch || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk and /scratch || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /scratch || Shared || 3.1PB shared with /bulk and /homes || cephfs || Fast shared tmp space; files not used in 30 days are automatically culled&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || nfs on top of ZFS || Faster than /scratch, built with all NVME disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || XFS || Good for I/O-intensive jobs. Unique per job; culled when the job finishes.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry, your default working directory&lt;br /&gt;
is your homedir and it will be fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of those computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs can be scheduled onto these &amp;quot;owned&amp;quot; machines by users outside the group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of that machine submits a job, the &amp;quot;killable&amp;quot; job is returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it uses a checkpointing algorithm it may even complete faster. The trade-off is that some applications need a significant amount of runtime and cannot resume from partial output, meaning a job may get restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
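For example (hypothetical submissions; &amp;lt;tt&amp;gt;myscript.sh&amp;lt;/tt&amp;gt; is a placeholder for your own job script, and the opt-in line assumes longer jobs may be marked killable explicitly):&lt;br /&gt;

```shell
# Opt a short job out of the killable pool (it would otherwise be
# auto-marked killable because it is under the 168-hour threshold):
sbatch --time=24:00:00 --gres=killable:0 myscript.sh

# Explicitly opt a checkpoint-friendly job in to the killable node pool:
sbatch --time=14-00:00:00 --gres=killable:1 myscript.sh
```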
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking it killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state so it can resume a previous session), marking it killable could cause it to take longer to complete.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts should start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; those jobs fail and we have to clean them up manually, so we enforce this rule. The warning message informs you that the job script should include such a line, in most cases #!/bin/tcsh or #!/bin/bash, to indicate which program should be used to run the script. When the line is missing from a script, your default shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it does not point to an executable file, so the script will not be able to run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SlurmBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
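As a complete command line (hypothetical; &amp;lt;tt&amp;gt;myscript.sh&amp;lt;/tt&amp;gt; is a placeholder for your own job script):&lt;br /&gt;

```shell
# Request a 10-hour runtime for this job (days-hours:minutes:seconds)
sbatch --time=0-10:00:00 myscript.sh
```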
&lt;br /&gt;
== Help my error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to manually intervene. This process prevents other users from using the node(s) you are currently on, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would like to not see this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Help! when I use sbatch I get an error about line breaks ==&lt;br /&gt;
Beocat is a Linux system. Operating systems use certain patterns of characters to indicate line breaks in their files. Linux and operating systems like it use '\n' as their line break character. Windows uses '\r\n' for its line breaks.&lt;br /&gt;
&lt;br /&gt;
If you're getting an error that looks like this:&lt;br /&gt;
 sbatch: error: Batch script contains DOS line breaks (\r\n)&lt;br /&gt;
 sbatch: error: instead of expected UNIX line breaks (\n).&lt;br /&gt;
&lt;br /&gt;
It means that your script is using Windows line endings. You can convert it with the &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
 dos2unix myscript.sh&lt;br /&gt;
&lt;br /&gt;
It would be beneficial to configure your editor to save files with UNIX line breaks in the future:&lt;br /&gt;
* Visual Studio Code -- “Text Editor” &amp;gt; “Files” &amp;gt; “Eol”&lt;br /&gt;
* Notepad++ -- &amp;quot;Edit&amp;quot; &amp;gt; &amp;quot;EOL Conversion&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed using our [[Group Management]] application.&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the shared directory. If you also want them to be able to write or modify files in that directory, then change ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'. As with other permissions, the individuals will need access through every level of the directory hierarchy. [[LinuxBasics#Access_Control_Lists|It may be best to review our more in-depth topic on Access Control Lists.]]&lt;br /&gt;
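Putting these steps together, a hypothetical session might look like this (the group name &amp;lt;tt&amp;gt;labgroup&amp;lt;/tt&amp;gt; and directory &amp;lt;tt&amp;gt;~/share&amp;lt;/tt&amp;gt; are placeholders, not real names):&lt;br /&gt;

```shell
# Placeholders -- substitute your own project group and directory
group_name=labgroup
directory=~/share

# Create the shared directory in the project owner's home directory
mkdir -p "$directory"

# Default ACL: files and directories created here later are group-readable
setfacl -d -m g:"$group_name":rX -R "$directory"

# Apply the same ACL to anything already present
setfacl -m g:"$group_name":rX -R "$directory"

# Inspect the resulting ACLs
getfacl "$directory"
```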
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to use; for example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, the applications you are trying to run, and the directory you are working in.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=947</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=947"/>
		<updated>2023-08-16T17:23:26Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Common Storage For Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo Enabled, you will be asked to approve ''each'' connection through Duo's push system to your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device is not currently able to be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You would need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value can be a currently valid passcode from Duo, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will push the prompt to your smart device; &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have Duo call your phone number for approval.&lt;br /&gt;
&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In PuTTY to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under Environment variables, enter '''&amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;''' beside ''Variable'' and '''&amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;''' beside ''Value''. Click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; to save this change for PuTTY to remember this change.&lt;br /&gt;
&lt;br /&gt;
There doesn't seem to be a way to send an environment variable in MobaXTerm, so you won't be able to set DUO_PASSCODE to an actual valid temporary key. To get MobaXterm to push automatically, you can edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Common issues ====&lt;br /&gt;
; Duo Pushes sometimes don't show up in a timely manner. &lt;br /&gt;
: If you open the Duo MFA application on your smart device when you're expecting an authentication challenge, the prompts seem to show up faster.&lt;br /&gt;
; MobaXTerm has excessive prompts for managing files.&lt;br /&gt;
: MobaXTerm has a sidebar browser for managing your files. Unfortunately, that sidebar browser initiates another SSH connection for every file transfer, which triggers a Duo push that you need to approve. MobaXTerm's dedicated SFTP Session doesn't have this issue: it initiates a connection, keeps it open, and re-uses it as needed, so you will have far fewer Duo approvals to respond to. If you choose to use the dedicated SFTP Session, you might consider disabling the sidebar file browser: &amp;quot;Advanced SSH settings&amp;quot; -&amp;gt; &amp;quot;SSH-browser type&amp;quot; -&amp;gt; &amp;quot;None&amp;quot;&lt;br /&gt;
; WinSCP has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, WinSCP will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable [https://winscp.net/eng/docs/ui_pref_resume reconnections during idle periods] if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, FileZilla will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable timeouts and/or connection retries under the &amp;lt;tt&amp;gt;Edit -&amp;gt; Settings -&amp;gt; Connection&amp;lt;/tt&amp;gt; menu if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has excessive prompts for managing files.&lt;br /&gt;
: Filezilla opens one connection for browsing the system. Transferring files opens 1-4 additional connections when the transfers start. Once they finish, those connections disconnect. If you start additional transfers, new connections will be opened. Every one of those connections must be approved through Duo MFA on your smart device. You can adjust the number of connections that FileZilla opens for transfers if you like. &amp;lt;tt&amp;gt;File -&amp;gt; Site Manager -&amp;gt; (choose the site you're changing) -&amp;gt; Transfer Settings -&amp;gt; Limit number of simultaneous connections&amp;lt;/tt&amp;gt;.&lt;br /&gt;
: Another option is to disable processing the transfer queue, add the things to it you want to transfer and then re-enable the transfer queue. Then at least it will re-use the connections until the queue is empty.&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Do Beocat jobs have a maximum Time Limit ==&lt;br /&gt;
Yes. The scheduler will reject jobs longer than 28 days. The other side of that is that we reserve the right to a maintenance period every 14 days. Unless it is an emergency, we will give at least 2 weeks notice before these maintenance periods actually occur. Jobs of 14 days or less that have started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
With that being said, there is no guarantee that any physical piece of hardware and the software that runs on it will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we always recommend that you write your jobs so that they can be resumed if they get interrupted.&lt;br /&gt;
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes and /scratch || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk and /scratch || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /scratch || Shared || 3.1PB shared with /bulk and /homes || cephfs || Fast shared tmp space; files not used in 30 days are automatically culled&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || nfs on top of ZFS || Faster than /scratch, built entirely with NVMe disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || XFS || Good for I/O intensive jobs. Unique per job, culled when the job finishes.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory&lt;br /&gt;
is your home directory, and it will be fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of those computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs can be scheduled onto these &amp;quot;owned&amp;quot; machines by users outside the group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of that machine submits a job, the &amp;quot;killable&amp;quot; job is returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it uses a checkpointing algorithm it may even complete faster. The trade-off is that some applications need a significant amount of runtime and cannot resume from partial output, meaning a job may get restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
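For example (hypothetical submissions; &amp;lt;tt&amp;gt;myscript.sh&amp;lt;/tt&amp;gt; is a placeholder for your own job script, and the opt-in line assumes longer jobs may be marked killable explicitly):&lt;br /&gt;

```shell
# Opt a short job out of the killable pool (it would otherwise be
# auto-marked killable because it is under the 168-hour threshold):
sbatch --time=24:00:00 --gres=killable:0 myscript.sh

# Explicitly opt a checkpoint-friendly job in to the killable node pool:
sbatch --time=14-00:00:00 --gres=killable:1 myscript.sh
```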
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking it killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state so it can resume a previous session), marking it killable could cause it to take longer to complete.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts should start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; those jobs fail and we have to clean them up manually, so we enforce this rule. The warning message informs you that the job script should include such a line, in most cases #!/bin/tcsh or #!/bin/bash, to indicate which program should be used to run the script. When the line is missing from a script, your default shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it does not point to an executable file, so the script will not be able to run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SlurmBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
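As a complete command line (hypothetical; &amp;lt;tt&amp;gt;myscript.sh&amp;lt;/tt&amp;gt; is a placeholder for your own job script):&lt;br /&gt;

```shell
# Request a 10-hour runtime for this job (days-hours:minutes:seconds)
sbatch --time=0-10:00:00 myscript.sh
```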
&lt;br /&gt;
== Help my error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to manually intervene. This process prevents other users from using the node(s) you are currently on, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would like to not see this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Help! when I use sbatch I get an error about line breaks ==&lt;br /&gt;
Beocat is a Linux system. Operating systems use certain patterns of characters to indicate line breaks in their files. Linux and operating systems like it use '\n' as their line break character. Windows uses '\r\n' for its line breaks.&lt;br /&gt;
&lt;br /&gt;
If you're getting an error that looks like this:&lt;br /&gt;
 sbatch: error: Batch script contains DOS line breaks (\r\n)&lt;br /&gt;
 sbatch: error: instead of expected UNIX line breaks (\n).&lt;br /&gt;
&lt;br /&gt;
It means that your script is using Windows line endings. You can convert it with the &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
 dos2unix myscript.sh&lt;br /&gt;
&lt;br /&gt;
It would be beneficial to configure your editor to save files with UNIX line breaks in the future:&lt;br /&gt;
* Visual Studio Code -- “Text Editor” &amp;gt; “Files” &amp;gt; “Eol”&lt;br /&gt;
* Notepad++ -- &amp;quot;Edit&amp;quot; &amp;gt; &amp;quot;EOL Conversion&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed using our [[Group Management]] application.&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the shared directory. If you also want them to be able to write or modify files in that directory, then change ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'. As with other permissions, the individuals will need access through every level of the directory hierarchy. [[LinuxBasics#Access_Control_Lists|It may be best to review our more in-depth topic on Access Control Lists.]]&lt;br /&gt;
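Putting these steps together, a hypothetical session might look like this (the group name &amp;lt;tt&amp;gt;labgroup&amp;lt;/tt&amp;gt; and directory &amp;lt;tt&amp;gt;~/share&amp;lt;/tt&amp;gt; are placeholders, not real names):&lt;br /&gt;

```shell
# Placeholders -- substitute your own project group and directory
group_name=labgroup
directory=~/share

# Create the shared directory in the project owner's home directory
mkdir -p "$directory"

# Default ACL: files and directories created here later are group-readable
setfacl -d -m g:"$group_name":rX -R "$directory"

# Apply the same ACL to anything already present
setfacl -m g:"$group_name":rX -R "$directory"

# Inspect the resulting ACLs
getfacl "$directory"
```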
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to use; for example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what are called info pages. For example, if you needed information on finding a file, you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=946</id>
		<title>AdvancedSlurm</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=946"/>
		<updated>2023-08-09T19:56:20Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* File Access */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SlurmBasics]] page, we have a couple other requestable resources:&lt;br /&gt;
 Valid gres options are:&lt;br /&gt;
 gpu[[:type]:count]&lt;br /&gt;
 fabric[[:type]:count]&lt;br /&gt;
Generally, if you don't know if you need a particular resource, you should use the default. These can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;srun --gres=help&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Fabric ===&lt;br /&gt;
We currently offer 3 &amp;quot;fabrics&amp;quot; as request-able resources in Slurm. The &amp;quot;count&amp;quot; specified is the line-rate (in Gigabits-per-second) of the connection on the node.&lt;br /&gt;
==== Infiniband ====&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. InfiniBand does absolutely no good for a job running on a single machine. InfiniBand is a high-speed host-to-host communication fabric, (most often) used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could have run just fine, except that the submitter requested InfiniBand and all the nodes with InfiniBand were busy. In fact, some of our fastest nodes do not have InfiniBand, so by requesting it when you don't need it, you may actually be slowing down your job. To request InfiniBand, add &amp;lt;tt&amp;gt;--gres=fabric:ib:1&amp;lt;/tt&amp;gt; to your sbatch command line.&lt;br /&gt;
==== ROCE ====&lt;br /&gt;
ROCE, like InfiniBand, is a high-speed host-to-host communication layer, again used most often with MPI. Most of our nodes are ROCE-enabled, but requesting this will guarantee that the nodes allocated to your job can communicate with ROCE. To request ROCE, add &amp;lt;tt&amp;gt;--gres=fabric:roce:1&amp;lt;/tt&amp;gt; to your sbatch command line.&lt;br /&gt;
&lt;br /&gt;
==== Ethernet ====&lt;br /&gt;
Ethernet is another communication fabric. All of our nodes are connected by ethernet; this option simply allows you to specify the interconnect speed. Speeds are selected in units of Gbps, with all nodes supporting 1Gbps or above. The currently available speeds for ethernet are &amp;lt;tt&amp;gt;1, 10, 40, and 100&amp;lt;/tt&amp;gt;. To select nodes with 40Gbps and above, you could specify &amp;lt;tt&amp;gt;--gres=fabric:eth:40&amp;lt;/tt&amp;gt; on your sbatch command line. Since ethernet is used to connect to the file server, this can be used to select nodes that have fast access for applications doing heavy IO. The Dwarves and Heroes have 40 Gbps ethernet, and we measure single-stream performance as high as 20 Gbps. If your application&lt;br /&gt;
requires heavy IO, you'd want to avoid the Moles, which are connected to the file server with only 1 Gbps ethernet.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. 'kstat -g' will show you the GPU nodes and the jobs running on them.  To request a GPU, add, for example, &amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt; to request 1 GPU for your job; if your job uses multiple nodes, the number of GPUs requested is per-node.  You can also request a given type of GPU ('kstat -g -l' shows the types): use &amp;lt;tt&amp;gt;--gres=gpu:geforce_gtx_1080_ti:1&amp;lt;/tt&amp;gt; for a 1080Ti GPU on the Wizards or Dwarves, or &amp;lt;tt&amp;gt;--gres=gpu:quadro_gp100:1&amp;lt;/tt&amp;gt; for the P100 GPUs on Wizard20-21 that are best for 64-bit codes like Vasp.  Most of these GPU nodes are owned by various groups.  If you want access to GPU nodes and your group does not own any, we can add you to the &amp;lt;tt&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/tt&amp;gt; group that has priority on Dwarf36-39.  For more information on compiling CUDA code, click on this [[CUDA]] link.&lt;br /&gt;
&lt;br /&gt;
A listing of the current types of gpus can be gathered with this command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
scontrol show nodes | grep CfgTRES | tr ',' '\n' | awk -F '[:=]' '/gres\/gpu:/ { print $2 }' | sort -u&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
At the time of this writing, that command produces this list:&lt;br /&gt;
* geforce_gtx_1080_ti&lt;br /&gt;
* geforce_rtx_2080_ti&lt;br /&gt;
* geforce_rtx_3090&lt;br /&gt;
* quadro_gp100&lt;br /&gt;
* rtx_a4000&lt;br /&gt;
&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel: ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
''Intra''node jobs run on many cores in the same node. These jobs can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or any programming language that has the concept of ''threads''. Often, your program will need to know how many cores you want it to use, and many will use all available cores if not told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the sbatch directives '&amp;lt;tt&amp;gt;--nodes=1 --cpus-per-task=n&amp;lt;/tt&amp;gt;' or '&amp;lt;tt&amp;gt;--nodes=1 --ntasks-per-node=n&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $SLURM_CPUS_ON_NODE to tell how many cores you've been allocated.&lt;br /&gt;
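As a sketch of how $SLURM_CPUS_ON_NODE can be used (the program name is hypothetical), a job script might pass the allocated core count to a threaded application:&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --nodes=1 --cpus-per-task=8

# Use the core count Slurm allocated; fall back to 1 if the
# variable is unset (e.g. when testing outside a job).
export OMP_NUM_THREADS=${SLURM_CPUS_ON_NODE:-1}
echo "running with $OMP_NUM_THREADS threads"

# ./my_openmp_program   # hypothetical application binary
```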
&lt;br /&gt;
=== Internode (MPI) jobs ===&lt;br /&gt;
''Inter''node jobs can utilize many cores on one or more nodes. Communicating between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI, but MPI also allows an application to run on multiple cores within a node. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;--cpus-per-task=''n''&amp;lt;/tt&amp;gt;', you would use '&amp;lt;tt&amp;gt;--nodes=''n'' --tasks-per-node=''m''&amp;lt;/tt&amp;gt;' ''or'' '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks=''o''&amp;lt;/tt&amp;gt;' for your sbatch request, where ''n'' is the number of nodes you want, ''m'' is the number of cores per node you need, and ''o'' is the total number of cores you need.&lt;br /&gt;
&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=6 --ntasks-per-node=4&amp;lt;/tt&amp;gt; will give you 4 cores on each of 6 nodes for a total of 24 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=40&amp;lt;/tt&amp;gt; will give you 40 cores spread across any number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=10 --ntasks=100&amp;lt;/tt&amp;gt; will give you a total of 100 cores across 10 nodes.&lt;br /&gt;
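Putting the first example above into a submit script might look like this sketch (the binary name is hypothetical, and cluster defaults may differ):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --nodes=6 --ntasks-per-node=4   # 24 MPI ranks total
#SBATCH --time=1:00:00
#SBATCH --mem-per-cpu=2G

# mpirun picks up the task layout from Slurm automatically
mpirun ./my_mpi_program   # hypothetical MPI-enabled binary
```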
&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
Memory requests are easiest when they are specified '''per core'''. For instance, if you specified '&amp;lt;tt&amp;gt;--ntasks=20 --mem-per-cpu=20G&amp;lt;/tt&amp;gt;', your job would have access to 400GB of memory in total.&lt;br /&gt;
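The arithmetic can be sketched as a quick check, using the numbers from the example above:&lt;br /&gt;

```shell
# 20 tasks, each with 20GB per core
ntasks=20
mem_per_cpu_gb=20
total_gb=$((ntasks * mem_per_cpu_gb))
echo "${total_gb}GB total"   # 400GB total
```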
== Other Handy Slurm Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs, not related to resource requests, is to have Slurm email you when a job changes its status. This may need two directives to sbatch:  &amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt;.&lt;br /&gt;
==== --mail-type ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; is used to tell Slurm to notify you about certain conditions. Options are comma separated and include the following&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Option!!Explanation&lt;br /&gt;
|-&lt;br /&gt;
| NONE || This disables event-based mail&lt;br /&gt;
|-&lt;br /&gt;
| BEGIN || Sends a notification when the job begins&lt;br /&gt;
|-&lt;br /&gt;
| END || Sends a notification when the job ends&lt;br /&gt;
|-&lt;br /&gt;
| FAIL || Sends a notification when the job fails.&lt;br /&gt;
|-&lt;br /&gt;
| REQUEUE || Sends a notification if the job is put back into the queue from a running state&lt;br /&gt;
|-&lt;br /&gt;
| STAGE_OUT || Burst buffer stage out and teardown completed&lt;br /&gt;
|-&lt;br /&gt;
| ALL || Equivalent to BEGIN,END,FAIL,REQUEUE,STAGE_OUT&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT || Notifies if the job ran out of time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_90 || Notifies when the job has used 90% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_80 || Notifies when the job has used 80% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_50 || Notifies when the job has used 50% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| ARRAY_TASKS || Modifies the BEGIN, END, and FAIL options to apply to each array task (instead of notifying for the entire job)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== --mail-user ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; is optional. It is only needed if you intend to send these job status updates to a different e-mail address than what you provided in the [https://acount.beocat.ksu.edu/user Account Request Page]. It is specified with the following arguments to sbatch: &amp;lt;tt&amp;gt;--mail-user=someone@somecompany.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
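For example, combining the two directives in a submit script (the address is a placeholder):&lt;br /&gt;

```shell
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=someone@somecompany.com
```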
&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-J ''JobName''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
&lt;br /&gt;
=== Separating Output Streams ===&lt;br /&gt;
Normally, Slurm will create one output file, containing both STDERR and STDOUT. If you want both of these to be separated into two files, you can use the sbatch directives '&amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt;' and '&amp;lt;tt&amp;gt;--error&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! option !! default !! example&lt;br /&gt;
|-&lt;br /&gt;
| --output || slurm-%j.out || slurm-206.out&lt;br /&gt;
|-&lt;br /&gt;
| --error || slurm-%j.out || slurm-206.out&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;tt&amp;gt;%j&amp;lt;/tt&amp;gt; above indicates that it should be replaced with the job id.&lt;br /&gt;
&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, Slurm runs your job from the &amp;quot;current working directory&amp;quot; you used when submitting the job. If you need your job to run from a different directory, use the '&amp;lt;tt&amp;gt;--chdir=''directory''&amp;lt;/tt&amp;gt;' directive.&lt;br /&gt;
=== Running in a specific class of machine ===&lt;br /&gt;
If you want to run on a specific class of machines, e.g., the Dwarves, you can add the flag &amp;quot;--constraint=dwarves&amp;quot; to select any of those machines.&lt;br /&gt;
&lt;br /&gt;
=== Processor Constraints ===&lt;br /&gt;
Because Beocat is a heterogeneous cluster (we have machines from many years in the cluster), not all of our processors support every new and fancy feature. You might have some applications that require newer processor features, so we provide a mechanism to request those.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--constraint&amp;lt;/tt&amp;gt; tells the cluster to apply constraints to the types of nodes that the job can run on. For instance, we know of several applications that must be run on chips that have &amp;quot;AVX&amp;quot; processor extensions. To do that, you would specify &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; on your ''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt;'' '''or''' ''&amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;'' command lines.&lt;br /&gt;
Using &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; will prohibit your job from running on the Mages, while &amp;lt;tt&amp;gt;--constraint=avx2&amp;lt;/tt&amp;gt; will eliminate the Elves as well as the Mages.&lt;br /&gt;
&lt;br /&gt;
=== Slurm Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to set up your scripts correctly. Here is a listing of environment variables that Slurm makes available to you. Of course, the values of these variables will differ based on many different factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
CUDA_VISIBLE_DEVICES=NoDevFiles&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
GPU_DEVICE_ORDINAL=NoDevFiles&lt;br /&gt;
HOSTNAME=dwarf37&lt;br /&gt;
SLURM_CHECKPOINT_IMAGE_DIR=/var/slurm/checkpoint&lt;br /&gt;
SLURM_CLUSTER_NAME=beocat&lt;br /&gt;
SLURM_CPUS_ON_NODE=1&lt;br /&gt;
SLURM_DISTRIBUTION=cyclic&lt;br /&gt;
SLURMD_NODENAME=dwarf37&lt;br /&gt;
SLURM_GTIDS=0&lt;br /&gt;
SLURM_JOB_CPUS_PER_NODE=1&lt;br /&gt;
SLURM_JOB_GID=163587&lt;br /&gt;
SLURM_JOB_ID=202&lt;br /&gt;
SLURM_JOBID=202&lt;br /&gt;
SLURM_JOB_NAME=slurm_simple.sh&lt;br /&gt;
SLURM_JOB_NODELIST=dwarf37&lt;br /&gt;
SLURM_JOB_NUM_NODES=1&lt;br /&gt;
SLURM_JOB_PARTITION=batch.q,killable.q&lt;br /&gt;
SLURM_JOB_QOS=normal&lt;br /&gt;
SLURM_JOB_UID=163587&lt;br /&gt;
SLURM_JOB_USER=mozes&lt;br /&gt;
SLURM_LAUNCH_NODE_IPADDR=10.5.16.37&lt;br /&gt;
SLURM_LOCALID=0&lt;br /&gt;
SLURM_MEM_PER_NODE=1024&lt;br /&gt;
SLURM_NNODES=1&lt;br /&gt;
SLURM_NODEID=0&lt;br /&gt;
SLURM_NODELIST=dwarf37&lt;br /&gt;
SLURM_NPROCS=1&lt;br /&gt;
SLURM_NTASKS=1&lt;br /&gt;
SLURM_PRIO_PROCESS=0&lt;br /&gt;
SLURM_PROCID=0&lt;br /&gt;
SLURM_SRUN_COMM_HOST=10.5.16.37&lt;br /&gt;
SLURM_SRUN_COMM_PORT=37975&lt;br /&gt;
SLURM_STEP_ID=0&lt;br /&gt;
SLURM_STEPID=0&lt;br /&gt;
SLURM_STEP_LAUNCHER_PORT=37975&lt;br /&gt;
SLURM_STEP_NODELIST=dwarf37&lt;br /&gt;
SLURM_STEP_NUM_NODES=1&lt;br /&gt;
SLURM_STEP_NUM_TASKS=1&lt;br /&gt;
SLURM_STEP_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_SUBMIT_DIR=/homes/mozes&lt;br /&gt;
SLURM_SUBMIT_HOST=dwarf37&lt;br /&gt;
SLURM_TASK_PID=23408&lt;br /&gt;
SLURM_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_TOPOLOGY_ADDR=due1121-prod-core-40g-a1,due1121-prod-core-40g-c1.due1121-prod-sw-100g-a9.dwarf37&lt;br /&gt;
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node&lt;br /&gt;
SLURM_UMASK=0022&lt;br /&gt;
SRUN_DEBUG=3&lt;br /&gt;
TERM=screen-256color&lt;br /&gt;
TMPDIR=/tmp&lt;br /&gt;
USER=mozes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is nice to know which hosts you have access to during a job; check SLURM_JOB_NODELIST for that. There are lots of other useful environment variables here; I will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly-used variables we see used are $SLURM_CPUS_ON_NODE, $HOSTNAME, and $SLURM_JOB_ID.&lt;br /&gt;
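As a small sketch (with fallbacks so it also works outside a job), these variables can be used to build per-job file names:&lt;br /&gt;

```shell
# Name the output file after the job id; fall back to the shell
# PID when run outside a Slurm job.
outfile="results-${SLURM_JOB_ID:-$$}.txt"
echo "host=$(hostname) cores=${SLURM_CPUS_ON_NODE:-1}" > "$outfile"
cat "$outfile"
```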
&lt;br /&gt;
== Running from a sbatch Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'sbatch --mem-per-cpu=2G --time=10:00 --cpus-per-task=8 -J MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which outlines all of these for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample sbatch script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of a line means the line is ignored.&lt;br /&gt;
## However, in the case of sbatch, lines beginning with #SBATCH are commands&lt;br /&gt;
## for sbatch itself, so I have taken the convention here of starting *every*&lt;br /&gt;
## line with a '#'. Just delete the first one if you want to use that line, and&lt;br /&gt;
## then modify it to your own purposes. The only exception here is the first&lt;br /&gt;
## line, which *must* be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## There is one strict rule for guaranteeing Slurm reads all of your options:&lt;br /&gt;
## Do not put *any* lines above your resource requests that aren't either:&lt;br /&gt;
##    1) blank. (no other characters)&lt;br /&gt;
##    2) comments (lines must begin with '#')&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##SBATCH --mem-per-cpu=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime in DD-HH:MM:SS form. Default is 1 hour (1:00:00)&lt;br /&gt;
##SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it.&lt;br /&gt;
##SBATCH --gres=fabric:ib:1&lt;br /&gt;
&lt;br /&gt;
## GPU directive. If You don't know what this is, you probably don't need it&lt;br /&gt;
##SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
## number of cores/nodes:&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from emailing us at beocat@cs.ksu.edu to see how we can assist in&lt;br /&gt;
## getting your job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
##SBATCH --cpus-per-task=1&lt;br /&gt;
##SBATCH --cpus-per-task=12&lt;br /&gt;
##SBATCH --nodes=2 --tasks-per-node=1&lt;br /&gt;
##SBATCH --tasks=20&lt;br /&gt;
&lt;br /&gt;
## Constraints for this job. Maybe you need to run on the elves&lt;br /&gt;
##SBATCH --constraint=elves&lt;br /&gt;
## or perhaps you just need avx processor extensions&lt;br /&gt;
##SBATCH --constraint=avx&lt;br /&gt;
&lt;br /&gt;
## Output file name. Default is slurm-%j.out where %j is the job id.&lt;br /&gt;
##SBATCH --output=MyJobTitle.o%j&lt;br /&gt;
&lt;br /&gt;
## Split the errors into a separate file. Default is the same as output&lt;br /&gt;
##SBATCH --error=MyJobTitle.e%j&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##SBATCH -J MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## Send email when certain criteria are met.&lt;br /&gt;
## Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to&lt;br /&gt;
## BEGIN, END, FAIL, REQUEUE,  and  STAGE_OUT),  STAGE_OUT  (burst buffer stage&lt;br /&gt;
## out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent&lt;br /&gt;
## of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),&lt;br /&gt;
## TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send&lt;br /&gt;
## emails for each array task). Multiple type values may be specified in a&lt;br /&gt;
## comma separated list. Unless the  ARRAY_TASKS  option  is specified, mail&lt;br /&gt;
## notifications on job BEGIN, END and FAIL apply to a job array as a whole&lt;br /&gt;
## rather than generating individual email messages for each task in the job&lt;br /&gt;
## array.&lt;br /&gt;
##SBATCH --mail-type=ALL&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
## Default is to send the mail to the e-mail address entered on the account&lt;br /&gt;
## request form.&lt;br /&gt;
##SBATCH --mail-user myemail@ksu.edu&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== File Access ==&lt;br /&gt;
Beocat has a variety of options for storing and accessing your files.  &lt;br /&gt;
Every user has a home directory for general use, which is limited in size but has decent file-access performance.  Those needing more storage may purchase /bulk subdirectories, which have the same decent performance&lt;br /&gt;
but are not backed up. The /fastscratch file system is a ZFS host with many NVMe drives that provides much faster&lt;br /&gt;
temporary file access.  When fast IO is critical to application performance, access to /fastscratch, the local disk on each node, or a&lt;br /&gt;
RAM disk are the best options.&lt;br /&gt;
&lt;br /&gt;
===Home directory===&lt;br /&gt;
&lt;br /&gt;
Every user has a &amp;lt;tt&amp;gt;/homes/''username''&amp;lt;/tt&amp;gt; directory that they drop into when they log into Beocat.  &lt;br /&gt;
The home directory is for general use and provides decent performance for most file IO.  &lt;br /&gt;
Disk space in each home directory is limited to 1 TB, so larger files should be kept in a purchased /bulk&lt;br /&gt;
directory, and there is a limit of 100,000 files in each subdirectory in your account.&lt;br /&gt;
This file system is fully redundant, so 3 specific hard disks would need to fail before any data was lost.&lt;br /&gt;
All files will soon be backed up nightly to a separate file server in Nichols Hall, so if you do accidentally &lt;br /&gt;
delete something it can be recovered.&lt;br /&gt;
&lt;br /&gt;
===Bulk directory===&lt;br /&gt;
&lt;br /&gt;
Bulk data storage may be provided at a cost of $45/TB/year billed monthly. Due to the cost, directories will be provided when we are contacted and provided with payment information.&lt;br /&gt;
&lt;br /&gt;
===Fast Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /fastscratch file system is faster than /bulk or /homes.&lt;br /&gt;
In order to use fastscratch, you first need to make a directory for yourself.  &lt;br /&gt;
Fast Scratch is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the fastscratch file system may get purged after 30 days.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /fastscratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Local disk===&lt;br /&gt;
&lt;br /&gt;
If you are running on a single node, it may also be faster to access your files from the local disk&lt;br /&gt;
on that node.  Each job creates a subdirectory /tmp/job# where '#' is the job ID number on the&lt;br /&gt;
local disk of each node the job uses.  This can be accessed simply by writing to /tmp rather than&lt;br /&gt;
needing to use /tmp/job#.  &lt;br /&gt;
&lt;br /&gt;
You may need to copy files to&lt;br /&gt;
the local disk at the start of your script, or set the output directory for your application to point&lt;br /&gt;
to the local disk. You'll then need to copy any files you want to keep off the local disk before&lt;br /&gt;
the job finishes, since Slurm will remove all files in your job's /tmp directory when the job completes&lt;br /&gt;
or aborts.  Use 'kstat -l -h' to see how much /tmp space is available on each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files to the tmp directory if needed&lt;br /&gt;
cp $input_files /tmp&lt;br /&gt;
&lt;br /&gt;
# Make an 'out' directory to pass to the app if needed&lt;br /&gt;
mkdir /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Example of running an app and passing the tmp directory in/out&lt;br /&gt;
app -input_directory /tmp -output_directory /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Copy the 'out' directory back to the current working directory after the run&lt;br /&gt;
cp -rp /tmp/out .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAM disk===&lt;br /&gt;
&lt;br /&gt;
If you need ultrafast access to files, you can use a RAM disk which is a file system set up in the &lt;br /&gt;
memory of the compute node you are running on.  The RAM disk is limited to the requested memory on that node, so you should account for this usage when you request &lt;br /&gt;
memory for your job. Below is an example of how to use the RAM disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files over if necessary&lt;br /&gt;
cp $any_input_files /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Run the application, possibly giving it the path to the RAM disk to use for output files&lt;br /&gt;
app -output_directory /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Copy files from the RAM disk to the current working directory and clean it up&lt;br /&gt;
cp /dev/shm/* .&lt;br /&gt;
rm -rf /dev/shm/*&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When you leave KSU===&lt;br /&gt;
&lt;br /&gt;
If you are done with your account and leaving KSU, please clean up your directory, move any files&lt;br /&gt;
to your supervisor's account that need to be kept after you leave, and notify us so that we can disable your&lt;br /&gt;
account.  The easiest way to move your files to your supervisor's account is for them to set up&lt;br /&gt;
a subdirectory for you with the appropriate write permissions.  The example below shows moving &lt;br /&gt;
just a user's 'data' subdirectory to their supervisor.  The 'nohup' command is used so that the move will &lt;br /&gt;
continue even if the window you are doing the move from gets disconnected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Supervisor:&lt;br /&gt;
mkdir /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&lt;br /&gt;
# Student:&lt;br /&gt;
nohup mv /homes/$USER/data /bulk/$SUPERVISOR_USERNAME/$USER &amp;amp;&lt;br /&gt;
&lt;br /&gt;
# Once the move is complete, the Supervisor should limit the permissions for the directory again by removing the student's access:&lt;br /&gt;
chown $USER: -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==File Sharing==&lt;br /&gt;
&lt;br /&gt;
This section will cover methods of sharing files with other users within Beocat and on remote systems.&lt;br /&gt;
In the past, Beocat users have been allowed to keep their&lt;br /&gt;
/homes and /bulk directories open so that any other user could&lt;br /&gt;
access files.  In order to bring Beocat into alignment with&lt;br /&gt;
State of Kansas regulations and industry norms, all users must now have their /homes, /bulk, /scratch, and /fastscratch directories&lt;br /&gt;
locked down from other users. They can still share files and directories within their group or with individual users&lt;br /&gt;
using group and individual ACLs (Access Control Lists), which are explained below.&lt;br /&gt;
Beocat staff are exempted from this&lt;br /&gt;
policy, as we need to work freely with all users, and will manage our&lt;br /&gt;
subdirectories to minimize access.&lt;br /&gt;
&lt;br /&gt;
===Securing your home directory with the setacls script===&lt;br /&gt;
&lt;br /&gt;
If you do not wish to share files or directories with other users, you do not need to do anything,&lt;br /&gt;
as rwx access for others has already been removed.&lt;br /&gt;
If you want to share files or directories, you can either use the '''setacls''' script or configure&lt;br /&gt;
the ACLs (Access Control Lists) manually.&lt;br /&gt;
&lt;br /&gt;
Running '''setacls -h''' will show how to use the script.&lt;br /&gt;
  &lt;br /&gt;
  Eos: setacls -h&lt;br /&gt;
  setacls [-r] [-w] [-g group] [-u user] -d /full/path/to/directory&lt;br /&gt;
  Execute pemission will always be applied, you may also choose r or w&lt;br /&gt;
  Must specify at least one group or user&lt;br /&gt;
  Must specify at least one directory, and it must be the full path&lt;br /&gt;
  Example: setacls -r -g ksu-cis-hpc -u mozes -d /homes/daveturner/shared_dir&lt;br /&gt;
&lt;br /&gt;
You can specify the permissions to be either -r for read or -w for write, or you can specify both.&lt;br /&gt;
You can provide a priority group to share with, which is the same as the group used in a --partition=&lt;br /&gt;
statement in a job submission script.  You can also specify users.&lt;br /&gt;
You can specify a file or a directory to share.  If a directory is specified, then all files in that&lt;br /&gt;
directory will also be shared, and all files created in the directory later will also be shared.&lt;br /&gt;
&lt;br /&gt;
The script will set everything up for you, telling you the commands it is executing along the way,&lt;br /&gt;
then show the resulting ACLs at the end with the '''getfacl''' command.&lt;br /&gt;
&lt;br /&gt;
====Manually configuring your ACLs====&lt;br /&gt;
&lt;br /&gt;
If you want to manually configure the ACLs, you can use the directions below to do what the '''setacls'''&lt;br /&gt;
script would do for you.&lt;br /&gt;
You first need to provide the minimum execute access to your /homes&lt;br /&gt;
or /bulk directory before sharing individual subdirectories.  Setting the ACL to execute only will allow those&lt;br /&gt;
in your group to get access to subdirectories, while withholding read access means they will not&lt;br /&gt;
be able to list the other files and subdirectories in your main directory. Keep in mind that they can still access them&lt;br /&gt;
if they know the names, so you may want to lock those down manually.  Below is an example of how I would change my&lt;br /&gt;
/homes/daveturner directory to allow the ksu-cis-hpc group execute access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:X /homes/daveturner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your research group owns any nodes on Beocat, then you have a group name that can be used to securely share&lt;br /&gt;
files with others within your group.  Below is an example of creating a directory called 'share_hpc',&lt;br /&gt;
then providing access to my ksu-cis-hpc group&lt;br /&gt;
(my group is ksu-cis-hpc, so I submit jobs to --partition=ksu-cis-hpc.q).&lt;br /&gt;
Using -R will make these changes recursively to all files and directories in that subdirectory, while changing the defaults with the setfacl -d command will ensure that files and directories created&lt;br /&gt;
later will be created with these same ACLs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the 'share_hpc' directory.  If you also want&lt;br /&gt;
them to be able to write or modify files in that directory, change the ':rX' to ':rwX' instead, e.g. 'setfacl -d -m g:ksu-cis-hpc:rwX -R share_hpc'.&lt;br /&gt;
&lt;br /&gt;
If you want to know what groups you belong to, use the command below.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
groups&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If your group does not own any nodes, you can still request a group name and manage the participants yourself&lt;br /&gt;
by emailing us at beocat@cs.ksu.edu.&lt;br /&gt;
If you want to share a directory with only a few people you can manage your ACLs using individual usernames&lt;br /&gt;
instead of with a group.&lt;br /&gt;
&lt;br /&gt;
You can use the '''getfacl''' command to see which groups have access to a given directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
getfacl share_hpc&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ACLs give you great flexibility in controlling file access at the&lt;br /&gt;
group level.  Below is a more advanced example where I set up a directory to be shared with&lt;br /&gt;
my ksu-cis-hpc group, Dan's ksu-cis-dan group, and an individual user 'mozes' who I also want&lt;br /&gt;
to have write access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
getfacl share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc_dan_mozes&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  user:mozes:rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  group:ksu-cis-dan:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:user:mozes:rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:group:ksu-cis-dan:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Openly sharing files on the web===&lt;br /&gt;
&lt;br /&gt;
If you create a 'public_html' directory in your home directory, then any files put there will be shared&lt;br /&gt;
openly on the web.  There is no way to restrict who has access to those files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
cd&lt;br /&gt;
mkdir public_html&lt;br /&gt;
# Opt-in to letting the webserver access your home directory:&lt;br /&gt;
setfacl -m g:public_html:x ~/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then access the data from a web browser using the URL:&lt;br /&gt;
&lt;br /&gt;
http://people.beocat.ksu.edu/~your_user_name&lt;br /&gt;
&lt;br /&gt;
This will show a list of the files you have in your public_html subdirectory.&lt;br /&gt;
&lt;br /&gt;
===Globus===&lt;br /&gt;
&lt;br /&gt;
We have a dedicated page for [[Globus]].&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of Slurm's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to sbatch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  --array=n[-m[:s]]&lt;br /&gt;
     Submits a so called Array Job, i.e. an array of identical tasks being differentiated only by an index number and being treated by Slurm&lt;br /&gt;
     almost like a series of jobs. The option argument to --array specifies the number of array job tasks and the index number which will be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SLURM_ARRAY_TASK_ID. The option&lt;br /&gt;
     arguments n, and m will be available through the environment variables SLURM_ARRAY_TASK_MIN and SLURM_ARRAY_TASK_MAX.&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or a range with a step size.&lt;br /&gt;
     Hence, the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SLURM_ARRAY_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array jobs are commonly used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks follow a slightly different naming convention (which can be controlled in the same way as mentioned above).&lt;br /&gt;
 &lt;br /&gt;
     slurm-%A_%a.out&lt;br /&gt;
&lt;br /&gt;
     %A is the SLURM_ARRAY_JOB_ID, and %a is the SLURM_ARRAY_TASK_ID&lt;br /&gt;
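As a plain-bash illustration (no Slurm needed), the set of task IDs that a range like 2-10:2 expands to can be previewed with seq:&lt;br /&gt;

```shell
# Expand the --array=n-m:s index set by hand: here n=2, m=10, s=2,
# matching the 2-10:2 example above. Slurm computes the same set.
n=2; m=10; s=2
ids=$(seq "$n" "$s" "$m")
echo $ids      # prints: 2 4 6 8 10
```

Each of those five IDs becomes one task, delivered to the task via SLURM_ARRAY_TASK_ID.&lt;br /&gt;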
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses; one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1, that I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable and submit the script again. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=50-200:50&lt;br /&gt;
RUNSIZE=$SLURM_ARRAY_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and Slurm understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to take each line of my dataset and generate a new submit script and submit the job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     sbatch ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as sbatch has to verify each job as it is submitted. This can be done easily with array jobs, as long as you know the number of lines in the dataset. This number can be obtained with 'wc -l dataset.txt'; in this case let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses command substitution via backticks, and has the sed command print only line number $SLURM_ARRAY_TASK_ID of the file dataset.txt.&lt;br /&gt;
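To see how this line selection works outside of Slurm, you can build a small stand-in dataset and set SLURM_ARRAY_TASK_ID by hand (the file name and its contents here are made up for the demo):&lt;br /&gt;

```shell
# Build a tiny stand-in dataset, then pick a single line by number,
# exactly as each array task does with its own SLURM_ARRAY_TASK_ID.
printf 'run A\nrun B\nrun C\n' > /tmp/dataset_demo.txt
SLURM_ARRAY_TASK_ID=2          # set by Slurm inside a real array task
sed -n "${SLURM_ARRAY_TASK_ID}p" /tmp/dataset_demo.txt   # prints: run B
```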
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so sbatch doesn't have to verify as many.&lt;br /&gt;
&lt;br /&gt;
To give you an idea of the time saved: submitting 1 job takes 1-2 seconds. By extension, submitting 5,000 jobs takes 5,000-10,000 seconds, or roughly 1.5-3 hours.&lt;br /&gt;
&lt;br /&gt;
== Checkpoint/Restart using DMTCP ==&lt;br /&gt;
&lt;br /&gt;
DMTCP is Distributed Multi-Threaded CheckPoint software that will checkpoint your application without modification, and&lt;br /&gt;
can be set up to automatically restart your job from the last checkpoint if for example the node you are running on fails.  &lt;br /&gt;
This has been tested successfully&lt;br /&gt;
on Beocat for some scalar and OpenMP codes, but has failed on all MPI tests so far.  We would like to encourage users to&lt;br /&gt;
try DMTCP out if their non-MPI jobs run longer than 24 hours.  If you want to try this, please contact us first since we are still&lt;br /&gt;
experimenting with DMTCP.&lt;br /&gt;
&lt;br /&gt;
The sample job submission script below shows how dmtcp_launch is used to start the application, then dmtcp_restart is used to start from a checkpoint if the job has failed and been rescheduled.&lt;br /&gt;
If you are putting this in an array script, then add the Slurm array task ID to the end of the checkpoint directory name&lt;br /&gt;
like &amp;lt;B&amp;gt;ckptdir=ckpt-$SLURM_ARRAY_TASK_ID&amp;lt;/B&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --mem=50G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS/2016.4-foss-2017beocatb-hybrid&lt;br /&gt;
  module load DMTCP&lt;br /&gt;
  module list&lt;br /&gt;
  &lt;br /&gt;
  ckptdir=ckpt&lt;br /&gt;
  mkdir -p $ckptdir&lt;br /&gt;
  export DMTCP_CHECKPOINT_DIR=$ckptdir&lt;br /&gt;
  &lt;br /&gt;
  if ! ls -1 $ckptdir | grep -c dmtcp_restart_script &amp;gt; /dev/null&lt;br /&gt;
  then&lt;br /&gt;
     echo &amp;quot;Using dmtcp_launch to start the app the first time&amp;quot;&lt;br /&gt;
     dmtcp_launch --no-coordinator mpirun -np 1 -x OMP_NUM_THREADS=4 gmx_mpi mdrun -nsteps 50000 -ntomp 4 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  else&lt;br /&gt;
     echo &amp;quot;Using dmtcp_restart from $ckptdir to continue from a checkpoint&amp;quot;&lt;br /&gt;
     dmtcp_restart $ckptdir/*.dmtcp&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
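The launch-or-restart decision in the script above comes down to checking whether DMTCP has written a restart script into the checkpoint directory yet. A minimal sketch of just that test, using an empty placeholder file instead of real DMTCP output:&lt;br /&gt;

```shell
# Mimic the launch-or-restart branch from the job script above.
# An empty placeholder file stands in for real DMTCP output.
ckptdir=$(mktemp -d)
check() {
    if ! ls -1 "$ckptdir" | grep -c dmtcp_restart_script > /dev/null; then
        echo "first launch"
    else
        echo "restart"
    fi
}
check                                      # prints: first launch
touch "$ckptdir/dmtcp_restart_script.sh"   # as DMTCP would after a checkpoint
check                                      # prints: restart
```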
You will need to run several tests to verify that DMTCP is working properly with your application.&lt;br /&gt;
First, run a short test without DMTCP and another with DMTCP with the checkpoint interval set to 5 minutes&lt;br /&gt;
by adding the line &amp;lt;B&amp;gt;export DMTCP_CHECKPOINT_INTERVAL=300&amp;lt;/B&amp;gt; to your script.  Then use &amp;lt;B&amp;gt;kstat -d 1&amp;lt;/B&amp;gt; to&lt;br /&gt;
check that the memory in both runs is close to the same.  Also use this information to calculate the time &lt;br /&gt;
that each checkpoint takes.  In most cases I've seen times less than a minute for checkpointing that will normally&lt;br /&gt;
be done once each hour.  If your application is taking more time, let us know.  Sometimes this can be sped up&lt;br /&gt;
by simply turning off compression by adding the line &amp;lt;B&amp;gt;export DMTCP_GZIP=0&amp;lt;/B&amp;gt;.  Make sure to remove the&lt;br /&gt;
line where you set the checkpoint interval to 300 seconds so that the default time of once per hour will be used.&lt;br /&gt;
&lt;br /&gt;
After verifying that your code completes using DMTCP and does not take significantly more time or memory, you&lt;br /&gt;
will need to start a run then &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; it after the first checkpoint, then resubmit the same script to make &lt;br /&gt;
sure that it restarts and runs to completion.  If you are working with an array job script, the last step is to try a few&lt;br /&gt;
array tasks at once to make sure there is no conflict between the jobs.&lt;br /&gt;
&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'srun'. srun uses the exact same command-line arguments as sbatch, but you need to add the following arguments at the end: &amp;lt;tt&amp;gt;--pty bash&amp;lt;/tt&amp;gt;. If no node is available with your resource requirements, srun will tell you something like the following:&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
 srun: Force Terminated job 217&lt;br /&gt;
 srun: error: CPU count per node can not be satisfied&lt;br /&gt;
 srun: error: Unable to allocate resources: Requested node configuration is not available&lt;br /&gt;
Note that, like sbatch, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
&lt;br /&gt;
== Connecting to an existing job ==&lt;br /&gt;
You can connect to an existing job using &amp;lt;B&amp;gt;srun&amp;lt;/B&amp;gt; in the same way that the &amp;lt;B&amp;gt;MonitorNode&amp;lt;/B&amp;gt; command&lt;br /&gt;
allowed us to in the old cluster.  This is essentially like using ssh to get into the node where your job is running, which&lt;br /&gt;
can be very useful in allowing you to look at files in /tmp/job# or in running &amp;lt;B&amp;gt;htop&amp;lt;/B&amp;gt; to view the &lt;br /&gt;
activity level for your job.&lt;br /&gt;
&lt;br /&gt;
 srun --jobid=# --pty bash                        where '#' is the job ID number&lt;br /&gt;
&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not support users to modify job parameters once the job has been submitted. It can be done, but there are numerous catches, and all of the variations can be a bit problematic; it is normally easier to simply delete the job (using '''scancel ''jobid''''') and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
&lt;br /&gt;
As it is unsupported, this is an exercise left to the reader. A starting point is &amp;lt;tt&amp;gt;man scontrol&amp;lt;/tt&amp;gt;.&lt;br /&gt;
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;--gres=killable:1&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
&lt;br /&gt;
''Note: This is a submit-time only request, it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;scancel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
Some users are members of projects that have contributed to Beocat. When a group has contributed nodes, the group gets access to a &amp;quot;partition&amp;quot; giving its members priority on those nodes.&lt;br /&gt;
&lt;br /&gt;
In most situations, the scheduler will automatically add those priority partitions to the jobs as submitted. You should not need to include a partition list in your job submission.&lt;br /&gt;
&lt;br /&gt;
There are currently just a few exceptions that we will not automatically add:&lt;br /&gt;
* ksu-chem-mri.q&lt;br /&gt;
* ksu-gen-gpu.q&lt;br /&gt;
* ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
If you have access to any of the non-automatic partitions and need the resources in that partition, you can alter your &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines to include your new partition:&lt;br /&gt;
 #SBATCH --partition=ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
Otherwise, you shouldn't modify the partition line at all unless you really know what you're doing.&lt;br /&gt;
&lt;br /&gt;
== Graphical Applications ==&lt;br /&gt;
Some applications are graphical and need to have some graphical input/output. We currently accomplish this with X11 forwarding or [[OpenOnDemand]]&lt;br /&gt;
=== OpenOnDemand ===&lt;br /&gt;
[[OpenOnDemand]] is likely the easier and more performant way to run a graphical application on the cluster.&lt;br /&gt;
# visit [https://ondemand.beocat.ksu.edu/ ondemand] and login with your cluster credentials.&lt;br /&gt;
# Check the &amp;quot;Interactive Apps&amp;quot; dropdown. We may have a workflow ready for you. If not choose the desktop.&lt;br /&gt;
# Select the resources you need&lt;br /&gt;
# Select launch&lt;br /&gt;
# A job is now submitted to the cluster and once the job is started you'll see a Connect button&lt;br /&gt;
# use the app as needed. If using the desktop, start your graphical application.&lt;br /&gt;
&lt;br /&gt;
=== X11 Forwarding ===&lt;br /&gt;
==== Connecting with an X11 client ====&lt;br /&gt;
===== Windows =====&lt;br /&gt;
If you are running Windows, we recommend MobaXTerm as your file/ssh manager because it is one relatively simple tool that does everything. MobaXTerm also connects with X11 forwarding enabled automatically.&lt;br /&gt;
===== Linux/OSX =====&lt;br /&gt;
Both Linux and OSX can connect in an X11 forwarding mode. Linux will have all of the tools you need installed already, OSX will need [https://www.xquartz.org/ XQuartz] installed.&lt;br /&gt;
&lt;br /&gt;
Then you will need to change your 'ssh' command slightly:&lt;br /&gt;
&lt;br /&gt;
 ssh -Y eid@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
The '''-Y''' argument tells ssh to setup X11 forwarding.&lt;br /&gt;
==== Starting a Graphical Job ====&lt;br /&gt;
All graphical jobs, by design, must be interactive, so we'll use the srun command. On a headnode, we run the following:&lt;br /&gt;
 # load an X11 enabled application&lt;br /&gt;
 module load Octave&lt;br /&gt;
 # start an X11 job, sbatch arguments are accepted for srun as well, 1 node, 1 hour, 1 gb of memory&lt;br /&gt;
 srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 octave --gui&lt;br /&gt;
&lt;br /&gt;
Because these jobs are interactive, they may not be able to run at all times, depending on how busy the scheduler is at any point in time. '''--pty --x11''' are required arguments setting up the job, and '''octave --gui''' is the command to run inside the job.&lt;br /&gt;
&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The sacct tool will read Slurm's accounting database and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== sacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
sacct -j 1122334455 -l&lt;br /&gt;
# if you don't know the job id, you can look at your jobs started since some day:&lt;br /&gt;
sacct -S 2017-01-01&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|218||218||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||12||00:00:00||FAILED||2:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=12,mem=1G,node=1||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.batch||218.batch||batch||||137940K||dwarf37||0||137940K||1576K||dwarf37||0||1576K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||1.36G||0||0||0||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.0||218.0||qqqqstat||||204212K||dwarf37||0||204212K||1420K||dwarf37||0||1420K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||196.52M||Unknown||Unknown||Unknown||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the columns showing Elapsed and State, you can see that they show 00:00:00 and FAILED respectively. This means that the job started and then promptly ended. This points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|220||220||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:01:27||TIMEOUT||0:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=1,mem=1G,node=1||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.batch||220.batch||batch||||370716K||dwarf37||0||370716K||7060K||dwarf37||0||7060K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:28||CANCELLED||0:15||1.23G||0||0||0||1Gn||0||0.16M||dwarf37||0||0.16M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.0||220.0||sleep||||204212K||dwarf37||0||107916K||1000K||dwarf37||0||620K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:27||CANCELLED||0:15||1.54G||Unknown||Unknown||Unknown||1Gn||0||0.05M||dwarf37||0||0.05M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we can see some pointers to the issue. The job ran out of time (TIMEOUT) and then was killed (CANCELLED).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|221||221||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:00:00||CANCELLED by 0||0:0||||Unknown||Unknown||Unknown||1Mn||||||||||||||||||||||||cpu=1,mem=1M,node=1||cpu=1,mem=1M,node=1&lt;br /&gt;
|-&lt;br /&gt;
|221.batch||221.batch||batch||||137940K||dwarf37||0||137940K||1144K||dwarf37||0||1144K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:00:01||CANCELLED||0:15||2.62G||0||0||0||1Mn||0||0||dwarf37||65534||0||0||dwarf37||65534||0||||||||cpu=1,mem=1M,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we see it was &amp;quot;CANCELLED by 0&amp;quot;, then we look at the AllocTRES column to see our allocated resources, and see that 1MB of memory was granted. Combine that with the column &amp;quot;MaxRSS&amp;quot; and we see that the memory granted was less than the memory we tried to use, thus the job was &amp;quot;CANCELLED&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=945</id>
		<title>AdvancedSlurm</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=945"/>
		<updated>2023-08-09T19:55:39Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Local disk */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SlurmBasics]] page, we have a couple other requestable resources:&lt;br /&gt;
 Valid gres options are:&lt;br /&gt;
 gpu[[:type]:count]&lt;br /&gt;
 fabric[[:type]:count]&lt;br /&gt;
Generally, if you don't know if you need a particular resource, you should use the default. These can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;srun --gres=help&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Fabric ===&lt;br /&gt;
We currently offer 3 &amp;quot;fabrics&amp;quot; as request-able resources in Slurm. The &amp;quot;count&amp;quot; specified is the line-rate (in Gigabits-per-second) of the connection on the node.&lt;br /&gt;
==== Infiniband ====&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. InfiniBand does absolutely no good if running on a single machine. InfiniBand is a high-speed host-to-host communication fabric. It is (most-often) used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could run just fine, except that the submitter requested InfiniBand, and all the nodes with InfiniBand were currently busy. In fact, some of our fastest nodes do not have InfiniBand, so by requesting it when you don't need it, you are actually slowing down your job. To request Infiniband, add &amp;lt;tt&amp;gt;--gres=fabric:ib:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
==== ROCE ====&lt;br /&gt;
ROCE, like InfiniBand is a high-speed host-to-host communication layer. Again, used most often with MPI. Most of our nodes are ROCE enabled, but this will let you guarantee the nodes allocated to your job will be able to communicate with ROCE. To request ROCE, add &amp;lt;tt&amp;gt;--gres=fabric:roce:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
&lt;br /&gt;
==== Ethernet ====&lt;br /&gt;
Ethernet is another communication fabric. All of our nodes are connected by ethernet; this option simply allows you to specify the interconnect speed. Speeds are selected in units of Gbps, with all nodes supporting 1Gbps or above. The currently available speeds for ethernet are: &amp;lt;tt&amp;gt;1, 10, 40, and 100&amp;lt;/tt&amp;gt;. To select nodes with 40Gbps and above, you could specify &amp;lt;tt&amp;gt;--gres=fabric:eth:40&amp;lt;/tt&amp;gt; on your sbatch command-line.  Since ethernet is used to connect to the file server, this can be used to select nodes that have fast access for applications doing heavy IO.  The Dwarves and Heroes have 40 Gbps ethernet and we measure single-stream performance as high as 20 Gbps, but if your application&lt;br /&gt;
requires heavy IO you'll want to avoid the Moles, which are connected to the file server with only 1 Gbps ethernet.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. 'kstat -g' will show you the GPU nodes and the jobs running on them.  To request a GPU node, add &amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt; for example to request 1 GPU for your job; if your job uses multiple nodes, the number of GPUs requested is per-node.  You can also request a given type of GPU (kstat -g -l to show types) by using &amp;lt;tt&amp;gt;--gres=gpu:geforce_gtx_1080_ti:1&amp;lt;/tt&amp;gt; for a 1080Ti GPU on the Wizards or Dwarves, &amp;lt;tt&amp;gt;--gres=gpu:quadro_gp100:1&amp;lt;/tt&amp;gt; for the P100 GPUs on Wizard20-21 that are best for 64-bit codes like Vasp.  Most of these GPU nodes are owned by various groups.  If you want access to GPU nodes and your group does not own any, we can add you to the &amp;lt;tt&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/tt&amp;gt; group that has priority on Dwarf36-39.  For more information on compiling CUDA code click on this [[CUDA]] link.&lt;br /&gt;
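As a sketch, the &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines for a job wanting a single 1080Ti might look like the fragment below; the time and memory values are arbitrary examples, and the type string must match one shown by kstat -g -l:&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --time=4:00:00
#SBATCH --mem=8G
# Request one GPU of a specific type; drop the type (--gres=gpu:1)
# to accept any available GPU.
#SBATCH --gres=gpu:geforce_gtx_1080_ti:1
```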
&lt;br /&gt;
A listing of the current types of gpus can be gathered with this command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
scontrol show nodes | grep CfgTRES | tr ',' '\n' | awk -F '[:=]' '/gres\/gpu:/ { print $2 }' | sort -u&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
At the time of this writing, that command produces this list:&lt;br /&gt;
* geforce_gtx_1080_ti&lt;br /&gt;
* geforce_rtx_2080_ti&lt;br /&gt;
* geforce_rtx_3090&lt;br /&gt;
* quadro_gp100&lt;br /&gt;
* rtx_a4000&lt;br /&gt;
&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
''Intra''node jobs run on many cores in the same node. These jobs can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or any programming language that has the concept of ''threads''. Often, your program will need to know how many cores you want it to use, and many will use all available cores if not told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the sbatch directives '&amp;lt;tt&amp;gt;--nodes=1 --cpus-per-task=n&amp;lt;/tt&amp;gt;' or '&amp;lt;tt&amp;gt;--nodes=1 --ntasks-per-node=n&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $SLURM_CPUS_ON_NODE to tell how many cores you've been allocated.&lt;br /&gt;
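The pattern above can be sketched as a minimal submit script; the program name is a placeholder for your own threaded application.&lt;br /&gt;

```bash
#!/bin/bash
#SBATCH --nodes=1 --cpus-per-task=8

# Use however many cores Slurm actually granted; fall back to 1 outside a job
export OMP_NUM_THREADS=${SLURM_CPUS_ON_NODE:-1}
echo "running with $OMP_NUM_THREADS threads"

# Replace the echo with your real program, e.g.:
# ./my_threaded_program
```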
&lt;br /&gt;
=== Internode (MPI) jobs ===&lt;br /&gt;
''Inter''node jobs can utilize many cores on one or more nodes. Communicating between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI, but MPI also allows an application to run on multiple cores within a node. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;--cpus-per-task=''n''&amp;lt;/tt&amp;gt;', you would use '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks-per-node=''m''&amp;lt;/tt&amp;gt;' ''or'' '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks=''o''&amp;lt;/tt&amp;gt;' for your sbatch request, where ''n'' is the number of nodes you want, ''m'' is the number of cores per node you need, and ''o'' is the total number of cores you need.&lt;br /&gt;
&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=6 --ntasks-per-node=4&amp;lt;/tt&amp;gt; will give you 4 cores on each of 6 nodes for a total of 24 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=40&amp;lt;/tt&amp;gt; will give you 40 cores spread across any number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=10 --ntasks=100&amp;lt;/tt&amp;gt; will give you a total of 100 cores across 10 nodes.&lt;br /&gt;
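Putting one of those requests into a hypothetical MPI submit script (the program path is a placeholder):&lt;br /&gt;

```bash
#!/bin/bash
#SBATCH --nodes=6 --ntasks-per-node=4

# 6 nodes x 4 tasks per node = 24 MPI ranks in total; inside a job,
# Slurm exports the count as SLURM_NTASKS
ranks=${SLURM_NTASKS:-$((6 * 4))}
echo "launching $ranks MPI ranks"

# Replace the echo with your real MPI program, e.g.:
# mpirun $HOME/path/MpiJobName
```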
&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
Memory requests are easiest when they are specified '''per core'''. For instance, if you specified the following: '&amp;lt;tt&amp;gt;--ntasks=20 --mem-per-cpu=20G&amp;lt;/tt&amp;gt;', your job would have access to 400GB of memory total.&lt;br /&gt;
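The arithmetic is simply the task count times the per-core request; a quick sanity check of that example:&lt;br /&gt;

```bash
#!/bin/bash
# Total memory = number of tasks x memory per CPU (core)
ntasks=20
mem_per_cpu_gb=20
total_gb=$(( ntasks * mem_per_cpu_gb ))
echo "requesting ${total_gb}G of memory in total"   # 400G
```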
== Other Handy Slurm Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used job-submission options not related to resource requests is having Slurm email you when a job changes its status. This may require two directives to sbatch:  &amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt;.&lt;br /&gt;
==== --mail-type ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; is used to tell Slurm to notify you about certain conditions. Options are comma-separated and include the following:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Option!!Explanation&lt;br /&gt;
|-&lt;br /&gt;
| NONE || This disables event-based mail&lt;br /&gt;
|-&lt;br /&gt;
| BEGIN || Sends a notification when the job begins&lt;br /&gt;
|-&lt;br /&gt;
| END || Sends a notification when the job ends&lt;br /&gt;
|-&lt;br /&gt;
| FAIL || Sends a notification when the job fails.&lt;br /&gt;
|-&lt;br /&gt;
| REQUEUE || Sends a notification if the job is put back into the queue from a running state&lt;br /&gt;
|-&lt;br /&gt;
| STAGE_OUT || Burst buffer stage out and teardown completed&lt;br /&gt;
|-&lt;br /&gt;
| ALL || Equivalent to BEGIN,END,FAIL,REQUEUE,STAGE_OUT&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT || Notifies if the job ran out of time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_90 || Notifies when the job has used 90% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_80 || Notifies when the job has used 80% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_50 || Notifies when the job has used 50% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| ARRAY_TASKS || Modifies the BEGIN, END, and FAIL options to apply to each array task (instead of notifying once for the entire job)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== --mail-user ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; is optional. It is only needed if you intend to send these job status updates to a different e-mail address than the one you provided on the [https://acount.beocat.ksu.edu/user Account Request Page]. It is specified with the following argument to sbatch: &amp;lt;tt&amp;gt;--mail-user=someone@somecompany.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-J ''JobName''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
&lt;br /&gt;
=== Separating Output Streams ===&lt;br /&gt;
Normally, Slurm will create one output file, containing both STDERR and STDOUT. If you want both of these to be separated into two files, you can use the sbatch directives '&amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt;' and '&amp;lt;tt&amp;gt;--error&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! option !! default !! example&lt;br /&gt;
|-&lt;br /&gt;
| --output || slurm-%j.out || slurm-206.out&lt;br /&gt;
|-&lt;br /&gt;
| --error || slurm-%j.out || slurm-206.out&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;tt&amp;gt;%j&amp;lt;/tt&amp;gt; above indicates that it should be replaced with the job id.&lt;br /&gt;
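These directives behave like ordinary shell redirection; a small sketch of the same separation (the file names here are arbitrary):&lt;br /&gt;

```bash
#!/bin/bash
# Send STDOUT and STDERR to separate files, just as
# '--output=job.out --error=job.err' would for a Slurm job
{ echo "normal output"; echo "an error message" >&2; } > job.out 2> job.err

cat job.out   # normal output
cat job.err   # an error message
```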
&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
Slurm runs your job from the &amp;quot;current working directory&amp;quot; you used when submitting the job, which is what most programs expect. If you need the job to start somewhere else, the '&amp;lt;tt&amp;gt;--chdir&amp;lt;/tt&amp;gt;' directive sets a different working directory.&lt;br /&gt;
=== Running in a specific class of machine ===&lt;br /&gt;
If you want to run on a specific class of machines, e.g., the Dwarves, you can add the flag &amp;quot;--constraint=dwarves&amp;quot; to select any of those machines.&lt;br /&gt;
&lt;br /&gt;
=== Processor Constraints ===&lt;br /&gt;
Because Beocat is a heterogenous cluster (we have machines from many years in the cluster), not all of our processors support every new and fancy feature. You might have some applications that require some newer processor features, so we provide a mechanism to request those.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--constraint&amp;lt;/tt&amp;gt; tells the cluster to apply constraints to the types of nodes that the job can run on. For instance, we know of several applications that must be run on chips that have &amp;quot;AVX&amp;quot; processor extensions. To do that, you would specify &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; on your ''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt;'' '''or''' ''&amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;'' command lines.&lt;br /&gt;
Using &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; will prohibit your job from running on the Mages, while &amp;lt;tt&amp;gt;--constraint=avx2&amp;lt;/tt&amp;gt; will eliminate the Elves as well as the Mages.&lt;br /&gt;
&lt;br /&gt;
=== Slurm Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to setup your scripts correctly. Here is a listing of environment variables that Slurm makes available to you. Of course the value of these variables will be different based on many different factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
CUDA_VISIBLE_DEVICES=NoDevFiles&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
GPU_DEVICE_ORDINAL=NoDevFiles&lt;br /&gt;
HOSTNAME=dwarf37&lt;br /&gt;
SLURM_CHECKPOINT_IMAGE_DIR=/var/slurm/checkpoint&lt;br /&gt;
SLURM_CLUSTER_NAME=beocat&lt;br /&gt;
SLURM_CPUS_ON_NODE=1&lt;br /&gt;
SLURM_DISTRIBUTION=cyclic&lt;br /&gt;
SLURMD_NODENAME=dwarf37&lt;br /&gt;
SLURM_GTIDS=0&lt;br /&gt;
SLURM_JOB_CPUS_PER_NODE=1&lt;br /&gt;
SLURM_JOB_GID=163587&lt;br /&gt;
SLURM_JOB_ID=202&lt;br /&gt;
SLURM_JOBID=202&lt;br /&gt;
SLURM_JOB_NAME=slurm_simple.sh&lt;br /&gt;
SLURM_JOB_NODELIST=dwarf37&lt;br /&gt;
SLURM_JOB_NUM_NODES=1&lt;br /&gt;
SLURM_JOB_PARTITION=batch.q,killable.q&lt;br /&gt;
SLURM_JOB_QOS=normal&lt;br /&gt;
SLURM_JOB_UID=163587&lt;br /&gt;
SLURM_JOB_USER=mozes&lt;br /&gt;
SLURM_LAUNCH_NODE_IPADDR=10.5.16.37&lt;br /&gt;
SLURM_LOCALID=0&lt;br /&gt;
SLURM_MEM_PER_NODE=1024&lt;br /&gt;
SLURM_NNODES=1&lt;br /&gt;
SLURM_NODEID=0&lt;br /&gt;
SLURM_NODELIST=dwarf37&lt;br /&gt;
SLURM_NPROCS=1&lt;br /&gt;
SLURM_NTASKS=1&lt;br /&gt;
SLURM_PRIO_PROCESS=0&lt;br /&gt;
SLURM_PROCID=0&lt;br /&gt;
SLURM_SRUN_COMM_HOST=10.5.16.37&lt;br /&gt;
SLURM_SRUN_COMM_PORT=37975&lt;br /&gt;
SLURM_STEP_ID=0&lt;br /&gt;
SLURM_STEPID=0&lt;br /&gt;
SLURM_STEP_LAUNCHER_PORT=37975&lt;br /&gt;
SLURM_STEP_NODELIST=dwarf37&lt;br /&gt;
SLURM_STEP_NUM_NODES=1&lt;br /&gt;
SLURM_STEP_NUM_TASKS=1&lt;br /&gt;
SLURM_STEP_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_SUBMIT_DIR=/homes/mozes&lt;br /&gt;
SLURM_SUBMIT_HOST=dwarf37&lt;br /&gt;
SLURM_TASK_PID=23408&lt;br /&gt;
SLURM_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_TOPOLOGY_ADDR=due1121-prod-core-40g-a1,due1121-prod-core-40g-c1.due1121-prod-sw-100g-a9.dwarf37&lt;br /&gt;
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node&lt;br /&gt;
SLURM_UMASK=0022&lt;br /&gt;
SRUN_DEBUG=3&lt;br /&gt;
TERM=screen-256color&lt;br /&gt;
TMPDIR=/tmp&lt;br /&gt;
USER=mozes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is nice to know which hosts you have access to during a job; check SLURM_JOB_NODELIST for that. There are lots of useful environment variables here, and I will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly-used variables we see used are $SLURM_CPUS_ON_NODE, $HOSTNAME, and $SLURM_JOB_ID.&lt;br /&gt;
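As a sketch, a script can combine these variables to build per-job names, for instance a log file tagged with the job ID (with a fallback for running outside a job):&lt;br /&gt;

```bash
#!/bin/bash
# Tag output with the Slurm job ID; SLURM_JOB_ID is unset outside a job,
# so fall back to a fixed tag for testing
jobid=${SLURM_JOB_ID:-test}
logfile="run-${jobid}.log"
echo "running on ${HOSTNAME} as job ${jobid}" > "$logfile"
```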
&lt;br /&gt;
== Running from a sbatch Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'sbatch --mem-per-cpu=2G --time=10:00 --cpus-per-task=8 -J MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which records all of these options for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample sbatch script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of a line means it is ignored. However, in&lt;br /&gt;
## the case of sbatch, lines beginning with #SBATCH are commands for sbatch&lt;br /&gt;
## itself, so I have taken the convention here of starting *every* line with a&lt;br /&gt;
## '#'. Just delete the first one if you want to use that line, and then modify&lt;br /&gt;
## it to your own purposes. The only exception here is the first line, which&lt;br /&gt;
## *must* be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## There is one strict rule for guaranteeing Slurm reads all of your options:&lt;br /&gt;
## Do not put *any* lines above your resource requests that aren't either:&lt;br /&gt;
##    1) blank. (no other characters)&lt;br /&gt;
##    2) comments (lines must begin with '#')&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##SBATCH --mem-per-cpu=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime in DD-HH:MM:SS form. Default is 1 hour (1:00:00)&lt;br /&gt;
##SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it.&lt;br /&gt;
##SBATCH --gres=fabric:ib:1&lt;br /&gt;
&lt;br /&gt;
## GPU directive. If You don't know what this is, you probably don't need it&lt;br /&gt;
##SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
## number of cores/nodes:&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from emailing us at beocat@cs.ksu.edu to see how we can assist in&lt;br /&gt;
## getting your job scheduled in a reasonable amount of time. The default is:&lt;br /&gt;
##SBATCH --cpus-per-task=1&lt;br /&gt;
##SBATCH --cpus-per-task=12&lt;br /&gt;
##SBATCH --nodes=2 --ntasks-per-node=1&lt;br /&gt;
##SBATCH --ntasks=20&lt;br /&gt;
&lt;br /&gt;
## Constraints for this job. Maybe you need to run on the elves&lt;br /&gt;
##SBATCH --constraint=elves&lt;br /&gt;
## or perhaps you just need avx processor extensions&lt;br /&gt;
##SBATCH --constraint=avx&lt;br /&gt;
&lt;br /&gt;
## Output file name. Default is slurm-%j.out where %j is the job id.&lt;br /&gt;
##SBATCH --output=MyJobTitle.o%j&lt;br /&gt;
&lt;br /&gt;
## Split the errors into a separate file. Default is the same as output&lt;br /&gt;
##SBATCH --error=MyJobTitle.e%j&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##SBATCH -J MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## Send email when certain criteria are met.&lt;br /&gt;
## Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to&lt;br /&gt;
## BEGIN, END, FAIL, REQUEUE,  and  STAGE_OUT),  STAGE_OUT  (burst buffer stage&lt;br /&gt;
## out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent&lt;br /&gt;
## of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),&lt;br /&gt;
## TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send&lt;br /&gt;
## emails for each array task). Multiple type values may be specified in a&lt;br /&gt;
## comma separated list. Unless the  ARRAY_TASKS  option  is specified, mail&lt;br /&gt;
## notifications on job BEGIN, END and FAIL apply to a job array as a whole&lt;br /&gt;
## rather than generating individual email messages for each task in the job&lt;br /&gt;
## array.&lt;br /&gt;
##SBATCH --mail-type=ALL&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
## Default is to send the mail to the e-mail address entered on the account&lt;br /&gt;
## request form.&lt;br /&gt;
##SBATCH --mail-user=myemail@ksu.edu&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== File Access ==&lt;br /&gt;
Beocat has a variety of options for storing and accessing your files.  &lt;br /&gt;
Every user has a home directory for general use, which is limited in size but has decent file-access performance.  Those needing more storage may purchase /bulk subdirectories, which have the same decent performance&lt;br /&gt;
but are not backed up.  The /scratch filesystem provides a temporary space to store intermediary files that are needed for multiple jobs, or for files that are larger than your home directory. The /fastscratch file system is a ZFS host with many NVMe drives that provides much faster&lt;br /&gt;
temporary file access.  When fast IO is critical to application performance, /fastscratch, the local disk on each node, or a&lt;br /&gt;
RAM disk are the best options.&lt;br /&gt;
&lt;br /&gt;
===Home directory===&lt;br /&gt;
&lt;br /&gt;
Every user has a &amp;lt;tt&amp;gt;/homes/''username''&amp;lt;/tt&amp;gt; directory that they drop into when they log into Beocat.  &lt;br /&gt;
The home directory is for general use and provides decent performance for most file IO.  &lt;br /&gt;
Disk space in each home directory is limited to 1 TB, so larger files should be kept in a purchased /bulk&lt;br /&gt;
directory, and there is a limit of 100,000 files in each subdirectory in your account.&lt;br /&gt;
This file system is fully redundant, so 3 specific hard disks would need to fail before any data was lost.&lt;br /&gt;
All files will soon be backed up nightly to a separate file server in Nichols Hall, so if you do accidentally &lt;br /&gt;
delete something it can be recovered.&lt;br /&gt;
&lt;br /&gt;
===Bulk directory===&lt;br /&gt;
&lt;br /&gt;
Bulk data storage may be provided at a cost of $45/TB/year billed monthly. Due to the cost, directories will be provided when we are contacted and provided with payment information.&lt;br /&gt;
&lt;br /&gt;
===Fast Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /fastscratch file system is faster than /bulk or /homes.&lt;br /&gt;
In order to use fastscratch, you first need to make a directory for yourself.  &lt;br /&gt;
Fast Scratch is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the fastscratch file system may get purged after 30 days.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /fastscratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Local disk===&lt;br /&gt;
&lt;br /&gt;
If you are running on a single node, it may also be faster to access your files from the local disk&lt;br /&gt;
on that node.  Each job creates a subdirectory /tmp/job# (where '#' is the job ID number) on the&lt;br /&gt;
local disk of each node the job uses.  Your job sees this directory as /tmp, so you can simply read and write /tmp rather than&lt;br /&gt;
needing to use the /tmp/job# path.  &lt;br /&gt;
&lt;br /&gt;
You may need to copy files to the&lt;br /&gt;
local disk at the start of your script, or set the output directory for your application to point&lt;br /&gt;
to the local disk.  Either way, copy any files you want to keep off the local disk before&lt;br /&gt;
the job finishes, since Slurm will remove all files in your job's directory on /tmp when the job completes&lt;br /&gt;
or aborts.  Use 'kstat -l -h' to see how much /tmp space is available on each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files to the tmp directory if needed&lt;br /&gt;
cp $input_files /tmp&lt;br /&gt;
&lt;br /&gt;
# Make an 'out' directory to pass to the app if needed&lt;br /&gt;
mkdir /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Example of running an app and passing the tmp directory in/out&lt;br /&gt;
app -input_directory /tmp -output_directory /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Copy the 'out' directory back to the current working directory after the run&lt;br /&gt;
cp -rp /tmp/out .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAM disk===&lt;br /&gt;
&lt;br /&gt;
If you need ultrafast access to files, you can use a RAM disk which is a file system set up in the &lt;br /&gt;
memory of the compute node you are running on.  The RAM disk is limited to the requested memory on that node, so you should account for this usage when you request &lt;br /&gt;
memory for your job. Below is an example of how to use the RAM disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files over if necessary&lt;br /&gt;
cp $any_input_files /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Run the application, possibly giving it the path to the RAM disk to use for output files&lt;br /&gt;
app -output_directory /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Copy files from the RAM disk to the current working directory and clean it up&lt;br /&gt;
cp /dev/shm/* .&lt;br /&gt;
rm -rf /dev/shm/*&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When you leave KSU===&lt;br /&gt;
&lt;br /&gt;
If you are done with your account and leaving KSU, please clean up your directory, move any files&lt;br /&gt;
to your supervisor's account that need to be kept after you leave, and notify us so that we can disable your&lt;br /&gt;
account.  The easiest way to move your files to your supervisor's account is for them to set up&lt;br /&gt;
a subdirectory for you with the appropriate write permissions.  The example below shows moving &lt;br /&gt;
just a user's 'data' subdirectory to their supervisor.  The 'nohup' command is used so that the move will &lt;br /&gt;
continue even if the window you are doing the move from gets disconnected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Supervisor:&lt;br /&gt;
mkdir /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&lt;br /&gt;
# Student:&lt;br /&gt;
nohup mv /homes/$USER/data /bulk/$SUPERVISOR_USERNAME/$USER &amp;amp;&lt;br /&gt;
&lt;br /&gt;
# Once the move is complete, the Supervisor should limit the permissions for the directory again by removing the student's access:&lt;br /&gt;
chown $USER: -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==File Sharing==&lt;br /&gt;
&lt;br /&gt;
This section will cover methods of sharing files with other users within Beocat and on remote systems.&lt;br /&gt;
In the past, Beocat users were allowed to keep their&lt;br /&gt;
/homes and /bulk directories open so that any other user could&lt;br /&gt;
access their files.  In order to bring Beocat into alignment with&lt;br /&gt;
State of Kansas regulations and industry norms, all users must now have their /homes, /bulk, /scratch, and /fastscratch directories&lt;br /&gt;
locked down from other users.  You can still share files and directories within your group or with individual users&lt;br /&gt;
using group and individual ACLs (Access Control Lists), which are explained below.&lt;br /&gt;
Beocat staff are exempted from this&lt;br /&gt;
policy, as we need to work freely with all users, and we manage our&lt;br /&gt;
subdirectories to minimize access.&lt;br /&gt;
&lt;br /&gt;
===Securing your home directory with the setacls script===&lt;br /&gt;
&lt;br /&gt;
If you do not wish to share files or directories with other users, you do not need to do anything,&lt;br /&gt;
as rwx access for others has already been removed.&lt;br /&gt;
If you want to share files or directories, you can either use the '''setacls''' script or configure&lt;br /&gt;
the ACLs (Access Control Lists) manually.&lt;br /&gt;
&lt;br /&gt;
Running '''setacls -h''' will show how to use the script.&lt;br /&gt;
  &lt;br /&gt;
  Eos: setacls -h&lt;br /&gt;
  setacls [-r] [-w] [-g group] [-u user] -d /full/path/to/directory&lt;br /&gt;
  Execute pemission will always be applied, you may also choose r or w&lt;br /&gt;
  Must specify at least one group or user&lt;br /&gt;
  Must specify at least one directory, and it must be the full path&lt;br /&gt;
  Example: setacls -r -g ksu-cis-hpc -u mozes -d /homes/daveturner/shared_dir&lt;br /&gt;
&lt;br /&gt;
You can specify the permissions as -r for read, -w for write, or both.&lt;br /&gt;
You can provide a priority group to share with, which is the same as the group used in a --partition=&lt;br /&gt;
statement in a job submission script.  You can also specify users.&lt;br /&gt;
You can specify a file or a directory to share.  If a directory is specified, all files in that&lt;br /&gt;
directory will also be shared, and all files created in the directory later will also be shared.&lt;br /&gt;
&lt;br /&gt;
The script will set everything up for you, telling you the commands it is executing along the way,&lt;br /&gt;
then show the resulting ACLs at the end with the '''getfacl''' command.&lt;br /&gt;
&lt;br /&gt;
====Manually configuring your ACLs====&lt;br /&gt;
&lt;br /&gt;
If you want to manually configure the ACLs, you can use the directions below to do what the '''setacls''' &lt;br /&gt;
script would do for you.&lt;br /&gt;
You first need to provide the minimum execute access to your /homes&lt;br /&gt;
or /bulk directory before sharing individual subdirectories.  Setting the ACL to execute-only will allow those &lt;br /&gt;
in your group to reach the subdirectories you share, while withholding read access means they cannot list your other&lt;br /&gt;
files and subdirectories.  Keep in mind that they can still access those if they know the names,&lt;br /&gt;
so you may want to lock them down individually.  Below is an example of how I would change my&lt;br /&gt;
/homes/daveturner directory to allow the ksu-cis-hpc group execute access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:X /homes/daveturner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your research group owns any nodes on Beocat, then you have a group name that can be used to securely share&lt;br /&gt;
files with others within your group.  Below is an example of creating a directory called 'share_hpc', &lt;br /&gt;
then providing access to my ksu-cis-hpc group&lt;br /&gt;
(my group is ksu-cis-hpc, so I submit jobs to --partition=ksu-cis-hpc.q).&lt;br /&gt;
Using -R will apply these changes recursively to all files and directories in that subdirectory, while changing the defaults with the setfacl -d command will ensure that files and directories created&lt;br /&gt;
later will get these same ACLs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the 'share_hpc' directory.  If you also want&lt;br /&gt;
them to be able to write or modify files in that directory then change the ':rX' to ':rwX' instead. e.g. 'setfacl -d -m g:ksu-cis-hpc:rwX -R share_hpc'&lt;br /&gt;
&lt;br /&gt;
If you want to know what groups you belong to use the line below.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
groups&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If your group does not own any nodes, you can still request a group name and manage the participants yourself&lt;br /&gt;
by emailing us at beocat@cs.ksu.edu.&lt;br /&gt;
If you want to share a directory with only a few people, you can manage your ACLs using individual usernames&lt;br /&gt;
instead of a group.&lt;br /&gt;
&lt;br /&gt;
You can use the '''getfacl''' command to see which groups and users have access to a given directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
getfacl share_hpc&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ACLs give you great flexibility in controlling file access at the&lt;br /&gt;
group level.  Below is a more advanced example where I set up a directory to be shared with&lt;br /&gt;
my ksu-cis-hpc group, Dan's ksu-cis-dan group, and an individual user 'mozes' who I also want&lt;br /&gt;
to have write access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc_dan_mozes&lt;br /&gt;
# acls are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
getfacl share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc_dan_mozes&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  user:mozes:rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  group:ksu-cis-dan:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:user:mozes:rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:group:ksu-cis-dan:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::--x&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Openly sharing files on the web===&lt;br /&gt;
&lt;br /&gt;
If you create a 'public_html' directory in your home directory, then any files put there will be shared &lt;br /&gt;
openly on the web.  There is no way to restrict who has access to those files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
cd&lt;br /&gt;
mkdir public_html&lt;br /&gt;
# Opt-in to letting the webserver access your home directory:&lt;br /&gt;
setfacl -m g:public_html:x ~/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then access the data from a web browser using the URL:&lt;br /&gt;
&lt;br /&gt;
http://people.beocat.ksu.edu/~your_user_name&lt;br /&gt;
&lt;br /&gt;
This will show a list of the files you have in your public_html subdirectory.&lt;br /&gt;
&lt;br /&gt;
===Globus===&lt;br /&gt;
&lt;br /&gt;
We have a page here dedicated to [[Globus]]&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of Slurm's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to sbatch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  --array=n[-m[:s]]&lt;br /&gt;
     Submits a so-called Array Job, i.e. an array of identical tasks differentiated only by an index number and treated by Slurm&lt;br /&gt;
     almost like a series of jobs. The option argument to --array specifies the number of array job tasks and the index numbers which will be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SLURM_ARRAY_TASK_ID. The option&lt;br /&gt;
     arguments n and m will be available through the environment variables SLURM_ARRAY_TASK_MIN and SLURM_ARRAY_TASK_MAX.&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or a range with a step size.&lt;br /&gt;
     Hence, the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SLURM_ARRAY_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array jobs are commonly used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks follow a slightly different naming convention (which can be controlled in the same way as mentioned above).&lt;br /&gt;
 &lt;br /&gt;
     slurm-%A_%a.out&lt;br /&gt;
&lt;br /&gt;
     %A is the SLURM_ARRAY_JOB_ID, and %a is the SLURM_ARRAY_TASK_ID&lt;br /&gt;
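A minimal sketch of an array script using this naming convention (the index range and echo payload are illustrative):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/bash
#SBATCH --array=1-10
#SBATCH --output=slurm-%A_%a.out
# Each task sees its own index via this environment variable:
echo "This is array task ${SLURM_ARRAY_TASK_ID}"
```
&lt;br /&gt;
Each of the ten tasks writes to its own slurm-&amp;lt;array job id&amp;gt;_&amp;lt;task id&amp;gt;.out file.&lt;br /&gt;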
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses; one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1, that I need to run in exactly the same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to edit the RUNSIZE variable and submit the script again. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=50-200:50&lt;br /&gt;
RUNSIZE=$SLURM_ARRAY_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and Slurm understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally, I had to generate a new submit script for each line of my dataset and submit each job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     sbatch ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as sbatch has to verify each job as it is submitted. This can be done easily with array jobs, as long as you know the number of lines in the dataset. That number can be obtained with &amp;lt;tt&amp;gt;wc -l dataset.txt&amp;lt;/tt&amp;gt;; in this case, let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses command substitution via backticks, and has the sed command print only line number $SLURM_ARRAY_TASK_ID of the file dataset.txt.&lt;br /&gt;
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so sbatch doesn't have to verify as many.&lt;br /&gt;
&lt;br /&gt;
To give you an idea of the time saved: submitting one job takes 1-2 seconds. By extension, submitting 5000 jobs takes 5,000-10,000 seconds, or roughly 1.5-3 hours.&lt;br /&gt;
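The sed line-selection used above can be tried in isolation with a throwaway file (the file name and contents here are illustrative, not your real dataset.txt):&lt;br /&gt;
&lt;br /&gt;
```shell
# Build a tiny stand-in dataset
printf 'alpha\nbeta\ngamma\n' > /tmp/demo_dataset.txt
# Print only line 2, just as each array task selects its own line
sed -n "2p" /tmp/demo_dataset.txt   # prints: beta
```
&lt;br /&gt;
In the real script, the literal 2 is replaced by $SLURM_ARRAY_TASK_ID.&lt;br /&gt;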
&lt;br /&gt;
== Checkpoint/Restart using DMTCP ==&lt;br /&gt;
&lt;br /&gt;
DMTCP is Distributed Multi-Threaded CheckPoint software that will checkpoint your application without modification, and&lt;br /&gt;
can be set up to automatically restart your job from the last checkpoint if for example the node you are running on fails.  &lt;br /&gt;
This has been tested successfully&lt;br /&gt;
on Beocat for some scalar and OpenMP codes, but has failed on all MPI tests so far.  We would like to encourage users to&lt;br /&gt;
try DMTCP out if their non-MPI jobs run longer than 24 hours.  If you want to try this, please contact us first since we are still&lt;br /&gt;
experimenting with DMTCP.&lt;br /&gt;
&lt;br /&gt;
The sample job submission script below shows how dmtcp_launch is used to start the application, then dmtcp_restart is used to start from a checkpoint if the job has failed and been rescheduled.&lt;br /&gt;
If you are putting this in an array script, then add the Slurm array task ID to the end of the checkpoint directory name&lt;br /&gt;
like &amp;lt;B&amp;gt;ckptdir=ckpt-$SLURM_ARRAY_TASK_ID&amp;lt;/B&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --mem=50G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS/2016.4-foss-2017beocatb-hybrid&lt;br /&gt;
  module load DMTCP&lt;br /&gt;
  module list&lt;br /&gt;
  &lt;br /&gt;
  ckptdir=ckpt&lt;br /&gt;
  mkdir -p $ckptdir&lt;br /&gt;
  export DMTCP_CHECKPOINT_DIR=$ckptdir&lt;br /&gt;
  &lt;br /&gt;
  if ! ls -1 $ckptdir | grep -c dmtcp_restart_script &amp;gt; /dev/null&lt;br /&gt;
  then&lt;br /&gt;
     echo &amp;quot;Using dmtcp_launch to start the app the first time&amp;quot;&lt;br /&gt;
     dmtcp_launch --no-coordinator mpirun -np 1 -x OMP_NUM_THREADS=4 gmx_mpi mdrun -nsteps 50000 -ntomp 4 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  else&lt;br /&gt;
     echo &amp;quot;Using dmtcp_restart from $ckptdir to continue from a checkpoint&amp;quot;&lt;br /&gt;
     dmtcp_restart $ckptdir/*.dmtcp&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
You will need to run several tests to verify that DMTCP is working properly with your application.&lt;br /&gt;
First, run a short test without DMTCP and another with DMTCP with the checkpoint interval set to 5 minutes&lt;br /&gt;
by adding the line &amp;lt;B&amp;gt;export DMTCP_CHECKPOINT_INTERVAL=300&amp;lt;/B&amp;gt; to your script.  Then use &amp;lt;B&amp;gt;kstat -d 1&amp;lt;/B&amp;gt; to&lt;br /&gt;
check that the memory in both runs is close to the same.  Also use this information to calculate the time &lt;br /&gt;
that each checkpoint takes.  In most cases I've seen times less than a minute for checkpointing that will normally&lt;br /&gt;
be done once each hour.  If your application is taking more time, let us know.  Sometimes this can be sped up&lt;br /&gt;
by simply turning off compression by adding the line &amp;lt;B&amp;gt;export DMTCP_GZIP=0&amp;lt;/B&amp;gt;.  Make sure to remove the&lt;br /&gt;
line where you set the checkpoint interval to 300 seconds so that the default time of once per hour will be used.&lt;br /&gt;
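For reference, the two settings described above can be added to the submit script like so (remove the interval line again after testing so the hourly default applies):&lt;br /&gt;
&lt;br /&gt;
```shell
# 5-minute checkpoint interval, for short verification runs only
export DMTCP_CHECKPOINT_INTERVAL=300
# Optionally disable checkpoint compression if checkpoints are slow
export DMTCP_GZIP=0
```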
&lt;br /&gt;
After verifying that your code completes using DMTCP and does not take significantly more time or memory, you&lt;br /&gt;
will need to start a run then &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; it after the first checkpoint, then resubmit the same script to make &lt;br /&gt;
sure that it restarts and runs to completion.  If you are working with an array job script, the last step is to try a few&lt;br /&gt;
array tasks at once to make sure there is no conflict between the jobs.&lt;br /&gt;
&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'srun'. srun uses the exact same command-line arguments as sbatch, but you need to add the following arguments at the end: &amp;lt;tt&amp;gt;--pty bash&amp;lt;/tt&amp;gt;. If no node is available with your resource requirements, srun will tell you something like the following:&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
 srun: Force Terminated job 217&lt;br /&gt;
 srun: error: CPU count per node can not be satisfied&lt;br /&gt;
 srun: error: Unable to allocate resources: Requested node configuration is not available&lt;br /&gt;
Note that, like sbatch, your interactive job will time out after your allotted time has passed.&lt;br /&gt;
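For example, an interactive request with explicit resources (the values shown are illustrative) looks like:&lt;br /&gt;
 srun --nodes=1 --cpus-per-task=4 --mem=4G --time=1:00:00 --pty bash&lt;br /&gt;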
&lt;br /&gt;
== Connecting to an existing job ==&lt;br /&gt;
You can connect to an existing job using &amp;lt;B&amp;gt;srun&amp;lt;/B&amp;gt; in the same way that the &amp;lt;B&amp;gt;MonitorNode&amp;lt;/B&amp;gt; command&lt;br /&gt;
allowed us to in the old cluster.  This is essentially like using ssh to get into the node where your job is running, which&lt;br /&gt;
can be very useful for looking at files in /tmp/job# or for running &amp;lt;B&amp;gt;htop&amp;lt;/B&amp;gt; to view the &lt;br /&gt;
activity level of your job.&lt;br /&gt;
&lt;br /&gt;
 srun --jobid=# --pty bash                        where '#' is the job ID number&lt;br /&gt;
&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not support modifying job parameters once the job has been submitted. It can be done, but there are numerous catches, and the variations can be problematic; it is normally easier to simply delete the job (using '''scancel ''jobid''''') and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
&lt;br /&gt;
As it is unsupported, this is an exercise left to the reader. A starting point is &amp;lt;tt&amp;gt;man scontrol&amp;lt;/tt&amp;gt;&lt;br /&gt;
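As an unsupported illustration only, a job's time limit could in principle be changed with something like (the job ID and new limit are hypothetical):&lt;br /&gt;
 scontrol update JobId=1234567 TimeLimit=12:00:00&lt;br /&gt;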
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;--gres=killable:1&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
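For example, in a submit script (the rest of the script is unchanged):&lt;br /&gt;
 #SBATCH --gres=killable:1&lt;br /&gt;
or on the command line (the script name is illustrative):&lt;br /&gt;
 sbatch --gres=killable:1 myscript.sh&lt;br /&gt;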
&lt;br /&gt;
''Note: This is a submit-time only request, it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;scancel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
Some users are members of projects that have contributed to Beocat. When a group has contributed nodes, the group gets access to a &amp;quot;partition&amp;quot; giving its members priority on those nodes.&lt;br /&gt;
&lt;br /&gt;
In most situations, the scheduler will automatically add those priority partitions to the jobs as submitted. You should not need to include a partition list in your job submission.&lt;br /&gt;
&lt;br /&gt;
There are currently just a few exceptions that we will not automatically add:&lt;br /&gt;
* ksu-chem-mri.q&lt;br /&gt;
* ksu-gen-gpu.q&lt;br /&gt;
* ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
If you have access to any of the non-automatic partitions, and have need of the resources in that partition, you can alter your &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines to include your new partition:&lt;br /&gt;
 #SBATCH --partition=ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
Otherwise, you shouldn't modify the partition line at all unless you really know what you're doing.&lt;br /&gt;
&lt;br /&gt;
== Graphical Applications ==&lt;br /&gt;
Some applications are graphical and need to have some graphical input/output. We currently accomplish this with X11 forwarding or [[OpenOnDemand]]&lt;br /&gt;
=== OpenOnDemand ===&lt;br /&gt;
[[OpenOnDemand]] is likely the easier and more performant way to run a graphical application on the cluster.&lt;br /&gt;
# visit [https://ondemand.beocat.ksu.edu/ ondemand] and login with your cluster credentials.&lt;br /&gt;
# Check the &amp;quot;Interactive Apps&amp;quot; dropdown. We may have a workflow ready for you. If not, choose the desktop.&lt;br /&gt;
# Select the resources you need&lt;br /&gt;
# Select launch&lt;br /&gt;
# A job is now submitted to the cluster, and once the job has started you'll see a Connect button&lt;br /&gt;
# Use the app as needed. If using the desktop, start your graphical application.&lt;br /&gt;
&lt;br /&gt;
=== X11 Forwarding ===&lt;br /&gt;
==== Connecting with an X11 client ====&lt;br /&gt;
===== Windows =====&lt;br /&gt;
If you are running Windows, we recommend MobaXTerm as your file/SSH manager, because it is a single, relatively simple tool that does everything. MobaXTerm also automatically connects with X11 forwarding enabled.&lt;br /&gt;
===== Linux/OSX =====&lt;br /&gt;
Both Linux and OSX can connect in an X11 forwarding mode. Linux will already have all of the tools you need installed; OSX will need [https://www.xquartz.org/ XQuartz] installed.&lt;br /&gt;
&lt;br /&gt;
Then you will need to change your 'ssh' command slightly:&lt;br /&gt;
&lt;br /&gt;
 ssh -Y eid@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
The '''-Y''' argument tells ssh to set up X11 forwarding.&lt;br /&gt;
==== Starting a Graphical Job ====&lt;br /&gt;
All graphical jobs, by design, must be interactive, so we'll use the srun command. On a headnode, we run the following:&lt;br /&gt;
 # load an X11 enabled application&lt;br /&gt;
 module load Octave&lt;br /&gt;
 # start an X11 job, sbatch arguments are accepted for srun as well, 1 node, 1 hour, 1 gb of memory&lt;br /&gt;
 srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 octave --gui&lt;br /&gt;
&lt;br /&gt;
Because these jobs are interactive, they may not be able to run at all times, depending on how busy the scheduler is at any point in time. '''--pty --x11''' are required arguments setting up the job, and '''octave --gui''' is the command to run inside the job.&lt;br /&gt;
&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The sacct tool will read Slurm's accounting database and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== sacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
sacct -j 1122334455 -l&lt;br /&gt;
# if you don't know the job id, you can look at your jobs started since some day:&lt;br /&gt;
sacct -S 2017-01-01&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|218||218||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||12||00:00:00||FAILED||2:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=12,mem=1G,node=1||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.batch||218.batch||batch||||137940K||dwarf37||0||137940K||1576K||dwarf37||0||1576K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||1.36G||0||0||0||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.0||218.0||qqqqstat||||204212K||dwarf37||0||204212K||1420K||dwarf37||0||1420K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||196.52M||Unknown||Unknown||Unknown||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the columns showing Elapsed and State, you can see that they show 00:00:00 and FAILED respectively. This means that the job started and then promptly ended. This points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|220||220||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:01:27||TIMEOUT||0:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=1,mem=1G,node=1||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.batch||220.batch||batch||||370716K||dwarf37||0||370716K||7060K||dwarf37||0||7060K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:28||CANCELLED||0:15||1.23G||0||0||0||1Gn||0||0.16M||dwarf37||0||0.16M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.0||220.0||sleep||||204212K||dwarf37||0||107916K||1000K||dwarf37||0||620K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:27||CANCELLED||0:15||1.54G||Unknown||Unknown||Unknown||1Gn||0||0.05M||dwarf37||0||0.05M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we can see some pointers to the issue. The job ran out of time (TIMEOUT) and then was killed (CANCELLED).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|221||221||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:00:00||CANCELLED by 0||0:0||||Unknown||Unknown||Unknown||1Mn||||||||||||||||||||||||cpu=1,mem=1M,node=1||cpu=1,mem=1M,node=1&lt;br /&gt;
|-&lt;br /&gt;
|221.batch||221.batch||batch||||137940K||dwarf37||0||137940K||1144K||dwarf37||0||1144K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:00:01||CANCELLED||0:15||2.62G||0||0||0||1Mn||0||0||dwarf37||65534||0||0||dwarf37||65534||0||||||||cpu=1,mem=1M,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we see it was &amp;quot;CANCELLED by 0&amp;quot;, then we look at the AllocTRES column to see our allocated resources, and see that 1MB of memory was granted. Combine that with the column &amp;quot;MaxRSS&amp;quot; and we see that the memory granted was less than the memory we tried to use, thus the job was &amp;quot;CANCELLED&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Policy&amp;diff=944</id>
		<title>Policy</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Policy&amp;diff=944"/>
		<updated>2023-08-09T19:54:52Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Administrative access to home, bulk and scratch directories */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Account Eligibility ==&lt;br /&gt;
Beocat resources are available to faculty and staff at K-State, Kansas higher educational institutions (subject to the KanShare MOU), and to US academic researchers that engage in collaborative research and development activities with K-State researchers. Additional supporting documentation may be required prior to accessing Beocat's resources.&lt;br /&gt;
&lt;br /&gt;
Accounts are subject to annual renewal. Upon leaving K-State your account will be automatically de-activated, but you may apply for re-activation using the standard Beocat account request form.&lt;br /&gt;
&lt;br /&gt;
== K-State Information Technology Usage Policy ==&lt;br /&gt;
As Beocat is a K-State resource, the following usage policy also applies: http://www.k-state.edu/policies/ppm/3420.html&lt;br /&gt;
&lt;br /&gt;
Please pay close attention to section .030, which lists egregious violations.&lt;br /&gt;
&lt;br /&gt;
== Classified, PII/HIPAA, CUI, and/or export controlled data ==&lt;br /&gt;
Beocat is not equipped to store and/or compute on [[wikipedia:Classified_information|Classified]], &lt;br /&gt;
[[wikipedia:Controlled Unclassified Information|CUI]],&lt;br /&gt;
[[wikipedia:Personally_identifiable_information|PII]], or [[wikipedia:Health_Insurance_Portability_and_Accountability_Act|HIPAA]]-regulated data.&lt;br /&gt;
For this type of data we suggest the [https://www.k-state.edu/comply/cui/research-security.html K-State Research Information Security Enclave (RISE)].&lt;br /&gt;
[[wikipedia:Export_Administration_Regulations|Export controlled]] data and computation may occur on Beocat with the concurrence&lt;br /&gt;
of the appropriate KSU personnel, generally a collaboration between the researchers, Beocat staff, the Compliance Office and CISO&lt;br /&gt;
to ensure that appropriate safeguards are in place prior to the start of Beocat usage.&lt;br /&gt;
&lt;br /&gt;
== Access ==&lt;br /&gt;
Access to Beocat is currently prohibited from certain sensitive countries/regions, including countries designated as State Sponsors of Terrorism by the US government, and the People's Republic of China. &lt;br /&gt;
Exceptions may be possible with the agreement in writing of the appropriate K-State officials,&lt;br /&gt;
including but not limited to the Chief Compliance Officer and Chief Information Security Officer. &lt;br /&gt;
&lt;br /&gt;
== Maintenance ==&lt;br /&gt;
Beocat reserves the right to a 24-hour maintenance period every other week. However, this maintenance is not always necessary. Maintenance intentions and reservations will always be announced on the mailing list two weeks before an actual maintenance period takes effect.&lt;br /&gt;
&lt;br /&gt;
== Head node computational tasks ==&lt;br /&gt;
The head node serves as a shell server and development environment for Beocat users. We wish to keep this machine running responsively to make work easier. We do not have a problem with running simple post-processing work on the head node directly. But if your process is too computationally or memory intensive, it may have its priority severely reduced or be killed completely. If in doubt, ask.&lt;br /&gt;
&lt;br /&gt;
Due to abuses of the head node, there are now strict limits in place. If a process uses more than 4GB of RSS memory or 6GB of virtual memory, it will be killed automatically. RSS memory is limited to 12GB across all users. CPU usage is allocated with a fair-share algorithm; all users have equivalent access to CPU time.&lt;br /&gt;
== Backups ==&lt;br /&gt;
For those of you using our hosted virtual machines, no backups of said machines or data are made.&lt;br /&gt;
&lt;br /&gt;
At this point in time, due to the size of our main storage, we are unable to provide backups of any data.&lt;br /&gt;
&lt;br /&gt;
== Home Directory Quota ==&lt;br /&gt;
Each home directory has a quota of 1TB. If you use more than 1TB in your home directory, we will notify you and provide a window for resolving the issue. If no action is taken, we will move the data elsewhere.&lt;br /&gt;
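You can check your current usage yourself from a shell on Beocat, for example:&lt;br /&gt;
&lt;br /&gt;
```shell
# Summarize the total size of your home directory (may take a while on large trees)
du -sh "$HOME"
```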
&lt;br /&gt;
== Bulk Usage ==&lt;br /&gt;
We have no quota for usage within our /bulk filesystem, but data stored here must be paid for at a rate of $45/TB/year, billed monthly.&lt;br /&gt;
&lt;br /&gt;
== Account deactivation ==&lt;br /&gt;
If your account meets any of the following criteria:&lt;br /&gt;
* inactive for 1 year&lt;br /&gt;
* invalid e-mail address on file&lt;br /&gt;
* unsubscribed from our mailing list&lt;br /&gt;
* inactive eID&lt;br /&gt;
* lack of a K-State sponsor if you are not a K-State student or employee&lt;br /&gt;
** upon the transition from K-State student or employee to not being a K-State student or employee your account will be automatically deactivated. If you still have need of your account you will need to reapply.&lt;br /&gt;
** unfortunately, it would seem this gets triggered a week or two before the new semester if the student hasn't enrolled for the new semester yet.&lt;br /&gt;
* failed to follow up on the annual renewal process&lt;br /&gt;
&lt;br /&gt;
we will mark the account for archival, and remove your ability to log in. If you should need the account again, please fill out our [https://account.beocat.ksu.edu/user account request form.]&lt;br /&gt;
&lt;br /&gt;
== Administrative access to home, bulk and fastscratch directories ==&lt;br /&gt;
Beocat support staff occasionally need to access information contained within the directories of other users for support purposes. To that end, there is an access control list that gives them read access to user home, bulk, and fastscratch directories.&lt;br /&gt;
&lt;br /&gt;
== Directory Permissions ==&lt;br /&gt;
In order for Beocat to be in alignment with State of Kansas regulations and industry norms, all users must have their /homes, /bulk, /scratch, and /fastscratch directories locked down from other users, but can still share files and directories within their group or with individual users using group and individual [[AdvancedSlurm#File_Sharing|ACLs (Access Control Lists)]]. Beocat staff are exempt from this policy, as we need to work freely with all users, and will manage our subdirectories to minimize access.&lt;br /&gt;
&lt;br /&gt;
== Acknowledging Use of Beocat Resources and/or Personnel in Publications ==&lt;br /&gt;
Click [[PapersAndGrants|here]] for a list of publications that used Beocat resources and/or personnel.&lt;br /&gt;
&lt;br /&gt;
# A publication that is based in whole or in part on computations performed using Beocat systems, including but not limited to hardware, storage, networking and/or software, should incorporate the following text into the Acknowledgements section of the publication:&lt;br /&gt;
#* [Some of] The computing for this project was performed on the Beocat Research Cluster at Kansas State University, which is funded in part by NSF grants CNS-1006860, EPS-1006860, EPS-0919443, ACI-1440548, CHE-1726332, and NIH P20GM113109.&lt;br /&gt;
# If any Beocat staff member(s) assisted with the work in any way, then for each Beocat staff member that was involved in the work:&lt;br /&gt;
## If the publication includes a substantial amount of text about the work that the Beocat staff member contributed to, and if the Beocat staff member did a substantial amount of development or optimization of software, and/or they contributed significantly to the writing of the publication, then that staff member should be included as a co-author on that publication, with author order to be negotiated among the authors. &lt;br /&gt;
##; NOTE : This requirement can be waived for tenure track (but not yet tenured) faculty if the faculty member has a compelling tenure-related interest in, for example, producing single-author publications.&lt;br /&gt;
## If the conditions above don't apply, then the Beocat staff member should be acknowledged by name and job title in the Acknowledgements section of the paper. &lt;br /&gt;
##; For example : Beocat Director Daniel Andresen and Beocat Systems Administrator Adam Tygart provided valuable technical expertise. &lt;br /&gt;
# A citation for your publication should be added to our [[PapersAndGrants|papers and grants page]].&lt;br /&gt;
&lt;br /&gt;
== IRB Statement ==&lt;br /&gt;
=== INFORMATION ===&lt;br /&gt;
If you are a Beocat user, whenever you submit a job, delete a job, or otherwise interact with the&lt;br /&gt;
scheduler, automatic information about this is logged and will be used in this study. This will include&lt;br /&gt;
information about the job including requested resources (memory, processors, duration, modules, etc.).&lt;br /&gt;
We may send you a followup request for more information if, for example, you delete a job. Your&lt;br /&gt;
participation is optional.&lt;br /&gt;
=== RISKS ===&lt;br /&gt;
There are no anticipated risks with participation in this study other than the time responding to a&lt;br /&gt;
followup information request.&lt;br /&gt;
=== BENEFITS ===&lt;br /&gt;
Your participation in our studies will help us learn how to optimize the performance of Beocat and other&lt;br /&gt;
HPC resources, which will help our users and our overall science and education efforts.&lt;br /&gt;
=== CONFIDENTIALITY ===&lt;br /&gt;
All information gathered in this study will be kept confidential. Information about your jobs will not&lt;br /&gt;
be associated with real names or eIDs, and any publications will report only aggregated information.&lt;br /&gt;
=== CONTACT ===&lt;br /&gt;
If you have any questions at any time about the study or procedures, please contact Dr. Daniel Andresen&lt;br /&gt;
at Kansas State University, Department of Computer Science: Phone: (785) 532-7914 or Email:&lt;br /&gt;
dan@ksu.edu.&lt;br /&gt;
If you feel you have not been treated according to the description in this page, or your rights as a&lt;br /&gt;
participant in research have been violated during the course of this study, you may contact the office for&lt;br /&gt;
the Kansas State University Committee on Research Involving Human Subjects, 203 Fairchild Hall, Kansas&lt;br /&gt;
State University, Manhattan, KS 66506. (785) 532-3224.&lt;br /&gt;
=== PARTICIPATION ===&lt;br /&gt;
&lt;br /&gt;
Your participation in this study is strictly voluntary; you may refuse to participate in any followup&lt;br /&gt;
surveys or withdraw your information from the study without penalty. If you decide to participate, you&lt;br /&gt;
may withdraw from the study at any time without penalty. To remove your data from use in the study,&lt;br /&gt;
contact Dr. Daniel Andresen as described above.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Policy&amp;diff=943</id>
		<title>Policy</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Policy&amp;diff=943"/>
		<updated>2023-08-09T19:54:33Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Administrative access to home, bulk and scratch directories */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Account Eligibility ==&lt;br /&gt;
Beocat resources are available to faculty and staff at K-State, Kansas higher educational institutions (subject to the KanShare MOU), and to US academic researchers that engage in collaborative research and development activities with K-State researchers. Additional supporting documentation may be required prior to accessing Beocat's resources.&lt;br /&gt;
&lt;br /&gt;
Accounts are subject to annual renewal. Upon leaving K-State your account will be automatically de-activated, but you may apply for re-activation using the standard Beocat account request form.&lt;br /&gt;
&lt;br /&gt;
== K-State Information Technology Usage Policy ==&lt;br /&gt;
As Beocat is a K-State resource, the following usage policy also applies: http://www.k-state.edu/policies/ppm/3420.html&lt;br /&gt;
&lt;br /&gt;
Please pay close attention to section .030, as the items listed there are egregious violations.&lt;br /&gt;
&lt;br /&gt;
== Classified, PII/HIPAA, CUI, and/or export controlled data ==&lt;br /&gt;
Beocat is not equipped to store and/or compute on [[wikipedia:Classified_information|Classified]], &lt;br /&gt;
[[wikipedia:Controlled Unclassified Information|CUI]],&lt;br /&gt;
[[wikipedia:Personally_identifiable_information|PII]], or [[wikipedia:Health_Insurance_Portability_and_Accountability_Act|HIPAA]]-regulated data.&lt;br /&gt;
For this type of data we suggest the [https://www.k-state.edu/comply/cui/research-security.html K-State Research Information Security Enclave (RISE)].&lt;br /&gt;
[[wikipedia:Export_Administration_Regulations|Export controlled]] data and computation may occur on Beocat with the concurrence&lt;br /&gt;
of the appropriate KSU personnel; generally the researchers, Beocat staff, the Compliance Office, and the CISO collaborate&lt;br /&gt;
to ensure appropriate safeguards are in place prior to the start of Beocat usage.&lt;br /&gt;
&lt;br /&gt;
== Access ==&lt;br /&gt;
Access to Beocat is currently prohibited from certain sensitive countries/regions, including countries designated as State Sponsors of Terrorism by the US government, and the People's Republic of China. &lt;br /&gt;
Exceptions may be possible with the agreement in writing of the appropriate K-State officials,&lt;br /&gt;
including but not limited to the Chief Compliance Officer and Chief Information Security Officer. &lt;br /&gt;
&lt;br /&gt;
== Maintenance ==&lt;br /&gt;
Beocat reserves the right to a 24-hour maintenance period every other week. However, this maintenance is not always necessary. Maintenance intentions and reservations will always be announced on the mailing list 2 weeks before an actual maintenance period takes effect.&lt;br /&gt;
&lt;br /&gt;
== Head node computational tasks ==&lt;br /&gt;
The head node serves as a shell server and development environment for Beocat users. We wish to keep this machine running responsively to make work easier. We do not have a problem with simple post-processing work running on the head node directly. However, if your process is too computation- or memory-intensive, it may have its priority severely reduced or may be killed completely. If in doubt, ask.&lt;br /&gt;
&lt;br /&gt;
Due to abuses of the head node, there are now strict limits in place. If a process uses more than 4GB of RSS memory or 6GB of virtual memory, it will be killed automatically. RSS memory is limited to 12GB across all users, and CPU usage is allocated with a fair-share algorithm so that all users have equivalent access to CPU time.&lt;br /&gt;
== Backups ==&lt;br /&gt;
For those of you using our hosted virtual machines, no backups of said machines or data are made.&lt;br /&gt;
&lt;br /&gt;
At this point in time, due to the size of our main storage, we are unable to provide backups of any data.&lt;br /&gt;
&lt;br /&gt;
== Home Directory Quota ==&lt;br /&gt;
Each home directory has a quota of 1TB. If you use more than 1TB in your home directory, we will notify you and provide a window for resolving the issue. If no action is taken, we will move the data elsewhere.&lt;br /&gt;
&lt;br /&gt;
== Bulk Usage ==&lt;br /&gt;
We have no quota for usage within our /bulk filesystem, but data stored here must be paid for at a rate of $45/TB/year, billed monthly.&lt;br /&gt;
&lt;br /&gt;
== Account deactivation ==&lt;br /&gt;
If your account meets any of the following criteria:&lt;br /&gt;
* inactive for 1 year&lt;br /&gt;
* invalid e-mail address on file&lt;br /&gt;
* unsubscribed from our mailing list&lt;br /&gt;
* inactive eID&lt;br /&gt;
* lack of a K-State sponsor if you are not a K-State student or employee&lt;br /&gt;
** upon the transition from K-State student or employee to not being a K-State student or employee your account will be automatically deactivated. If you still have need of your account you will need to reapply.&lt;br /&gt;
** unfortunately, it would seem this gets triggered a week or two before the new semester if the student hasn't enrolled for the new semester yet.&lt;br /&gt;
* failed to follow up on the annual renewal process&lt;br /&gt;
&lt;br /&gt;
we will mark the account for archival and remove your ability to log in. If you should need the account again, please fill out our [https://account.beocat.ksu.edu/user account request form].&lt;br /&gt;
&lt;br /&gt;
== Administrative access to home, bulk and scratch directories ==&lt;br /&gt;
Beocat support staff occasionally need to access information contained within the directories of other users for support purposes. To that end, there is an access control list that gives them read access to user home, bulk, and fastscratch directories.&lt;br /&gt;
&lt;br /&gt;
== Directory Permissions ==&lt;br /&gt;
In order for Beocat to be in alignment with State of Kansas regulations and industry norms, all users must have their /homes, /bulk, /scratch, and /fastscratch directories locked down from other users, but they can still share files and directories within their group or with individual users using group and individual [[AdvancedSlurm#File_Sharing|ACLs (Access Control Lists)]]. Beocat staff will be exempted from this policy, as we need to work freely with all users, and will manage our subdirectories to minimize access.&lt;br /&gt;
&lt;br /&gt;
== Acknowledging Use of Beocat Resources and/or Personnel in Publications ==&lt;br /&gt;
Click [[PapersAndGrants|here]] for a list of publications that used Beocat resources and/or personnel.&lt;br /&gt;
&lt;br /&gt;
# A publication that is based in whole or in part on computations performed using Beocat systems, including but not limited to hardware, storage, networking and/or software, should incorporate the following text into the Acknowledgements section of the publication:&lt;br /&gt;
#* [Some of] The computing for this project was performed on the Beocat Research Cluster at Kansas State University, which is funded in part by NSF grants CNS-1006860, EPS-1006860, EPS-0919443, ACI-1440548, CHE-1726332, and NIH P20GM113109.&lt;br /&gt;
# If any Beocat staff member(s) assisted with the work in any way, then for each Beocat staff member that was involved in the work:&lt;br /&gt;
## If the publication includes a substantial amount of text about the work that the Beocat staff member contributed to, and if the Beocat staff member did a substantial amount of development or optimization of software, and/or they contributed significantly to the writing of the publication, then that staff member should be included as a co-author on that publication, with author order to be negotiated among the authors. &lt;br /&gt;
##; NOTE : This requirement can be waived for tenure track (but not yet tenured) faculty if the faculty member has a compelling tenure-related interest in, for example, producing single-author publications.&lt;br /&gt;
## If the conditions above don't apply, then the Beocat staff member should be acknowledged by name and job title in the Acknowledgements section of the paper. &lt;br /&gt;
##; For example : Beocat Director Daniel Andresen and Beocat Systems Administrator Adam Tygart provided valuable technical expertise. &lt;br /&gt;
# A citation for your publication should be added to our [[PapersAndGrants|papers and grants page]].&lt;br /&gt;
&lt;br /&gt;
== IRB Statement ==&lt;br /&gt;
=== INFORMATION ===&lt;br /&gt;
If you are a Beocat user, whenever you submit a job, delete a job, or otherwise interact with the&lt;br /&gt;
scheduler, automatic information about this is logged and will be used in this study. This will include&lt;br /&gt;
information about the job including requested resources (memory, processors, duration, modules, etc.).&lt;br /&gt;
We may send you a followup request for more information if, for example, you delete a job. Your&lt;br /&gt;
participation is optional.&lt;br /&gt;
=== RISKS ===&lt;br /&gt;
There are no anticipated risks with participation in this study other than the time responding to a&lt;br /&gt;
followup information request.&lt;br /&gt;
=== BENEFITS ===&lt;br /&gt;
Your participation in our studies will help us learn how to optimize the performance of Beocat and other&lt;br /&gt;
HPC resources, which will help our users and our overall science and education efforts.&lt;br /&gt;
=== CONFIDENTIALITY ===&lt;br /&gt;
All information gathered in this study will be kept confidential. Information about your jobs will not&lt;br /&gt;
be associated with real names or eIDs, and any publications will report only aggregated information.&lt;br /&gt;
=== CONTACT ===&lt;br /&gt;
If you have any questions at any time about the study or procedures, please contact Dr. Daniel Andresen&lt;br /&gt;
at Kansas State University, Department of Computer Science: Phone: (785) 532-7914 or Email:&lt;br /&gt;
dan@ksu.edu.&lt;br /&gt;
If you feel you have not been treated according to the description in this page, or your rights as a&lt;br /&gt;
participant in research have been violated during the course of this study, you may contact the office for&lt;br /&gt;
the Kansas State University Committee on Research Involving Human Subjects, 203 Fairchild Hall, Kansas&lt;br /&gt;
State University, Manhattan, KS 66506. (785) 532-3224.&lt;br /&gt;
=== PARTICIPATION ===&lt;br /&gt;
&lt;br /&gt;
Your participation in this study is strictly voluntary; you may refuse to participate in any followup&lt;br /&gt;
surveys or withdraw your information from the study without penalty. If you decide to participate, you&lt;br /&gt;
may withdraw from the study at any time without penalty. To remove your data from use in the study,&lt;br /&gt;
contact Dr. Daniel Andresen as described above.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=942</id>
		<title>AdvancedSlurm</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=942"/>
		<updated>2023-08-09T19:07:50Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Fast Scratch file system */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SlurmBasics]] page, we have a couple other requestable resources:&lt;br /&gt;
 Valid gres options are:&lt;br /&gt;
 gpu[[:type]:count]&lt;br /&gt;
 fabric[[:type]:count]&lt;br /&gt;
Generally, if you don't know whether you need a particular resource, you should use the default. The current list of options can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;srun --gres=help&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Fabric ===&lt;br /&gt;
We currently offer 3 &amp;quot;fabrics&amp;quot; as request-able resources in Slurm. The &amp;quot;count&amp;quot; specified is the line-rate (in Gigabits-per-second) of the connection on the node.&lt;br /&gt;
==== Infiniband ====&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. InfiniBand does absolutely no good for a job running on a single machine. InfiniBand is a high-speed host-to-host communication fabric. It is most often used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could run just fine, except that the submitter requested InfiniBand, and all the nodes with InfiniBand were currently busy. In fact, some of our fastest nodes do not have InfiniBand, so by requesting it when you don't need it, you are actually slowing down your job. To request InfiniBand, add &amp;lt;tt&amp;gt;--gres=fabric:ib:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
==== ROCE ====&lt;br /&gt;
ROCE, like InfiniBand, is a high-speed host-to-host communication layer. Again, it is used most often with MPI. Most of our nodes are ROCE enabled, but this option lets you guarantee that the nodes allocated to your job will be able to communicate with ROCE. To request ROCE, add &amp;lt;tt&amp;gt;--gres=fabric:roce:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
&lt;br /&gt;
==== Ethernet ====&lt;br /&gt;
Ethernet is another communication fabric. All of our nodes are connected by ethernet; this option simply allows you to specify the interconnect speed. Speeds are selected in units of Gbps, with all nodes supporting 1Gbps or above. The currently available speeds for ethernet are: &amp;lt;tt&amp;gt;1, 10, 40, and 100&amp;lt;/tt&amp;gt;. To select nodes with 40Gbps and above, you could specify &amp;lt;tt&amp;gt;--gres=fabric:eth:40&amp;lt;/tt&amp;gt; on your sbatch command-line.  Since ethernet is used to connect to the file server, this can be used to select nodes that have fast access for applications doing heavy IO.  The Dwarves and Heroes have 40 Gbps ethernet and we measure single-stream performance as high as 20 Gbps, but if your application&lt;br /&gt;
requires heavy IO then you'd want to avoid the Moles, which are connected to the file server with only 1 Gbps ethernet.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. 'kstat -g' will show you the GPU nodes and the jobs running on them.  To request a GPU node, add, for example, &amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt; to request 1 GPU for your job; if your job uses multiple nodes, the number of GPUs requested is per-node.  You can also request a given type of GPU (kstat -g -l to show types) by using &amp;lt;tt&amp;gt;--gres=gpu:geforce_gtx_1080_ti:1&amp;lt;/tt&amp;gt; for a 1080Ti GPU on the Wizards or Dwarves, or &amp;lt;tt&amp;gt;--gres=gpu:quadro_gp100:1&amp;lt;/tt&amp;gt; for the P100 GPUs on Wizard20-21, which are best for 64-bit codes like Vasp.  Most of these GPU nodes are owned by various groups.  If you want access to GPU nodes and your group does not own any, we can add you to the &amp;lt;tt&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/tt&amp;gt; group that has priority on Dwarf36-39.  For more information on compiling CUDA code click on this [[CUDA]] link.&lt;br /&gt;
&lt;br /&gt;
A listing of the current types of gpus can be gathered with this command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
scontrol show nodes | grep CfgTRES | tr ',' '\n' | awk -F '[:=]' '/gres\/gpu:/ { print $2 }' | sort -u&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
At the time of this writing, that command produces this list:&lt;br /&gt;
* geforce_gtx_1080_ti&lt;br /&gt;
* geforce_rtx_2080_ti&lt;br /&gt;
* geforce_rtx_3090&lt;br /&gt;
* quadro_gp100&lt;br /&gt;
* rtx_a4000&lt;br /&gt;
&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
''Intra''node jobs run on many cores in the same node. These jobs can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or any programming language that has the concept of ''threads''. Often, your program will need to know how many cores you want it to use, and many will use all available cores if not told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the sbatch directives '&amp;lt;tt&amp;gt;--nodes=1 --cpus-per-task=n&amp;lt;/tt&amp;gt;' or '&amp;lt;tt&amp;gt;--nodes=1 --ntasks-per-node=n&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $SLURM_CPUS_ON_NODE to tell how many cores you've been allocated.&lt;br /&gt;
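For instance, a minimal intranode submit script for a threaded program could pass the allocated core count through an environment variable (a sketch only; &amp;lt;tt&amp;gt;my_threaded_program&amp;lt;/tt&amp;gt; is a hypothetical program name):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1 --cpus-per-task=8&lt;br /&gt;
# Tell an OpenMP-style program how many cores it was allocated&lt;br /&gt;
export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE&lt;br /&gt;
./my_threaded_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;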
&lt;br /&gt;
=== Internode (MPI) jobs ===&lt;br /&gt;
''Inter''node jobs can utilize many cores on one or more nodes. Communicating between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI, but MPI also allows an application to run on multiple cores within a node. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;--cpus-per-task=''n''&amp;lt;/tt&amp;gt;', you would use '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks-per-node=''m''&amp;lt;/tt&amp;gt;' ''or'' '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks=''o''&amp;lt;/tt&amp;gt;' for your sbatch request, where ''n'' is the number of nodes you want, ''m'' is the number of cores per node you need, and ''o'' is the total number of cores you need.&lt;br /&gt;
&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=6 --ntasks-per-node=4&amp;lt;/tt&amp;gt; will give you 4 cores on each of 6 nodes for a total of 24 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=40&amp;lt;/tt&amp;gt; will give you 40 cores spread across any number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=10 --ntasks=100&amp;lt;/tt&amp;gt; will give you a total of 100 cores across 10 nodes.&lt;br /&gt;
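Putting one of these together, a minimal MPI submit script might look like the following (a sketch only; &amp;lt;tt&amp;gt;my_mpi_program&amp;lt;/tt&amp;gt; is a hypothetical program name):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=6 --ntasks-per-node=4&lt;br /&gt;
# mpirun launches one rank per requested task (24 total here)&lt;br /&gt;
mpirun ./my_mpi_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;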
&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
Memory requests are easiest when they are specified '''per core'''. For instance, if you specified the following: '&amp;lt;tt&amp;gt;--ntasks=20 --mem-per-cpu=20G&amp;lt;/tt&amp;gt;', your job would have access to 400GB of memory in total.&lt;br /&gt;
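As a smaller sketch, the arithmetic works out like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --ntasks=4&lt;br /&gt;
#SBATCH --mem-per-cpu=2G&lt;br /&gt;
# total memory available to the job: 4 cores x 2G per core = 8G&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;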
== Other Handy Slurm Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options not related to resource requests is to have Slurm email you when a job changes its status. This may require two directives to sbatch:  &amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt;.&lt;br /&gt;
==== --mail-type ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; is used to tell Slurm to notify you about certain conditions. Options are comma separated and include the following&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Option!!Explanation&lt;br /&gt;
|-&lt;br /&gt;
| NONE || This disables event-based mail&lt;br /&gt;
|-&lt;br /&gt;
| BEGIN || Sends a notification when the job begins&lt;br /&gt;
|-&lt;br /&gt;
| END || Sends a notification when the job ends&lt;br /&gt;
|-&lt;br /&gt;
| FAIL || Sends a notification when the job fails.&lt;br /&gt;
|-&lt;br /&gt;
| REQUEUE || Sends a notification if the job is put back into the queue from a running state&lt;br /&gt;
|-&lt;br /&gt;
| STAGE_OUT || Burst buffer stage out and teardown completed&lt;br /&gt;
|-&lt;br /&gt;
| ALL || Equivalent to BEGIN,END,FAIL,REQUEUE,STAGE_OUT&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT || Notifies if the job ran out of time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_90 || Notifies when the job has used 90% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_80 || Notifies when the job has used 80% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_50 || Notifies when the job has used 50% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| ARRAY_TASKS || Modifies the BEGIN, END, and FAIL options to apply to each array task (instead of notifying for the entire job)&lt;br /&gt;
|}&lt;br /&gt;
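For example, to be notified only when a job ends, fails, or has used 80% of its allocated time, you could combine options like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=END,FAIL,TIME_LIMIT_80&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;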
&lt;br /&gt;
==== --mail-user ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; is optional. It is only needed if you intend to send these job status updates to a different e-mail address than the one you provided on the [https://account.beocat.ksu.edu/user Account Request Page]. It is specified with the following argument to sbatch: &amp;lt;tt&amp;gt;--mail-user=someone@somecompany.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-J ''JobName''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
&lt;br /&gt;
=== Separating Output Streams ===&lt;br /&gt;
Normally, Slurm will create one output file, containing both STDERR and STDOUT. If you want both of these to be separated into two files, you can use the sbatch directives '&amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt;' and '&amp;lt;tt&amp;gt;--error&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! option !! default !! example&lt;br /&gt;
|-&lt;br /&gt;
| --output || slurm-%j.out || slurm-206.out&lt;br /&gt;
|-&lt;br /&gt;
| --error || slurm-%j.out || slurm-206.out&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;tt&amp;gt;%j&amp;lt;/tt&amp;gt; above indicates that it should be replaced with the job id.&lt;br /&gt;
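For example, the following directives (with a hypothetical job name) would write output and errors to separate, job-numbered files:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# For job id 206, these produce MyJob-206.out and MyJob-206.err&lt;br /&gt;
#SBATCH --output=MyJob-%j.out&lt;br /&gt;
#SBATCH --error=MyJob-%j.err&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;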
&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, Slurm runs your job from the &amp;quot;current working directory&amp;quot; you used when submitting the job. If your program needs to run from a different directory, you can use the '&amp;lt;tt&amp;gt;--chdir&amp;lt;/tt&amp;gt;' directive to change it.&lt;br /&gt;
=== Running in a specific class of machine ===&lt;br /&gt;
If you want to run on a specific class of machines, e.g., the Dwarves, you can add the flag &amp;quot;--constraint=dwarves&amp;quot; to select any of those machines.&lt;br /&gt;
&lt;br /&gt;
=== Processor Constraints ===&lt;br /&gt;
Because Beocat is a heterogeneous cluster (we have machines from many years in the cluster), not all of our processors support every new and fancy feature. You might have some applications that require newer processor features, so we provide a mechanism to request those.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--constraint&amp;lt;/tt&amp;gt; tells the cluster to apply constraints to the types of nodes that the job can run on. For instance, we know of several applications that must be run on chips that have &amp;quot;AVX&amp;quot; processor extensions. To do that, you would specify &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; on your ''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt;'' '''or''' ''&amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;'' command lines.&lt;br /&gt;
Using &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; will prohibit your job from running on the Mages, while &amp;lt;tt&amp;gt;--constraint=avx2&amp;lt;/tt&amp;gt; will eliminate the Elves as well as the Mages.&lt;br /&gt;
&lt;br /&gt;
=== Slurm Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to set up your scripts correctly. Here is a listing of environment variables that Slurm makes available to you. Of course, the values of these variables will differ based on many factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
CUDA_VISIBLE_DEVICES=NoDevFiles&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
GPU_DEVICE_ORDINAL=NoDevFiles&lt;br /&gt;
HOSTNAME=dwarf37&lt;br /&gt;
SLURM_CHECKPOINT_IMAGE_DIR=/var/slurm/checkpoint&lt;br /&gt;
SLURM_CLUSTER_NAME=beocat&lt;br /&gt;
SLURM_CPUS_ON_NODE=1&lt;br /&gt;
SLURM_DISTRIBUTION=cyclic&lt;br /&gt;
SLURMD_NODENAME=dwarf37&lt;br /&gt;
SLURM_GTIDS=0&lt;br /&gt;
SLURM_JOB_CPUS_PER_NODE=1&lt;br /&gt;
SLURM_JOB_GID=163587&lt;br /&gt;
SLURM_JOB_ID=202&lt;br /&gt;
SLURM_JOBID=202&lt;br /&gt;
SLURM_JOB_NAME=slurm_simple.sh&lt;br /&gt;
SLURM_JOB_NODELIST=dwarf37&lt;br /&gt;
SLURM_JOB_NUM_NODES=1&lt;br /&gt;
SLURM_JOB_PARTITION=batch.q,killable.q&lt;br /&gt;
SLURM_JOB_QOS=normal&lt;br /&gt;
SLURM_JOB_UID=163587&lt;br /&gt;
SLURM_JOB_USER=mozes&lt;br /&gt;
SLURM_LAUNCH_NODE_IPADDR=10.5.16.37&lt;br /&gt;
SLURM_LOCALID=0&lt;br /&gt;
SLURM_MEM_PER_NODE=1024&lt;br /&gt;
SLURM_NNODES=1&lt;br /&gt;
SLURM_NODEID=0&lt;br /&gt;
SLURM_NODELIST=dwarf37&lt;br /&gt;
SLURM_NPROCS=1&lt;br /&gt;
SLURM_NTASKS=1&lt;br /&gt;
SLURM_PRIO_PROCESS=0&lt;br /&gt;
SLURM_PROCID=0&lt;br /&gt;
SLURM_SRUN_COMM_HOST=10.5.16.37&lt;br /&gt;
SLURM_SRUN_COMM_PORT=37975&lt;br /&gt;
SLURM_STEP_ID=0&lt;br /&gt;
SLURM_STEPID=0&lt;br /&gt;
SLURM_STEP_LAUNCHER_PORT=37975&lt;br /&gt;
SLURM_STEP_NODELIST=dwarf37&lt;br /&gt;
SLURM_STEP_NUM_NODES=1&lt;br /&gt;
SLURM_STEP_NUM_TASKS=1&lt;br /&gt;
SLURM_STEP_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_SUBMIT_DIR=/homes/mozes&lt;br /&gt;
SLURM_SUBMIT_HOST=dwarf37&lt;br /&gt;
SLURM_TASK_PID=23408&lt;br /&gt;
SLURM_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_TOPOLOGY_ADDR=due1121-prod-core-40g-a1,due1121-prod-core-40g-c1.due1121-prod-sw-100g-a9.dwarf37&lt;br /&gt;
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node&lt;br /&gt;
SLURM_UMASK=0022&lt;br /&gt;
SRUN_DEBUG=3&lt;br /&gt;
TERM=screen-256color&lt;br /&gt;
TMPDIR=/tmp&lt;br /&gt;
USER=mozes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is nice to know which hosts you have access to during a job; check SLURM_JOB_NODELIST for that. There are lots of useful environment variables here; I will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly-used variables we see used are $SLURM_CPUS_ON_NODE, $HOSTNAME, and $SLURM_JOB_ID.&lt;br /&gt;
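As a small sketch, a job script can log where and how it is running using these variables:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
echo &amp;quot;Job $SLURM_JOB_ID allocated nodes: $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;Cores on this node: $SLURM_CPUS_ON_NODE (host: $HOSTNAME)&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;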
&lt;br /&gt;
== Running from a sbatch Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'sbatch --time=10:00 --mem-per-cpu=2G --cpus-per-task=8 -J MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which outlines all of these for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample sbatch script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of a line is ignored. However, in&lt;br /&gt;
## the case of sbatch, lines beginning with #SBATCH are commands for sbatch&lt;br /&gt;
## itself, so I have taken the convention here of starting *every* line with&lt;br /&gt;
## a '#'. Just delete the first '#' if you want to use that line, and then&lt;br /&gt;
## modify it to your own purposes. The only exception here is the first line,&lt;br /&gt;
## which *must* be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## There is one strict rule for guaranteeing Slurm reads all of your options:&lt;br /&gt;
## Do not put *any* lines above your resource requests that aren't either:&lt;br /&gt;
##    1) blank. (no other characters)&lt;br /&gt;
##    2) comments (lines must begin with '#')&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##SBATCH --mem-per-cpu=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime in DD-HH:MM:SS form. Default is 1 hour (1:00:00)&lt;br /&gt;
##SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it.&lt;br /&gt;
##SBATCH --gres=fabric:ib:1&lt;br /&gt;
&lt;br /&gt;
## GPU directive. If You don't know what this is, you probably don't need it&lt;br /&gt;
##SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
## number of cores/nodes:&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from emailing us at beocat@cs.ksu.edu to see how we can assist in&lt;br /&gt;
## getting your job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
##SBATCH --cpus-per-task=1&lt;br /&gt;
##SBATCH --cpus-per-task=12&lt;br /&gt;
##SBATCH --nodes=2 --ntasks-per-node=1&lt;br /&gt;
##SBATCH --ntasks=20&lt;br /&gt;
&lt;br /&gt;
## Constraints for this job. Maybe you need to run on the elves&lt;br /&gt;
##SBATCH --constraint=elves&lt;br /&gt;
## or perhaps you just need avx processor extensions&lt;br /&gt;
##SBATCH --constraint=avx&lt;br /&gt;
&lt;br /&gt;
## Output file name. Default is slurm-%j.out where %j is the job id.&lt;br /&gt;
##SBATCH --output=MyJobTitle.o%j&lt;br /&gt;
&lt;br /&gt;
## Split the errors into a separate file. Default is the same as output&lt;br /&gt;
##SBATCH --error=MyJobTitle.e%j&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##SBATCH -J MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## Send email when certain criteria are met.&lt;br /&gt;
## Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to&lt;br /&gt;
## BEGIN, END, FAIL, REQUEUE,  and  STAGE_OUT),  STAGE_OUT  (burst buffer stage&lt;br /&gt;
## out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent&lt;br /&gt;
## of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),&lt;br /&gt;
## TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send&lt;br /&gt;
## emails for each array task). Multiple type values may be specified in a&lt;br /&gt;
## comma separated list. Unless the  ARRAY_TASKS  option  is specified, mail&lt;br /&gt;
## notifications on job BEGIN, END and FAIL apply to a job array as a whole&lt;br /&gt;
## rather than generating individual email messages for each task in the job&lt;br /&gt;
## array.&lt;br /&gt;
##SBATCH --mail-type=ALL&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
## Default is to send the mail to the e-mail address entered on the account&lt;br /&gt;
## request form.&lt;br /&gt;
##SBATCH --mail-user=myemail@ksu.edu&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== File Access ==&lt;br /&gt;
Beocat has a variety of options for storing and accessing your files.  &lt;br /&gt;
Every user has a home directory for general use; it is limited in size but has decent file access performance.  Those needing more storage may purchase /bulk subdirectories, which offer the same decent performance&lt;br /&gt;
but are not backed up.  The /scratch filesystem provides temporary space for intermediary files that are needed by multiple jobs, or for files that are too large for your home directory. The /fastscratch file system is a ZFS host with many NVMe drives that provides much faster&lt;br /&gt;
temporary file access.  When fast IO is critical to application performance, /fastscratch, the local disk on each node, or a&lt;br /&gt;
RAM disk are the best options.&lt;br /&gt;
&lt;br /&gt;
===Home directory===&lt;br /&gt;
&lt;br /&gt;
Every user has a &amp;lt;tt&amp;gt;/homes/''username''&amp;lt;/tt&amp;gt; directory that they drop into when they log into Beocat.  &lt;br /&gt;
The home directory is for general use and provides decent performance for most file IO.  &lt;br /&gt;
Disk space in each home directory is limited to 1 TB, so larger files should be kept in a purchased /bulk&lt;br /&gt;
directory, and there is a limit of 100,000 files in each subdirectory in your account.&lt;br /&gt;
This file system is fully redundant, so 3 specific hard disks would need to fail before any data was lost.&lt;br /&gt;
All files will soon be backed up nightly to a separate file server in Nichols Hall, so if you do accidentally &lt;br /&gt;
delete something it can be recovered.&lt;br /&gt;
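&lt;br /&gt;
If you want to check how close you are to these limits, the standard Linux commands below give a rough estimate (a minimal sketch; the subdirectory name is only a placeholder).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Total size of your home directory, to compare against the 1 TB limit&lt;br /&gt;
du -sh /homes/$USER&lt;br /&gt;
&lt;br /&gt;
# Count the entries directly inside one subdirectory, to compare against&lt;br /&gt;
# the 100,000-file limit ('some_subdirectory' is a placeholder name)&lt;br /&gt;
find /homes/$USER/some_subdirectory -maxdepth 1 | wc -l&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;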
&lt;br /&gt;
===Bulk directory===&lt;br /&gt;
&lt;br /&gt;
Bulk data storage may be provided at a cost of $45/TB/year billed monthly. Due to the cost, directories will be provided when we are contacted and provided with payment information.&lt;br /&gt;
&lt;br /&gt;
===Fast Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /fastscratch file system is faster than /bulk or /homes.&lt;br /&gt;
In order to use fastscratch, you first need to make a directory for yourself.  &lt;br /&gt;
Fast Scratch is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the fastscratch file system may get purged after 30 days.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /fastscratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
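&lt;br /&gt;
A typical pattern is to stage input files into your fastscratch directory before a run, then move any results you want to keep back afterward.  The sketch below uses placeholder file and application names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Stage input files into fastscratch (file names are placeholders)&lt;br /&gt;
cp ~/input_data.txt /fastscratch/$USER/&lt;br /&gt;
&lt;br /&gt;
# Run the application against the staged copy&lt;br /&gt;
app -input_directory /fastscratch/$USER -output_directory /fastscratch/$USER&lt;br /&gt;
&lt;br /&gt;
# Move results home afterward, since fastscratch may be purged after 30 days&lt;br /&gt;
mv /fastscratch/$USER/results.txt ~/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;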
&lt;br /&gt;
===Local disk===&lt;br /&gt;
&lt;br /&gt;
If you are running on a single node, it may also be faster to access your files from the local disk&lt;br /&gt;
on that node.  Each job creates a subdirectory /tmp/job# where '#' is the job ID number on the&lt;br /&gt;
local disk of each node the job uses.  This can be accessed simply by writing to /tmp rather than&lt;br /&gt;
needing to use /tmp/job#.  &lt;br /&gt;
&lt;br /&gt;
You may need to copy files to the&lt;br /&gt;
local disk at the start of your script, or point your application's output directory&lt;br /&gt;
at the local disk. You will then need to copy any files you want to keep off the local disk before&lt;br /&gt;
the job finishes, since Slurm will remove all files in your job's directory on /tmp when the job&lt;br /&gt;
completes or aborts.  When we get the scratch file system working with Lustre, it may&lt;br /&gt;
end up being faster than accessing local disk, so we will post the access rates for each.  Use 'kstat -l -h'&lt;br /&gt;
to see how much /tmp space is available on each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files to the tmp directory if needed&lt;br /&gt;
cp $input_files /tmp&lt;br /&gt;
&lt;br /&gt;
# Make an 'out' directory to pass to the app if needed&lt;br /&gt;
mkdir /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Example of running an app and passing the tmp directory in/out&lt;br /&gt;
app -input_directory /tmp -output_directory /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Copy the 'out' directory back to the current working directory after the run&lt;br /&gt;
cp -rp /tmp/out .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAM disk===&lt;br /&gt;
&lt;br /&gt;
If you need ultrafast access to files, you can use a RAM disk which is a file system set up in the &lt;br /&gt;
memory of the compute node you are running on.  The RAM disk is limited to the requested memory on that node, so you should account for this usage when you request &lt;br /&gt;
memory for your job. Below is an example of how to use the RAM disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files over if necessary&lt;br /&gt;
cp $any_input_files /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Run the application, possibly giving it the path to the RAM disk to use for output files&lt;br /&gt;
app -output_directory /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Copy files from the RAM disk to the current working directory and clean it up&lt;br /&gt;
cp /dev/shm/* .&lt;br /&gt;
rm -rf /dev/shm/*&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When you leave KSU===&lt;br /&gt;
&lt;br /&gt;
If you are done with your account and leaving KSU, please clean up your directory, move any files&lt;br /&gt;
to your supervisor's account that need to be kept after you leave, and notify us so that we can disable your&lt;br /&gt;
account.  The easiest way to move your files to your supervisor's account is for them to set up&lt;br /&gt;
a subdirectory for you with the appropriate write permissions.  The example below shows moving &lt;br /&gt;
just a user's 'data' subdirectory to their supervisor.  The 'nohup' command is used so that the move will &lt;br /&gt;
continue even if the window you are doing the move from gets disconnected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Supervisor:&lt;br /&gt;
mkdir /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&lt;br /&gt;
# Student:&lt;br /&gt;
nohup mv /homes/$USER/data /bulk/$SUPERVISOR_USERNAME/$USER &amp;amp;&lt;br /&gt;
&lt;br /&gt;
# Once the move is complete, the Supervisor should limit the permissions for the directory again by removing the student's access:&lt;br /&gt;
chown $USER: -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==File Sharing==&lt;br /&gt;
&lt;br /&gt;
This section will cover methods of sharing files with other users within Beocat and on remote systems.&lt;br /&gt;
In the past, Beocat users have been allowed to keep their&lt;br /&gt;
/homes and /bulk directories open so that any other user could&lt;br /&gt;
access files.  In order to bring Beocat into alignment with&lt;br /&gt;
State of Kansas regulations and industry norms, all users must now have their /homes, /bulk, /scratch, and /fastscratch directories&lt;br /&gt;
locked down from other users. You can still share files and directories within your group or with individual users&lt;br /&gt;
using group and individual ACLs (Access Control Lists), as explained below.&lt;br /&gt;
Beocat staff will be exempted from this&lt;br /&gt;
policy as we need to work freely with all users and will manage our&lt;br /&gt;
subdirectories to minimize access.&lt;br /&gt;
&lt;br /&gt;
===Securing your home directory with the setacls script===&lt;br /&gt;
&lt;br /&gt;
If you do not wish to share files or directories with other users, you do not need to do anything&lt;br /&gt;
as rwx access to others has already been removed.&lt;br /&gt;
If you want to share files or directories, you can either use the '''setacls''' script or configure&lt;br /&gt;
the ACLs (Access Control Lists) manually.&lt;br /&gt;
&lt;br /&gt;
Running '''setacls -h''' will show how to use the script.&lt;br /&gt;
  &lt;br /&gt;
  Eos: setacls -h&lt;br /&gt;
  setacls [-r] [-w] [-g group] [-u user] -d /full/path/to/directory&lt;br /&gt;
  Execute permission will always be applied; you may also choose r or w&lt;br /&gt;
  Must specify at least one group or user&lt;br /&gt;
  Must specify at least one directory, and it must be the full path&lt;br /&gt;
  Example: setacls -r -g ksu-cis-hpc -u mozes -d /homes/daveturner/shared_dir&lt;br /&gt;
&lt;br /&gt;
You can specify the permissions to be either -r for read or -w for write or you can specify both.&lt;br /&gt;
You can provide a priority group to share with, which is the same as the group used in a --partition=&lt;br /&gt;
statement in a job submission script.  You can also specify users.&lt;br /&gt;
You can specify a file or a directory to share.  If a directory is specified, then all files in that&lt;br /&gt;
directory will also be shared, and all files created in the directory later will also be shared.&lt;br /&gt;
&lt;br /&gt;
The script will set everything up for you, telling you the commands it is executing along the way,&lt;br /&gt;
then show the resulting ACLs at the end with the '''getfacl''' command.&lt;br /&gt;
&lt;br /&gt;
====Manually configuring your ACLs====&lt;br /&gt;
&lt;br /&gt;
If you want to manually configure the ACLs, you can use the directions below to do what the '''setacls'''&lt;br /&gt;
script would do for you.&lt;br /&gt;
You first need to provide minimum execute access to your /homes&lt;br /&gt;
or /bulk directory before sharing individual subdirectories.  Setting the ACL to execute-only allows those&lt;br /&gt;
in your group to reach the subdirectories you share, while omitting read access means they cannot&lt;br /&gt;
list the other files and subdirectories in your main directory.  Keep in mind that they can still access&lt;br /&gt;
those other files if they know the names, so you may want to lock them down individually.  Below is an example of how I would change my&lt;br /&gt;
/homes/daveturner directory to allow the ksu-cis-hpc group execute access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:X /homes/daveturner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your research group owns any nodes on Beocat, then you have a group name that can be used to securely share&lt;br /&gt;
files with others within your group.  Below is an example of creating a directory called 'share_hpc',&lt;br /&gt;
then providing access to my ksu-cis-hpc group&lt;br /&gt;
(my group is ksu-cis-hpc so I submit jobs to --partition=ksu-cis-hpc.q).&lt;br /&gt;
Using -R applies these changes recursively to all files and directories in that subdirectory, while setting the defaults with the setfacl -d command ensures that files and directories created&lt;br /&gt;
later will receive these same ACLs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the 'share_hpc' directory.  If you also want&lt;br /&gt;
them to be able to write or modify files in that directory then change the ':rX' to ':rwX' instead. e.g. 'setfacl -d -m g:ksu-cis-hpc:rwX -R share_hpc'&lt;br /&gt;
&lt;br /&gt;
If you want to know what groups you belong to use the line below.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
groups&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If your group does not own any nodes, you can still request a group name and manage the participants yourself&lt;br /&gt;
by emailing us at beocat@cs.ksu.edu.&lt;br /&gt;
If you want to share a directory with only a few people you can manage your ACLs using individual usernames&lt;br /&gt;
instead of with a group.&lt;br /&gt;
&lt;br /&gt;
You can use the '''getfacl''' command to see which groups and users have access to a given directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
getfacl share_hpc&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ACLs give you great flexibility in controlling file access at the&lt;br /&gt;
group level.  Below is a more advanced example where I set up a directory to be shared with&lt;br /&gt;
my ksu-cis-hpc group, Dan's ksu-cis-dan group, and an individual user 'mozes' who I also want&lt;br /&gt;
to have write access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc_dan_mozes&lt;br /&gt;
# acls are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
getfacl share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc_dan_mozes&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  user:mozes:rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  group:ksu-cis-dan:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:user:mozes:rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:group:ksu-cis-dan:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Openly sharing files on the web===&lt;br /&gt;
&lt;br /&gt;
If you create a 'public_html' directory in your home directory, then any files put there will be shared&lt;br /&gt;
openly on the web.  There is no way to restrict who has access to those files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
cd&lt;br /&gt;
mkdir public_html&lt;br /&gt;
# Opt-in to letting the webserver access your home directory:&lt;br /&gt;
setfacl -m g:public_html:x ~/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then access the data from a web browser using the URL:&lt;br /&gt;
&lt;br /&gt;
http://people.beocat.ksu.edu/~your_user_name&lt;br /&gt;
&lt;br /&gt;
This will show a list of the files you have in your public_html subdirectory.&lt;br /&gt;
&lt;br /&gt;
===Globus===&lt;br /&gt;
&lt;br /&gt;
We have a page here dedicated to [[Globus]]&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of Slurm's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to sbatch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  --array=n[-m[:s]]&lt;br /&gt;
     Submits a so called Array Job, i.e. an array of identical tasks being differentiated only by an index number and being treated by Slurm&lt;br /&gt;
     almost like a series of jobs. The option argument to --array specifies the number of array job tasks and the index number which will be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SLURM_ARRAY_TASK_ID. The option&lt;br /&gt;
     arguments n, and m will be available through the environment variables SLURM_ARRAY_TASK_MIN and SLURM_ARRAY_TASK_MAX.&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or a range with a step size.&lt;br /&gt;
     Hence, the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SLURM_ARRAY_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array jobs are commonly used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks follow a slightly different naming convention (which can be controlled in the same way as mentioned above).&lt;br /&gt;
 &lt;br /&gt;
     slurm-%A_%a.out&lt;br /&gt;
&lt;br /&gt;
     %A is the SLURM_ARRAY_JOB_ID, and %a is the SLURM_ARRAY_TASK_ID&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses, one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1 I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable, and submit each script. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=50-200:50&lt;br /&gt;
RUNSIZE=$SLURM_ARRAY_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and Slurm understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to take each line of my dataset and generate a new submit script and submit the job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     sbatch ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as sbatch has to verify each job as it is submitted. This can be done easily with array jobs, as long as you know the number of lines in the dataset. This number can be obtained with '''wc -l dataset.txt'''; in this case, let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses command substitution via backticks, and has the sed command print only line number $SLURM_ARRAY_TASK_ID of the file dataset.txt.&lt;br /&gt;
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so sbatch doesn't have to verify as many submissions.&lt;br /&gt;
&lt;br /&gt;
To give you an idea of the time saved: submitting 1 job takes 1-2 seconds.  By extension, if you are submitting 5,000 jobs, that is 5,000-10,000 seconds, or roughly 1.5-3 hours.&lt;br /&gt;
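&lt;br /&gt;
If you would rather not hard-code the line count in the --array range, you can compute it at submit time.  The sketch below assumes the array script is saved as a file named array_job.sh (the name is only a placeholder).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Count the lines in the dataset and use that count as the upper bound of the array&lt;br /&gt;
# (array_job.sh is a placeholder name for the array script)&lt;br /&gt;
sbatch --array=1-$(wc -l &amp;lt; dataset.txt) array_job.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;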
&lt;br /&gt;
== Checkpoint/Restart using DMTCP ==&lt;br /&gt;
&lt;br /&gt;
DMTCP (Distributed MultiThreaded CheckPointing) is software that will checkpoint your application without modification, and&lt;br /&gt;
can be set up to automatically restart your job from the last checkpoint if, for example, the node you are running on fails.&lt;br /&gt;
This has been tested successfully&lt;br /&gt;
on Beocat for some scalar and OpenMP codes, but has failed on all MPI tests so far.  We would like to encourage users to&lt;br /&gt;
try DMTCP out if their non-MPI jobs run longer than 24 hours.  If you want to try this, please contact us first since we are still&lt;br /&gt;
experimenting with DMTCP.&lt;br /&gt;
&lt;br /&gt;
The sample job submission script below shows how dmtcp_launch is used to start the application, then dmtcp_restart is used to start from a checkpoint if the job has failed and been rescheduled.&lt;br /&gt;
If you are putting this in an array script, then add the Slurm array task ID to the end of the checkpoint directory name&lt;br /&gt;
like &amp;lt;B&amp;gt;ckptdir=ckpt-$SLURM_ARRAY_TASK_ID&amp;lt;/B&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --mem=50G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS/2016.4-foss-2017beocatb-hybrid&lt;br /&gt;
  module load DMTCP&lt;br /&gt;
  module list&lt;br /&gt;
  &lt;br /&gt;
  ckptdir=ckpt&lt;br /&gt;
  mkdir -p $ckptdir&lt;br /&gt;
  export DMTCP_CHECKPOINT_DIR=$ckptdir&lt;br /&gt;
  &lt;br /&gt;
  if ! ls -1 $ckptdir | grep -c dmtcp_restart_script &amp;gt; /dev/null&lt;br /&gt;
  then&lt;br /&gt;
     echo &amp;quot;Using dmtcp_launch to start the app the first time&amp;quot;&lt;br /&gt;
     dmtcp_launch --no-coordinator mpirun -np 1 -x OMP_NUM_THREADS=4 gmx_mpi mdrun -nsteps 50000 -ntomp 4 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  else&lt;br /&gt;
     echo &amp;quot;Using dmtcp_restart from $ckptdir to continue from a checkpoint&amp;quot;&lt;br /&gt;
     dmtcp_restart $ckptdir/*.dmtcp&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
You will need to run several tests to verify that DMTCP is working properly with your application.&lt;br /&gt;
First, run a short test without DMTCP and another with DMTCP with the checkpoint interval set to 5 minutes&lt;br /&gt;
by adding the line &amp;lt;B&amp;gt;export DMTCP_CHECKPOINT_INTERVAL=300&amp;lt;/B&amp;gt; to your script.  Then use &amp;lt;B&amp;gt;kstat -d 1&amp;lt;/B&amp;gt; to&lt;br /&gt;
check that the memory in both runs is close to the same.  Also use this information to calculate the time &lt;br /&gt;
that each checkpoint takes.  In most cases I've seen times less than a minute for checkpointing that will normally&lt;br /&gt;
be done once each hour.  If your application is taking more time, let us know.  Sometimes this can be sped up&lt;br /&gt;
by simply turning off compression by adding the line &amp;lt;B&amp;gt;export DMTCP_GZIP=0&amp;lt;/B&amp;gt;.  Make sure to remove the&lt;br /&gt;
line where you set the checkpoint interval to 300 seconds so that the default time of once per hour will be used.&lt;br /&gt;
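&lt;br /&gt;
For the short test described above, the two environment variables would be set in your job script as shown below.&lt;br /&gt;
&lt;br /&gt;
 # Checkpoint every 5 minutes for testing only; remove this line for production&lt;br /&gt;
 # runs so the default interval of once per hour is used&lt;br /&gt;
 export DMTCP_CHECKPOINT_INTERVAL=300&lt;br /&gt;
 &lt;br /&gt;
 # Optionally disable checkpoint compression if checkpoints are slow&lt;br /&gt;
 export DMTCP_GZIP=0&lt;br /&gt;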
&lt;br /&gt;
After verifying that your code completes using DMTCP and does not take significantly more time or memory, you&lt;br /&gt;
will need to start a run then &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; it after the first checkpoint, then resubmit the same script to make &lt;br /&gt;
sure that it restarts and runs to completion.  If you are working with an array job script, the last step is to try a few&lt;br /&gt;
array tasks at once to make sure there is no conflict between the jobs.&lt;br /&gt;
&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'srun'. srun uses the exact same command-line arguments as sbatch, but you need to add the following arguments at the end: &amp;lt;tt&amp;gt;--pty bash&amp;lt;/tt&amp;gt;. If no node is available with your resource requirements, srun will tell you something like the following:&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
 srun: Force Terminated job 217&lt;br /&gt;
 srun: error: CPU count per node can not be satisfied&lt;br /&gt;
 srun: error: Unable to allocate resources: Requested node configuration is not available&lt;br /&gt;
Note that, like sbatch, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
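&lt;br /&gt;
For example, an interactive session requesting 2 cores, 2 GB per core, and one hour could be started as shown below (the resource values are only illustrative).&lt;br /&gt;
 srun --cpus-per-task=2 --mem-per-cpu=2G --time=1:00:00 --pty bash&lt;br /&gt;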
&lt;br /&gt;
== Connecting to an existing job ==&lt;br /&gt;
You can connect to an existing job using &amp;lt;B&amp;gt;srun&amp;lt;/B&amp;gt; in the same way that the &amp;lt;B&amp;gt;MonitorNode&amp;lt;/B&amp;gt; command&lt;br /&gt;
allowed us to in the old cluster.  This is essentially like using ssh to get into the node where your job is running which&lt;br /&gt;
can be very useful in allowing you to look at files in /tmp/job# or in running &amp;lt;B&amp;gt;htop&amp;lt;/B&amp;gt; to view the &lt;br /&gt;
activity level for your job.&lt;br /&gt;
&lt;br /&gt;
 srun --jobid=# --pty bash                        where '#' is the job ID number&lt;br /&gt;
&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not support modifying job parameters once the job has been submitted. It can be done, but there are numerous catches, and all of the variations can be a bit problematic; it is normally easier to simply delete the job (using '''scancel ''jobid''''') and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
&lt;br /&gt;
As it is unsupported, this is an exercise left to the reader. A starting point is &amp;lt;tt&amp;gt;man scontrol&amp;lt;/tt&amp;gt;&lt;br /&gt;
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;--gres=killable:1&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
&lt;br /&gt;
''Note: This is a submit-time only request, it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;scancel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
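&lt;br /&gt;
For example, the flag can go in the script as a directive or on the command line (the script name is only a placeholder):&lt;br /&gt;
 #SBATCH --gres=killable:1&lt;br /&gt;
 # or, at submit time:&lt;br /&gt;
 sbatch --gres=killable:1 MyScript.sh&lt;br /&gt;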
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
Some users are members of projects that have contributed to Beocat. When a group has contributed nodes, the group gets access to a &amp;quot;partition&amp;quot; giving its members priority on those nodes.&lt;br /&gt;
&lt;br /&gt;
In most situations, the scheduler will automatically add those priority partitions to the jobs as submitted. You should not need to include a partition list in your job submission.&lt;br /&gt;
&lt;br /&gt;
There are currently just a few exceptions that we will not automatically add:&lt;br /&gt;
* ksu-chem-mri.q&lt;br /&gt;
* ksu-gen-gpu.q&lt;br /&gt;
* ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
If you have access to any of the non-automatic partitions, and need the resources in that partition, you can then alter your &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines to include your new partition:&lt;br /&gt;
 #SBATCH --partition=ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
Otherwise, you shouldn't modify the partition line at all unless you really know what you're doing.&lt;br /&gt;
&lt;br /&gt;
== Graphical Applications ==&lt;br /&gt;
Some applications are graphical and need to have some graphical input/output. We currently accomplish this with X11 forwarding or [[OpenOnDemand]]&lt;br /&gt;
=== OpenOnDemand ===&lt;br /&gt;
[[OpenOnDemand]] is likely the easier and more performant way to run a graphical application on the cluster.&lt;br /&gt;
# Visit [https://ondemand.beocat.ksu.edu/ ondemand] and log in with your cluster credentials.&lt;br /&gt;
# Check the &amp;quot;Interactive Apps&amp;quot; dropdown. We may have a workflow ready for you. If not choose the desktop.&lt;br /&gt;
# Select the resources you need&lt;br /&gt;
# Select launch&lt;br /&gt;
# A job is now submitted to the cluster and once the job is started you'll see a Connect button&lt;br /&gt;
# Use the app as needed. If using the desktop, start your graphical application.&lt;br /&gt;
&lt;br /&gt;
=== X11 Forwarding ===&lt;br /&gt;
==== Connecting with an X11 client ====&lt;br /&gt;
===== Windows =====&lt;br /&gt;
If you are running Windows, we recommend MobaXTerm as your file/ssh manager because it is one relatively simple tool that does everything. MobaXTerm also automatically connects with X11 forwarding enabled.&lt;br /&gt;
===== Linux/OSX =====&lt;br /&gt;
Both Linux and OSX can connect in an X11 forwarding mode. Linux will have all of the tools you need installed already; OSX will need [https://www.xquartz.org/ XQuartz] installed.&lt;br /&gt;
&lt;br /&gt;
Then you will need to change your 'ssh' command slightly:&lt;br /&gt;
&lt;br /&gt;
 ssh -Y eid@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
The '''-Y''' argument tells ssh to set up X11 forwarding.&lt;br /&gt;
==== Starting a Graphical Job ====&lt;br /&gt;
All graphical jobs, by design, must be interactive, so we'll use the srun command. On a headnode, we run the following:&lt;br /&gt;
 # load an X11 enabled application&lt;br /&gt;
 module load Octave&lt;br /&gt;
 # start an X11 job, sbatch arguments are accepted for srun as well, 1 node, 1 hour, 1 gb of memory&lt;br /&gt;
 srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 octave --gui&lt;br /&gt;
&lt;br /&gt;
Because these jobs are interactive, they may not be able to run at all times, depending on how busy the scheduler is at any point in time. '''--pty --x11''' are required arguments setting up the job, and '''octave --gui''' is the command to run inside the job.&lt;br /&gt;
&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The sacct tool will read Slurm's accounting database and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== sacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
sacct -j 1122334455 -l&lt;br /&gt;
# if you don't know the job id, you can look at your jobs started since some day:&lt;br /&gt;
sacct -S 2017-01-01&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
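When the full -l output is overwhelming, sacct can print just the columns that usually matter for debugging. A minimal sketch, assuming a standard sacct installation; the job ID is a placeholder:

```shell
# Print only the fields most useful for diagnosing a job.
# 1122334455 is a placeholder job ID; substitute your own.
sacct -j 1122334455 --format=JobID,JobName,Partition,Elapsed,State,ExitCode,MaxRSS,ReqMem
```

Comparing MaxRSS against ReqMem, and Elapsed against your time limit, covers the two most common failure modes discussed below.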
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|218||218||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||12||00:00:00||FAILED||2:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=12,mem=1G,node=1||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.batch||218.batch||batch||||137940K||dwarf37||0||137940K||1576K||dwarf37||0||1576K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||1.36G||0||0||0||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.0||218.0||qqqqstat||||204212K||dwarf37||0||204212K||1420K||dwarf37||0||1420K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||196.52M||Unknown||Unknown||Unknown||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the columns showing Elapsed and State, you can see that they show 00:00:00 and FAILED respectively. This means that the job started and then promptly ended. This points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
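One quick way to catch such typos before resubmitting is to let bash parse the script without running it. A small sketch; the script name is hypothetical:

```shell
# Write a tiny submit script to disk for demonstration (the name is hypothetical).
printf '%s\n' '#!/bin/bash' '#SBATCH --time=0:10:00' 'echo hello' > MyScript.sh

# bash -n parses the script without executing it, so shell syntax errors
# (one common cause of jobs that fail instantly) show up right away.
if bash -n MyScript.sh; then echo "syntax OK"; fi
```

Note that bash -n only catches shell syntax errors, not mistyped program names or bad #SBATCH options.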
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|220||220||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:01:27||TIMEOUT||0:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=1,mem=1G,node=1||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.batch||220.batch||batch||||370716K||dwarf37||0||370716K||7060K||dwarf37||0||7060K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:28||CANCELLED||0:15||1.23G||0||0||0||1Gn||0||0.16M||dwarf37||0||0.16M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.0||220.0||sleep||||204212K||dwarf37||0||107916K||1000K||dwarf37||0||620K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:27||CANCELLED||0:15||1.54G||Unknown||Unknown||Unknown||1Gn||0||0.05M||dwarf37||0||0.05M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we can see some pointers to the issue. The job ran out of time (TIMEOUT) and then was killed (CANCELLED).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|221||221||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:00:00||CANCELLED by 0||0:0||||Unknown||Unknown||Unknown||1Mn||||||||||||||||||||||||cpu=1,mem=1M,node=1||cpu=1,mem=1M,node=1&lt;br /&gt;
|-&lt;br /&gt;
|221.batch||221.batch||batch||||137940K||dwarf37||0||137940K||1144K||dwarf37||0||1144K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:00:01||CANCELLED||0:15||2.62G||0||0||0||1Mn||0||0||dwarf37||65534||0||0||dwarf37||65534||0||||||||cpu=1,mem=1M,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we see it was &amp;quot;CANCELLED by 0&amp;quot;, then we look at the AllocTRES column to see our allocated resources, and see that 1MB of memory was granted. Combine that with the column &amp;quot;MaxRSS&amp;quot; and we see that the memory granted was less than the memory we tried to use, thus the job was &amp;quot;CANCELLED&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=941</id>
		<title>AdvancedSlurm</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=941"/>
		<updated>2023-08-09T19:07:27Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* File Access */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SlurmBasics]] page, we have a couple other requestable resources:&lt;br /&gt;
 Valid gres options are:&lt;br /&gt;
 gpu[[:type]:count]&lt;br /&gt;
 fabric[[:type]:count]&lt;br /&gt;
Generally, if you don't know whether you need a particular resource, you should use the default. The list of valid gres options can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;srun --gres=help&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Fabric ===&lt;br /&gt;
We currently offer 3 &amp;quot;fabrics&amp;quot; as request-able resources in Slurm. The &amp;quot;count&amp;quot; specified is the line-rate (in Gigabits-per-second) of the connection on the node.&lt;br /&gt;
==== Infiniband ====&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. InfiniBand does absolutely no good if running on a single machine. InfiniBand is a high-speed host-to-host communication fabric. It is (most-often) used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could run just fine, except that the submitter requested InfiniBand, and all the nodes with InfiniBand were currently busy. In fact, some of our fastest nodes do not have InfiniBand, so by requesting it when you don't need it, you are actually slowing down your job. To request Infiniband, add &amp;lt;tt&amp;gt;--gres=fabric:ib:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
==== ROCE ====&lt;br /&gt;
ROCE, like InfiniBand, is a high-speed host-to-host communication layer, again used most often with MPI. Most of our nodes are ROCE-enabled, but this will let you guarantee that the nodes allocated to your job can communicate with ROCE. To request ROCE, add &amp;lt;tt&amp;gt;--gres=fabric:roce:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
&lt;br /&gt;
==== Ethernet ====&lt;br /&gt;
Ethernet is another communication fabric. All of our nodes are connected by ethernet; this option simply allows you to specify the interconnect speed. Speeds are selected in units of Gbps, with all nodes supporting 1Gbps or above. The currently available speeds for ethernet are: &amp;lt;tt&amp;gt;1, 10, 40, and 100&amp;lt;/tt&amp;gt;. To select nodes with 40Gbps and above, you could specify &amp;lt;tt&amp;gt;--gres=fabric:eth:40&amp;lt;/tt&amp;gt; on your sbatch command-line.  Since ethernet is used to connect to the file server, this can be used to select nodes that have fast access for applications doing heavy IO.  The Dwarves and Heroes have 40 Gbps ethernet and we measure single-stream performance as high as 20 Gbps, but if your application&lt;br /&gt;
requires heavy IO then you'd want to avoid the Moles, which are connected to the file server with only 1 Gbps ethernet.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. 'kstat -g' will show you the GPU nodes and the jobs running on them.  To request a GPU, add, for example, &amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt; to request 1 GPU for your job; if your job uses multiple nodes, the number of GPUs requested is per-node.  You can also request a given type of GPU (use 'kstat -g -l' to show the types) by using &amp;lt;tt&amp;gt;--gres=gpu:geforce_gtx_1080_ti:1&amp;lt;/tt&amp;gt; for a 1080Ti GPU on the Wizards or Dwarves, or &amp;lt;tt&amp;gt;--gres=gpu:quadro_gp100:1&amp;lt;/tt&amp;gt; for the P100 GPUs on Wizard20-21 that are best for 64-bit codes like Vasp.  Most of these GPU nodes are owned by various groups.  If you want access to GPU nodes and your group does not own any, we can add you to the &amp;lt;tt&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/tt&amp;gt; group that has priority on Dwarf36-39.  For more information on compiling CUDA code, follow this [[CUDA]] link.&lt;br /&gt;
&lt;br /&gt;
A listing of the current types of gpus can be gathered with this command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
scontrol show nodes | grep CfgTRES | tr ',' '\n' | awk -F '[:=]' '/gres\/gpu:/ { print $2 }' | sort -u&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
At the time of this writing, that command produces this list:&lt;br /&gt;
* geforce_gtx_1080_ti&lt;br /&gt;
* geforce_rtx_2080_ti&lt;br /&gt;
* geforce_rtx_3090&lt;br /&gt;
* quadro_gp100&lt;br /&gt;
* rtx_a4000&lt;br /&gt;
&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
''Intra''node jobs run on many cores in the same node. These jobs can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or any programming language that has the concept of ''threads''. Often, your program will need to know how many cores you want it to use, and many will use all available cores if not told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the sbatch directives '&amp;lt;tt&amp;gt;--nodes=1 --cpus-per-task=n&amp;lt;/tt&amp;gt;' or '&amp;lt;tt&amp;gt;--nodes=1 --ntasks-per-node=n&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $SLURM_CPUS_ON_NODE to tell how many cores you've been allocated.&lt;br /&gt;
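For instance, a threaded (OpenMP-style) job can read the core count Slurm provides rather than hard-coding it. A minimal sketch; the program name is hypothetical:

```shell
#!/bin/bash
#SBATCH --nodes=1 --cpus-per-task=8

# Use the core count Slurm allocated; fall back to 1 outside a job.
export OMP_NUM_THREADS=${SLURM_CPUS_ON_NODE:-1}
echo "using $OMP_NUM_THREADS threads"

# ./my_threaded_program    # hypothetical program that honors OMP_NUM_THREADS
```

Setting the thread count from the allocation keeps the job from oversubscribing a shared node.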
&lt;br /&gt;
=== Internode (MPI) jobs ===&lt;br /&gt;
''Inter''node jobs can utilize many cores on one or more nodes. Communicating between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI, but MPI also allows an application to run on multiple cores within a node. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;--cpus-per-task=''n''&amp;lt;/tt&amp;gt;', you would use '&amp;lt;tt&amp;gt;--nodes=''n'' --tasks-per-node=''m''&amp;lt;/tt&amp;gt;' ''or'' '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks=''o''&amp;lt;/tt&amp;gt;' for your sbatch request, where ''n'' is the number of nodes you want, ''m'' is the number of cores per node you need, and ''o'' is the total number of cores you need.&lt;br /&gt;
&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=6 --ntasks-per-node=4&amp;lt;/tt&amp;gt; will give you 4 cores on each of 6 nodes for a total of 24 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=40&amp;lt;/tt&amp;gt; will give you 40 cores spread across any number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=10 --ntasks=100&amp;lt;/tt&amp;gt; will give you a total of 100 cores across 10 nodes.&lt;br /&gt;
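Putting those directives together, a minimal MPI submit script might look like the sketch below; the module and program names are hypothetical:

```shell
#!/bin/bash
#SBATCH --nodes=4 --ntasks-per-node=8   # 32 cores total
#SBATCH --mem-per-cpu=1G
#SBATCH --time=2:00:00

module load OpenMPI        # assumes an OpenMPI module exists on the cluster

# mpirun picks up the task layout from Slurm, so no -np flag is needed.
mpirun ./my_mpi_program    # hypothetical MPI-enabled program
```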
&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
Memory requests are easiest when they are specified '''per core'''. For instance, if you specified the following: '&amp;lt;tt&amp;gt;--ntasks=20 --mem-per-cpu=20G&amp;lt;/tt&amp;gt;', your job would have access to 400GB of memory total.&lt;br /&gt;
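The arithmetic is just the number of tasks times the per-core memory, as this small check shows:

```shell
# 20 tasks, 20 GB per core => 400 GB total for the job
NTASKS=20
MEM_PER_CPU_G=20
echo "total: $((NTASKS * MEM_PER_CPU_G))G"
```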
== Other Handy Slurm Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs not related to resource requests is to have Slurm email you when a job changes its status. This may require two directives to sbatch: &amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt;.&lt;br /&gt;
==== --mail-type ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; is used to tell Slurm to notify you about certain conditions. Options are comma-separated and include the following:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Option!!Explanation&lt;br /&gt;
|-&lt;br /&gt;
| NONE || This disables event-based mail&lt;br /&gt;
|-&lt;br /&gt;
| BEGIN || Sends a notification when the job begins&lt;br /&gt;
|-&lt;br /&gt;
| END || Sends a notification when the job ends&lt;br /&gt;
|-&lt;br /&gt;
| FAIL || Sends a notification when the job fails.&lt;br /&gt;
|-&lt;br /&gt;
| REQUEUE || Sends a notification if the job is put back into the queue from a running state&lt;br /&gt;
|-&lt;br /&gt;
| STAGE_OUT || Burst buffer stage out and teardown completed&lt;br /&gt;
|-&lt;br /&gt;
| ALL || Equivalent to BEGIN,END,FAIL,REQUEUE,STAGE_OUT&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT || Notifies if the job ran out of time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_90 || Notifies when the job has used 90% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_80 || Notifies when the job has used 80% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_50 || Notifies when the job has used 50% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| ARRAY_TASKS || Modifies the BEGIN, END, and FAIL options to apply to each array task (instead of notifying once for the entire job)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== --mail-user ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; is optional. It is only needed if you intend to send these job status updates to a different e-mail address than what you provided in the [https://acount.beocat.ksu.edu/user Account Request Page]. It is specified with the following arguments to sbatch: &amp;lt;tt&amp;gt;--mail-user=someone@somecompany.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-J ''JobName''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
&lt;br /&gt;
=== Separating Output Streams ===&lt;br /&gt;
Normally, Slurm will create one output file, containing both STDERR and STDOUT. If you want both of these to be separated into two files, you can use the sbatch directives '&amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt;' and '&amp;lt;tt&amp;gt;--error&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! option !! default !! example&lt;br /&gt;
|-&lt;br /&gt;
| --output || slurm-%j.out || slurm-206.out&lt;br /&gt;
|-&lt;br /&gt;
| --error || slurm-%j.out || slurm-206.out&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;tt&amp;gt;%j&amp;lt;/tt&amp;gt; above indicates that it should be replaced with the job id.&lt;br /&gt;
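For instance, to split the streams with job-specific names, a possible pair of directives (the file names are just examples):

```shell
#SBATCH --output=MyJob-%j.out   # STDOUT; %j is replaced by the job id
#SBATCH --error=MyJob-%j.err    # STDERR goes to its own file
```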
&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, jobs run from the directory you submitted them from. If your program needs a different working directory, you can use the '&amp;lt;tt&amp;gt;--chdir=''directory''&amp;lt;/tt&amp;gt;' directive to set the working directory for the job.&lt;br /&gt;
=== Running in a specific class of machine ===&lt;br /&gt;
If you want to run on a specific class of machines, e.g., the Dwarves, you can add the flag &amp;quot;--constraint=dwarves&amp;quot; to select any of those machines.&lt;br /&gt;
&lt;br /&gt;
=== Processor Constraints ===&lt;br /&gt;
Because Beocat is a heterogeneous cluster (we have machines from many years in the cluster), not all of our processors support every new and fancy feature. You might have some applications that require newer processor features, so we provide a mechanism to request those.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--constraint&amp;lt;/tt&amp;gt; tells the cluster to apply constraints to the types of nodes that the job can run on. For instance, we know of several applications that must be run on chips that have &amp;quot;AVX&amp;quot; processor extensions. To do that, you would specify &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; on your ''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt;'' '''or''' ''&amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;'' command lines.&lt;br /&gt;
Using &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; will prohibit your job from running on the Mages, while &amp;lt;tt&amp;gt;--constraint=avx2&amp;lt;/tt&amp;gt; will eliminate the Elves as well as the Mages.&lt;br /&gt;
&lt;br /&gt;
=== Slurm Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to set up your scripts correctly. Here is a listing of environment variables that Slurm makes available to you. Of course, the values of these variables will differ based on many factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
CUDA_VISIBLE_DEVICES=NoDevFiles&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
GPU_DEVICE_ORDINAL=NoDevFiles&lt;br /&gt;
HOSTNAME=dwarf37&lt;br /&gt;
SLURM_CHECKPOINT_IMAGE_DIR=/var/slurm/checkpoint&lt;br /&gt;
SLURM_CLUSTER_NAME=beocat&lt;br /&gt;
SLURM_CPUS_ON_NODE=1&lt;br /&gt;
SLURM_DISTRIBUTION=cyclic&lt;br /&gt;
SLURMD_NODENAME=dwarf37&lt;br /&gt;
SLURM_GTIDS=0&lt;br /&gt;
SLURM_JOB_CPUS_PER_NODE=1&lt;br /&gt;
SLURM_JOB_GID=163587&lt;br /&gt;
SLURM_JOB_ID=202&lt;br /&gt;
SLURM_JOBID=202&lt;br /&gt;
SLURM_JOB_NAME=slurm_simple.sh&lt;br /&gt;
SLURM_JOB_NODELIST=dwarf37&lt;br /&gt;
SLURM_JOB_NUM_NODES=1&lt;br /&gt;
SLURM_JOB_PARTITION=batch.q,killable.q&lt;br /&gt;
SLURM_JOB_QOS=normal&lt;br /&gt;
SLURM_JOB_UID=163587&lt;br /&gt;
SLURM_JOB_USER=mozes&lt;br /&gt;
SLURM_LAUNCH_NODE_IPADDR=10.5.16.37&lt;br /&gt;
SLURM_LOCALID=0&lt;br /&gt;
SLURM_MEM_PER_NODE=1024&lt;br /&gt;
SLURM_NNODES=1&lt;br /&gt;
SLURM_NODEID=0&lt;br /&gt;
SLURM_NODELIST=dwarf37&lt;br /&gt;
SLURM_NPROCS=1&lt;br /&gt;
SLURM_NTASKS=1&lt;br /&gt;
SLURM_PRIO_PROCESS=0&lt;br /&gt;
SLURM_PROCID=0&lt;br /&gt;
SLURM_SRUN_COMM_HOST=10.5.16.37&lt;br /&gt;
SLURM_SRUN_COMM_PORT=37975&lt;br /&gt;
SLURM_STEP_ID=0&lt;br /&gt;
SLURM_STEPID=0&lt;br /&gt;
SLURM_STEP_LAUNCHER_PORT=37975&lt;br /&gt;
SLURM_STEP_NODELIST=dwarf37&lt;br /&gt;
SLURM_STEP_NUM_NODES=1&lt;br /&gt;
SLURM_STEP_NUM_TASKS=1&lt;br /&gt;
SLURM_STEP_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_SUBMIT_DIR=/homes/mozes&lt;br /&gt;
SLURM_SUBMIT_HOST=dwarf37&lt;br /&gt;
SLURM_TASK_PID=23408&lt;br /&gt;
SLURM_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_TOPOLOGY_ADDR=due1121-prod-core-40g-a1,due1121-prod-core-40g-c1.due1121-prod-sw-100g-a9.dwarf37&lt;br /&gt;
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node&lt;br /&gt;
SLURM_UMASK=0022&lt;br /&gt;
SRUN_DEBUG=3&lt;br /&gt;
TERM=screen-256color&lt;br /&gt;
TMPDIR=/tmp&lt;br /&gt;
USER=mozes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is nice to know what hosts you have access to during a job. Check SLURM_JOB_NODELIST for that. There are lots of useful environment variables here; I will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly-used variables we see used are $SLURM_CPUS_ON_NODE, $HOSTNAME, and $SLURM_JOB_ID.&lt;br /&gt;
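As a small example, a job script can use those variables directly (with fallbacks so the script also behaves outside a job):

```shell
#!/bin/bash
# Report where and how the job is running; the defaults apply outside Slurm.
echo "job ${SLURM_JOB_ID:-none} on $(hostname) with ${SLURM_CPUS_ON_NODE:-1} cores"

# In a multi-node job, the compact nodelist can be expanded to one host per line:
# scontrol show hostnames "$SLURM_JOB_NODELIST"
```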
&lt;br /&gt;
== Running from a sbatch Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'sbatch --mem-per-cpu=2G --time=10:00 --cpus-per-task=8 -J MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which outlines all of these for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample sbatch script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of the line is ignored. However, in&lt;br /&gt;
## the case of sbatch, lines beginning with #SBATCH are commands for sbatch&lt;br /&gt;
## itself, so I have taken the convention here of starting *every* line with a&lt;br /&gt;
## '#'. Just delete the first one if you want to use that line, and then modify&lt;br /&gt;
## it to your own purposes. The only exception here is the first line, which&lt;br /&gt;
## *must* be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## There is one strict rule for guaranteeing Slurm reads all of your options:&lt;br /&gt;
## Do not put *any* lines above your resource requests that aren't either:&lt;br /&gt;
##    1) blank. (no other characters)&lt;br /&gt;
##    2) comments (lines must begin with '#')&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##SBATCH --mem-per-cpu=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime in DD-HH:MM:SS form. Default is 1 hour (1:00:00)&lt;br /&gt;
##SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it.&lt;br /&gt;
##SBATCH --gres=fabric:ib:1&lt;br /&gt;
&lt;br /&gt;
## GPU directive. If you don't know what this is, you probably don't need it.&lt;br /&gt;
##SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
## number of cores/nodes:&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from emailing us at beocat@cs.ksu.edu to see how we can assist in&lt;br /&gt;
## getting your job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
##SBATCH --cpus-per-task=1&lt;br /&gt;
##SBATCH --cpus-per-task=12&lt;br /&gt;
##SBATCH --nodes=2 --tasks-per-node=1&lt;br /&gt;
##SBATCH --ntasks=20&lt;br /&gt;
&lt;br /&gt;
## Constraints for this job. Maybe you need to run on the elves&lt;br /&gt;
##SBATCH --constraint=elves&lt;br /&gt;
## or perhaps you just need avx processor extensions&lt;br /&gt;
##SBATCH --constraint=avx&lt;br /&gt;
&lt;br /&gt;
## Output file name. Default is slurm-%j.out where %j is the job id.&lt;br /&gt;
##SBATCH --output=MyJobTitle.o%j&lt;br /&gt;
&lt;br /&gt;
## Split the errors into a separate file. Default is the same as output.&lt;br /&gt;
##SBATCH --error=MyJobTitle.e%j&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##SBATCH -J MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## Send email when certain criteria are met.&lt;br /&gt;
## Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to&lt;br /&gt;
## BEGIN, END, FAIL, REQUEUE,  and  STAGE_OUT),  STAGE_OUT  (burst buffer stage&lt;br /&gt;
## out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent&lt;br /&gt;
## of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),&lt;br /&gt;
## TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send&lt;br /&gt;
## emails for each array task). Multiple type values may be specified in a&lt;br /&gt;
## comma separated list. Unless the  ARRAY_TASKS  option  is specified, mail&lt;br /&gt;
## notifications on job BEGIN, END and FAIL apply to a job array as a whole&lt;br /&gt;
## rather than generating individual email messages for each task in the job&lt;br /&gt;
## array.&lt;br /&gt;
##SBATCH --mail-type=ALL&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
## Default is to send the mail to the e-mail address entered on the account&lt;br /&gt;
## request form.&lt;br /&gt;
##SBATCH --mail-user=myemail@ksu.edu&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== File Access ==&lt;br /&gt;
Beocat has a variety of options for storing and accessing your files.  &lt;br /&gt;
Every user has a home directory for general use, which is limited in size but has decent file-access performance.  Those needing more storage may purchase /bulk subdirectories, which have the same decent performance&lt;br /&gt;
but are not backed up.  The /scratch filesystem provides temporary space for intermediary files that are needed across multiple jobs, or for files that are too large for your home directory. The /fastscratch file system is a ZFS host with many NVMe drives that provides much faster&lt;br /&gt;
temporary file access.  When fast IO is critical to application performance, /fastscratch, the local disk on each node, or a&lt;br /&gt;
RAM disk are the best options.&lt;br /&gt;
&lt;br /&gt;
===Home directory===&lt;br /&gt;
&lt;br /&gt;
Every user has a &amp;lt;tt&amp;gt;/homes/''username''&amp;lt;/tt&amp;gt; directory that they drop into when they log into Beocat.  &lt;br /&gt;
The home directory is for general use and provides decent performance for most file IO.  &lt;br /&gt;
Disk space in each home directory is limited to 1 TB, so larger files should be kept in a purchased /bulk&lt;br /&gt;
directory, and there is a limit of 100,000 files in each subdirectory in your account.&lt;br /&gt;
This file system is fully redundant, so 3 specific hard disks would need to fail before any data was lost.&lt;br /&gt;
All files will soon be backed up nightly to a separate file server in Nichols Hall, so if you do accidentally &lt;br /&gt;
delete something it can be recovered.&lt;br /&gt;
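To see how close you are to those limits, ordinary tools suffice:

```shell
# Total size of your home directory
du -sh "$HOME"

# Count entries in one directory (the 100,000-file limit is per subdirectory)
find "$HOME" -maxdepth 1 | wc -l
```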
&lt;br /&gt;
===Bulk directory===&lt;br /&gt;
&lt;br /&gt;
Bulk data storage may be provided at a cost of $45/TB/year billed monthly. Due to the cost, directories will be provided when we are contacted and provided with payment information.&lt;br /&gt;
&lt;br /&gt;
===Fast Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /fastscratch file system is faster than /bulk or /homes or /scratch.&lt;br /&gt;
In order to use fastscratch, you first need to make a directory for yourself.  &lt;br /&gt;
Fast Scratch is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the fastscratch file system may get purged after 30 days.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /fastscratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Local disk===&lt;br /&gt;
&lt;br /&gt;
If you are running on a single node, it may also be faster to access your files from the local disk&lt;br /&gt;
on that node.  Each job creates a subdirectory /tmp/job# (where '#' is the job ID number) on the&lt;br /&gt;
local disk of each node the job uses.  It can be accessed simply by writing to /tmp rather than&lt;br /&gt;
needing to use /tmp/job#.  &lt;br /&gt;
&lt;br /&gt;
You may need to copy input files to the&lt;br /&gt;
local disk at the start of your script, or set the output directory for your application to point&lt;br /&gt;
to the local disk.  You will then need to copy any files you want to keep off the local disk before&lt;br /&gt;
the job finishes, since Slurm will remove all files in your job's directory on /tmp when the job&lt;br /&gt;
completes or aborts.  When we get the scratch file system working with Lustre, it may&lt;br /&gt;
end up being faster than accessing local disk, so we will post the access rates for each.  Use 'kstat -l -h'&lt;br /&gt;
to see how much /tmp space is available on each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files to the tmp directory if needed&lt;br /&gt;
cp $input_files /tmp&lt;br /&gt;
&lt;br /&gt;
# Make an 'out' directory to pass to the app if needed&lt;br /&gt;
mkdir /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Example of running an app and passing the tmp directory in/out&lt;br /&gt;
app -input_directory /tmp -output_directory /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Copy the 'out' directory back to the current working directory after the run&lt;br /&gt;
cp -rp /tmp/out .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAM disk===&lt;br /&gt;
&lt;br /&gt;
If you need ultrafast access to files, you can use a RAM disk, which is a file system set up in the &lt;br /&gt;
memory of the compute node you are running on.  The RAM disk is limited to the requested memory on that node, so you should account for this usage when you request &lt;br /&gt;
memory for your job. Below is an example of how to use the RAM disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files over if necessary&lt;br /&gt;
cp $any_input_files /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Run the application, possibly giving it the path to the RAM disk to use for output files&lt;br /&gt;
app -output_directory /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Copy files from the RAM disk to the current working directory and clean it up&lt;br /&gt;
cp /dev/shm/* .&lt;br /&gt;
rm -rf /dev/shm/*&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When you leave KSU===&lt;br /&gt;
&lt;br /&gt;
If you are done with your account and leaving KSU, please clean up your directory, move any files&lt;br /&gt;
to your supervisor's account that need to be kept after you leave, and notify us so that we can disable your&lt;br /&gt;
account.  The easiest way to move your files to your supervisor's account is for them to set up&lt;br /&gt;
a subdirectory for you with the appropriate write permissions.  The example below shows moving &lt;br /&gt;
just a user's 'data' subdirectory to their supervisor.  The 'nohup' command is used so that the move will &lt;br /&gt;
continue even if the window you are doing the move from gets disconnected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Supervisor:&lt;br /&gt;
mkdir /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&lt;br /&gt;
# Student:&lt;br /&gt;
nohup mv /homes/$USER/data /bulk/$SUPERVISOR_USERNAME/$USER &amp;amp;&lt;br /&gt;
&lt;br /&gt;
# Once the move is complete, the Supervisor should limit the permissions for the directory again by removing the student's access:&lt;br /&gt;
chown $USER: -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==File Sharing==&lt;br /&gt;
&lt;br /&gt;
This section will cover methods of sharing files with other users within Beocat and on remote systems.&lt;br /&gt;
In the past, Beocat users have been allowed to keep their&lt;br /&gt;
/homes and /bulk directories open so that any other user could&lt;br /&gt;
access files.  In order to bring Beocat into alignment with&lt;br /&gt;
State of Kansas regulations and industry norms, all users must now have their /homes /bulk /scratch and /fastscratch directories&lt;br /&gt;
locked down from other users, but can still share files and directories within their group or with individual users&lt;br /&gt;
using group and individual ACLs (Access Control Lists) which will be explained below.&lt;br /&gt;
Beocat staff will be exempted from this&lt;br /&gt;
policy as we need to work freely with all users and will manage our&lt;br /&gt;
subdirectories to minimize access.&lt;br /&gt;
&lt;br /&gt;
===Securing your home directory with the setacls script===&lt;br /&gt;
&lt;br /&gt;
If you do not wish to share files or directories with other users, you do not need to do anything&lt;br /&gt;
as rwx access to others has already been removed.&lt;br /&gt;
If you want to share files or directories you can either use the '''setacls''' script or configure&lt;br /&gt;
the ACLs (Access Control Lists) manually.&lt;br /&gt;
&lt;br /&gt;
Running '''setacls -h''' will show how to use the script.&lt;br /&gt;
  &lt;br /&gt;
  Eos: setacls -h&lt;br /&gt;
  setacls [-r] [-w] [-g group] [-u user] -d /full/path/to/directory&lt;br /&gt;
  Execute pemission will always be applied, you may also choose r or w&lt;br /&gt;
  Must specify at least one group or user&lt;br /&gt;
  Must specify at least one directory, and it must be the full path&lt;br /&gt;
  Example: setacls -r -g ksu-cis-hpc -u mozes -d /homes/daveturner/shared_dir&lt;br /&gt;
&lt;br /&gt;
You can specify the permissions to be either -r for read or -w for write or you can specify both.&lt;br /&gt;
You can provide a priority group to share with, which is the same as the group used in a --partition=&lt;br /&gt;
statement in a job submission script.  You can also specify users.&lt;br /&gt;
You can specify a file or a directory to share.  If a directory is specified then all files in that&lt;br /&gt;
directory will also be shared, and all files created in the directory later will also be shared.&lt;br /&gt;
&lt;br /&gt;
The script will set everything up for you, telling you the commands it is executing along the way,&lt;br /&gt;
then show the resulting ACLs at the end with the '''getfacl''' command.&lt;br /&gt;
&lt;br /&gt;
====Manually configuring your ACLs====&lt;br /&gt;
&lt;br /&gt;
If you want to manually configure the ACLs you can use the directions below to do what the '''setacls''' &lt;br /&gt;
script would do for you.&lt;br /&gt;
You first need to provide the minimum execute access to your /homes&lt;br /&gt;
or /bulk directory before sharing individual subdirectories.  Setting the ACL to execute only will allow those &lt;br /&gt;
in your group to reach shared subdirectories, while withholding read access means they will not&lt;br /&gt;
be able to list the other files or subdirectories in your main directory.  Keep in mind that they can still access those&lt;br /&gt;
if they know the names, so you may want to lock them down individually.  Below is an example of how I would change my&lt;br /&gt;
/homes/daveturner directory to allow ksu-cis-hpc group execute access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:X /homes/daveturner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your research group owns any nodes on Beocat, then you have a group name that can be used to securely share&lt;br /&gt;
files with others within your group.  Below is an example of creating a directory called 'share_hpc', &lt;br /&gt;
then providing access to my ksu-cis-hpc group&lt;br /&gt;
(my group is ksu-cis-hpc so I submit jobs to --partition=ksu-cis-hpc.q).&lt;br /&gt;
Using -R applies these changes recursively to all files and directories in that subdirectory, while changing the defaults with the setfacl -d command ensures that files and directories created&lt;br /&gt;
later will receive these same ACLs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the 'share_hpc' directory.  If you also want&lt;br /&gt;
them to be able to write or modify files in that directory, change the ':rX' to ':rwX' instead, e.g. 'setfacl -d -m g:ksu-cis-hpc:rwX -R share_hpc'.&lt;br /&gt;
&lt;br /&gt;
If you want to know what groups you belong to, use the command below.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
groups&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If your group does not own any nodes, you can still request a group name and manage the participants yourself&lt;br /&gt;
by emailing us at beocat@cs.ksu.edu.&lt;br /&gt;
If you want to share a directory with only a few people you can manage your ACLs using individual usernames&lt;br /&gt;
instead of with a group.&lt;br /&gt;
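&lt;br /&gt;
For example, sharing a directory read-only with two individual users might look like the following (the usernames 'alice' and 'bob' are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_two_users&lt;br /&gt;
# Default ACLs so files created later inherit the same access&lt;br /&gt;
setfacl -d -m u:alice:rX -R share_two_users&lt;br /&gt;
setfacl -d -m u:bob:rX -R share_two_users&lt;br /&gt;
# Actual ACLs on the directory and anything already in it&lt;br /&gt;
setfacl -m u:alice:rX -R share_two_users&lt;br /&gt;
setfacl -m u:bob:rX -R share_two_users&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Remember that those users also need execute access to your top-level /homes or /bulk directory, as described above.&lt;br /&gt;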
&lt;br /&gt;
You can use the '''getfacl''' command to see which groups and users have access to a given directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
getfacl share_hpc&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ACLs give you great flexibility in controlling file access at the&lt;br /&gt;
group level.  Below is a more advanced example where I set up a directory to be shared with&lt;br /&gt;
my ksu-cis-hpc group, Dan's ksu-cis-dan group, and an individual user 'mozes' who I also want&lt;br /&gt;
to have write access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc_dan_mozes&lt;br /&gt;
# acls are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
getfacl share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc_dan_mozes&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  user:mozes:rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  group:ksu-cis-dan:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:user:mozes:rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:group:ksu-cis-dan:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Openly sharing files on the web===&lt;br /&gt;
&lt;br /&gt;
If you create a 'public_html' directory in your home directory, then any files put there will be shared &lt;br /&gt;
openly on the web.  There is no way to restrict who has access to those files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
cd&lt;br /&gt;
mkdir public_html&lt;br /&gt;
# Opt-in to letting the webserver access your home directory:&lt;br /&gt;
setfacl -m g:public_html:x ~/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then access the data from a web browser using the URL:&lt;br /&gt;
&lt;br /&gt;
http://people.beocat.ksu.edu/~your_user_name&lt;br /&gt;
&lt;br /&gt;
This will show a list of the files you have in your public_html subdirectory.&lt;br /&gt;
&lt;br /&gt;
===Globus===&lt;br /&gt;
&lt;br /&gt;
We have a page here dedicated to [[Globus]]&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of Slurm's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to sbatch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  --array=n[-m[:s]]&lt;br /&gt;
     Submits a so called Array Job, i.e. an array of identical tasks being differentiated only by an index number and being treated by Slurm&lt;br /&gt;
     almost like a series of jobs. The option argument to --array specifies the number of array job tasks and the index number which will be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SLURM_ARRAY_TASK_ID. The option&lt;br /&gt;
     arguments n, and m will be available through the environment variables SLURM_ARRAY_TASK_MIN and SLURM_ARRAY_TASK_MAX.&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or a range with a step size.&lt;br /&gt;
     Hence, the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SLURM_ARRAY_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array jobs are commonly used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks follow a slightly different naming convention (which can be controlled in the same way as mentioned above).&lt;br /&gt;
 &lt;br /&gt;
     slurm-%A_%a.out&lt;br /&gt;
&lt;br /&gt;
     %A is the SLURM_ARRAY_JOB_ID, and %a is the SLURM_ARRAY_TASK_ID&lt;br /&gt;
&lt;br /&gt;
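For example, here is a minimal array script that spells out this default output pattern explicitly (the echo line is just a stand-in for real work):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-3&lt;br /&gt;
#SBATCH --output=slurm-%A_%a.out&lt;br /&gt;
# Task 2 of job 1234 writes its output to slurm-1234_2.out&lt;br /&gt;
echo &amp;quot;I am array task $SLURM_ARRAY_TASK_ID&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;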
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses, one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1, that I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable, and submit each script. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=50-200:50&lt;br /&gt;
RUNSIZE=$SLURM_ARRAY_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and Slurm understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
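&lt;br /&gt;
Before submitting, you can sanity-check what task IDs a step range will generate, since they match the output of seq:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# The task IDs generated by --array=50-200:50&lt;br /&gt;
seq 50 50 200&lt;br /&gt;
# prints 50, 100, 150, 200, one per line&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;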
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to take each line of my dataset and generate a new submit script and submit the job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     sbatch ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as sbatch has to verify each job as it is submitted. This can be done easily with array jobs, as long as you know the number of lines in the dataset. That number can be obtained with '''wc -l dataset.txt'''; in this case let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses command substitution via backticks, and has the sed command print only line number $SLURM_ARRAY_TASK_ID of the file dataset.txt.&lt;br /&gt;
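&lt;br /&gt;
You can try the line-selection step on its own, outside of Slurm, with a small throwaway file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Build a three-line stand-in dataset&lt;br /&gt;
printf 'alpha\nbeta\ngamma\n' &amp;gt; tiny_dataset.txt&lt;br /&gt;
&lt;br /&gt;
# Simulate what array task 2 would extract&lt;br /&gt;
SLURM_ARRAY_TASK_ID=2&lt;br /&gt;
sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; tiny_dataset.txt   # prints: beta&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;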
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so sbatch doesn't have to verify as many.&lt;br /&gt;
&lt;br /&gt;
To give you an idea of the time saved: submitting 1 job takes 1-2 seconds. By extension, submitting 5000 jobs takes 5,000-10,000 seconds, or roughly 1.5-3 hours.&lt;br /&gt;
&lt;br /&gt;
== Checkpoint/Restart using DMTCP ==&lt;br /&gt;
&lt;br /&gt;
DMTCP is Distributed Multi-Threaded CheckPoint software that will checkpoint your application without modification, and&lt;br /&gt;
can be set up to automatically restart your job from the last checkpoint if for example the node you are running on fails.  &lt;br /&gt;
This has been tested successfully&lt;br /&gt;
on Beocat for some scalar and OpenMP codes, but has failed on all MPI tests so far.  We would like to encourage users to&lt;br /&gt;
try DMTCP out if their non-MPI jobs run longer than 24 hours.  If you want to try this, please contact us first since we are still&lt;br /&gt;
experimenting with DMTCP.&lt;br /&gt;
&lt;br /&gt;
The sample job submission script below shows how dmtcp_launch is used to start the application, then dmtcp_restart is used to start from a checkpoint if the job has failed and been rescheduled.&lt;br /&gt;
If you are putting this in an array script, then add the Slurm array task ID to the end of the checkpoint directory name&lt;br /&gt;
like &amp;lt;B&amp;gt;ckptdir=ckpt-$SLURM_ARRAY_TASK_ID&amp;lt;/B&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --mem=50G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS/2016.4-foss-2017beocatb-hybrid&lt;br /&gt;
  module load DMTCP&lt;br /&gt;
  module list&lt;br /&gt;
  &lt;br /&gt;
  ckptdir=ckpt&lt;br /&gt;
  mkdir -p $ckptdir&lt;br /&gt;
  export DMTCP_CHECKPOINT_DIR=$ckptdir&lt;br /&gt;
  &lt;br /&gt;
  if ! ls -1 $ckptdir | grep -c dmtcp_restart_script &amp;gt; /dev/null&lt;br /&gt;
  then&lt;br /&gt;
     echo &amp;quot;Using dmtcp_launch to start the app the first time&amp;quot;&lt;br /&gt;
     dmtcp_launch --no-coordinator mpirun -np 1 -x OMP_NUM_THREADS=4 gmx_mpi mdrun -nsteps 50000 -ntomp 4 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  else&lt;br /&gt;
     echo &amp;quot;Using dmtcp_restart from $ckptdir to continue from a checkpoint&amp;quot;&lt;br /&gt;
     dmtcp_restart $ckptdir/*.dmtcp&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
You will need to run several tests to verify that DMTCP is working properly with your application.&lt;br /&gt;
First, run a short test without DMTCP and another with DMTCP with the checkpoint interval set to 5 minutes&lt;br /&gt;
by adding the line &amp;lt;B&amp;gt;export DMTCP_CHECKPOINT_INTERVAL=300&amp;lt;/B&amp;gt; to your script.  Then use &amp;lt;B&amp;gt;kstat -d 1&amp;lt;/B&amp;gt; to&lt;br /&gt;
check that the memory in both runs is close to the same.  Also use this information to calculate the time &lt;br /&gt;
that each checkpoint takes.  In most cases I've seen times less than a minute for checkpointing that will normally&lt;br /&gt;
be done once each hour.  If your application is taking more time, let us know.  Sometimes this can be sped up&lt;br /&gt;
by simply turning off compression by adding the line &amp;lt;B&amp;gt;export DMTCP_GZIP=0&amp;lt;/B&amp;gt;.  Make sure to remove the&lt;br /&gt;
line where you set the checkpoint interval to 300 seconds so that the default time of once per hour will be used.&lt;br /&gt;
&lt;br /&gt;
After verifying that your code completes using DMTCP and does not take significantly more time or memory, you&lt;br /&gt;
will need to start a run then &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; it after the first checkpoint, then resubmit the same script to make &lt;br /&gt;
sure that it restarts and runs to completion.  If you are working with an array job script, the last step is to try a few&lt;br /&gt;
array tasks at once to make sure there is no conflict between the jobs.&lt;br /&gt;
&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'srun'. srun uses the exact same command-line arguments as sbatch, but you need to add the following arguments at the end: &amp;lt;tt&amp;gt;--pty bash&amp;lt;/tt&amp;gt;. If no node is available with your resource requirements, srun will tell you something like the following:&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
 srun: Force Terminated job 217&lt;br /&gt;
 srun: error: CPU count per node can not be satisfied&lt;br /&gt;
 srun: error: Unable to allocate resources: Requested node configuration is not available&lt;br /&gt;
Note that, like sbatch, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
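&lt;br /&gt;
For example, to request an interactive shell with specific resources (the values shown are just illustrative), the usual sbatch-style arguments go before '''--pty bash''':&lt;br /&gt;
&lt;br /&gt;
 srun --nodes=1 --ntasks-per-node=2 --mem=2G --time=1:00:00 --pty bash&lt;br /&gt;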
&lt;br /&gt;
== Connecting to an existing job ==&lt;br /&gt;
You can connect to an existing job using &amp;lt;B&amp;gt;srun&amp;lt;/B&amp;gt; in the same way that the &amp;lt;B&amp;gt;MonitorNode&amp;lt;/B&amp;gt; command&lt;br /&gt;
allowed us to in the old cluster.  This is essentially like using ssh to get into the node where your job is running which&lt;br /&gt;
can be very useful in allowing you to look at files in /tmp/job# or in running &amp;lt;B&amp;gt;htop&amp;lt;/B&amp;gt; to view the &lt;br /&gt;
activity level for your job.&lt;br /&gt;
&lt;br /&gt;
 srun --jobid=# --pty bash                        where '#' is the job ID number&lt;br /&gt;
&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not support modifying job parameters once the job has been submitted. It can be done, but there are numerous catches, and all of the variations can be a bit problematic; it is normally easier to simply delete the job (using '''scancel ''jobid''''') and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
&lt;br /&gt;
As it is unsupported, this is an exercise left to the reader. A starting point is &amp;lt;tt&amp;gt;man scontrol&amp;lt;/tt&amp;gt;.&lt;br /&gt;
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;--gres=killable:1&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
&lt;br /&gt;
''Note: This is a submit-time only request, it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;scancel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
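&lt;br /&gt;
A minimal sketch of a killable job script ('my_app' is a hypothetical application and the resource values are just examples):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --time=6:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
# Allow this job onto owned nodes; it may be killed and re-queued at any time&lt;br /&gt;
#SBATCH --gres=killable:1&lt;br /&gt;
./my_app&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Since killable jobs can be restarted from scratch on another node, applications that write checkpoints (see the DMTCP section above) are a particularly good fit.&lt;br /&gt;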
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
Some users are members of projects that have contributed to Beocat. When a group has contributed nodes, it gets access to a &amp;quot;partition&amp;quot; giving its members priority on those nodes.&lt;br /&gt;
&lt;br /&gt;
In most situations, the scheduler will automatically add those priority partitions to the jobs as submitted. You should not need to include a partition list in your job submission.&lt;br /&gt;
&lt;br /&gt;
There are currently just a few exceptions that we will not automatically add:&lt;br /&gt;
* ksu-chem-mri.q&lt;br /&gt;
* ksu-gen-gpu.q&lt;br /&gt;
* ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
If you have access to any of the non-automatic partitions, and have need of the resources in that partition, you can then alter your &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines to include your new partition:&lt;br /&gt;
 #SBATCH --partition=ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
Otherwise, you shouldn't modify the partition line at all unless you really know what you're doing.&lt;br /&gt;
&lt;br /&gt;
== Graphical Applications ==&lt;br /&gt;
Some applications are graphical and need to have some graphical input/output. We currently accomplish this with X11 forwarding or [[OpenOnDemand]].&lt;br /&gt;
=== OpenOnDemand ===&lt;br /&gt;
[[OpenOnDemand]] is likely the easier and more performant way to run a graphical application on the cluster.&lt;br /&gt;
# visit [https://ondemand.beocat.ksu.edu/ ondemand] and login with your cluster credentials.&lt;br /&gt;
# Check the &amp;quot;Interactive Apps&amp;quot; dropdown. We may have a workflow ready for you. If not, choose the desktop.&lt;br /&gt;
# Select the resources you need&lt;br /&gt;
# Select launch&lt;br /&gt;
# A job is now submitted to the cluster and once the job is started you'll see a Connect button&lt;br /&gt;
# Use the app as needed. If using the desktop, start your graphical application.&lt;br /&gt;
&lt;br /&gt;
=== X11 Forwarding ===&lt;br /&gt;
==== Connecting with an X11 client ====&lt;br /&gt;
===== Windows =====&lt;br /&gt;
If you are running Windows, we recommend MobaXTerm as your file/ssh manager because it is one relatively simple tool that does everything. MobaXTerm also automatically connects with X11 forwarding enabled.&lt;br /&gt;
===== Linux/OSX =====&lt;br /&gt;
Both Linux and OSX can connect in an X11 forwarding mode. Linux will have all of the tools you need installed already, OSX will need [https://www.xquartz.org/ XQuartz] installed.&lt;br /&gt;
&lt;br /&gt;
Then you will need to change your 'ssh' command slightly:&lt;br /&gt;
&lt;br /&gt;
 ssh -Y eid@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
The '''-Y''' argument tells ssh to setup X11 forwarding.&lt;br /&gt;
==== Starting a Graphical job ====&lt;br /&gt;
All graphical jobs, by design, must be interactive, so we'll use the srun command. On a headnode, we run the following:&lt;br /&gt;
 # load an X11 enabled application&lt;br /&gt;
 module load Octave&lt;br /&gt;
 # start an X11 job, sbatch arguments are accepted for srun as well, 1 node, 1 hour, 1 gb of memory&lt;br /&gt;
 srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 octave --gui&lt;br /&gt;
&lt;br /&gt;
Because these jobs are interactive, they may not be able to run at all times, depending on how busy the scheduler is at any point in time. '''--pty --x11''' are required arguments setting up the job, and '''octave --gui''' is the command to run inside the job.&lt;br /&gt;
&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The sacct tool will read Slurm's accounting database and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== sacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
sacct -j 1122334455 -l&lt;br /&gt;
# if you don't know the job id, you can look at your jobs started since some day:&lt;br /&gt;
sacct -S 2017-01-01&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|218||218||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||12||00:00:00||FAILED||2:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=12,mem=1G,node=1||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.batch||218.batch||batch||||137940K||dwarf37||0||137940K||1576K||dwarf37||0||1576K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||1.36G||0||0||0||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.0||218.0||qqqqstat||||204212K||dwarf37||0||204212K||1420K||dwarf37||0||1420K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||196.52M||Unknown||Unknown||Unknown||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the columns showing Elapsed and State, you can see that they show 00:00:00 and FAILED respectively. This means that the job started and then promptly ended. This points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|220||220||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:01:27||TIMEOUT||0:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=1,mem=1G,node=1||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.batch||220.batch||batch||||370716K||dwarf37||0||370716K||7060K||dwarf37||0||7060K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:28||CANCELLED||0:15||1.23G||0||0||0||1Gn||0||0.16M||dwarf37||0||0.16M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.0||220.0||sleep||||204212K||dwarf37||0||107916K||1000K||dwarf37||0||620K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:27||CANCELLED||0:15||1.54G||Unknown||Unknown||Unknown||1Gn||0||0.05M||dwarf37||0||0.05M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, you can see some pointers to the issue: the job ran out of time (TIMEOUT) and then was killed (CANCELLED).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|221||221||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:00:00||CANCELLED by 0||0:0||||Unknown||Unknown||Unknown||1Mn||||||||||||||||||||||||cpu=1,mem=1M,node=1||cpu=1,mem=1M,node=1&lt;br /&gt;
|-&lt;br /&gt;
|221.batch||221.batch||batch||||137940K||dwarf37||0||137940K||1144K||dwarf37||0||1144K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:00:01||CANCELLED||0:15||2.62G||0||0||0||1Mn||0||0||dwarf37||65534||0||0||dwarf37||65534||0||||||||cpu=1,mem=1M,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
The column showing State says &amp;quot;CANCELLED by 0&amp;quot;. Looking at the AllocTRES column for the allocated resources, we see that only 1MB of memory was granted. Comparing that with the &amp;quot;MaxRSS&amp;quot; column shows that the job tried to use more memory than it was granted, so the job was &amp;quot;CANCELLED&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=940</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=940"/>
		<updated>2023-08-09T15:42:46Z</updated>

		<summary type="html">&lt;p&gt;Mozes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains, and versions of those toolchains, to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; fosscuda:    GNU Compiler Collection (GCC) based compiler toolchain based on FOSS with CUDA support.&lt;br /&gt;
; gmvapich2:    GNU Compiler Collection (GCC) based compiler toolchain, including MVAPICH2 for MPI support. '''DEPRECATED'''&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; goolfc:    GCC based compiler toolchain __with CUDA support__, and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. '''DEPRECATED'''&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Intel recently made this toolchain free; note that we have less experience with Intel MPI than with OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with the 'module list' command:&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore, which depends on GCC. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
With software we provide, the toolchain used to compile is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, inconsistent things may happen.&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide many versions. You are most likely better off loading a toolchain or application directly to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'.&lt;br /&gt;
&lt;br /&gt;
The first step to run an MPI application is to load one of the compiler toolchains that include OpenMPI.  You normally will just need to load the default version as below.  If your code needs access to nVidia GPUs, you'll need a CUDA-enabled toolchain such as fosscuda above.  Some codes are also picky about which versions of the underlying GNU or Intel compilers are used.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but those may depend on which compiler toolchain you choose.  Some are based on the Intel compilers, so you'd need to use optimizations for the underlying icc or ifort compilers they call, and some are GNU based, so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running Gromacs which is an MPI based code to simulate large numbers of atoms classically.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the Gromacs modules and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete Gromacs simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all cores requested, which in this case is 4 cores on a single node.  You will often just need to guess at the memory size for your code, then check on the memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts, then manually load the modules needed, to ensure each run is using the modules it needs.  If you don't do this, a submitted job script will simply use the modules you currently have loaded, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use significant amounts of time.  This way I can track the amount of time used in each section of the job script.  This can prove very useful if your job script copies large data files around at the start, for example, allowing you to see how much time was used for each stage of the job if it runs longer than expected.&lt;br /&gt;
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; where the 7 # signs are the 7 digit job ID number.  If you need to cancel your job, use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the 7 digit job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a slurm-#.out file where '#' is the 7 digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java/'&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://virtualenv.pypa.io/en/latest/user_guide.html virtualenv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command Python is loaded.  Python will not be loaded after you log off and log on again, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that &amp;lt;code&amp;gt;virtualenv --help&amp;lt;/code&amp;gt; has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
virtualenv --system-site-packages test&lt;br /&gt;
# or you could use 'virtualenv test'&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# if you don't use '--system-site-packages' then the virtual environment is completely isolated from our other provided packages and everything it needs it will have to build and install within itself.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command your virtual environment is activated.  The environment will not be active after you log off and log on again, so you must rerun this command every time you log on.)&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment test&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
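The PYTHONDONTWRITEBYTECODE=1 line above stops Python from writing .pyc bytecode caches next to your source files, which helps keep shared filesystems tidy. A small standalone sketch of the effect (assuming python3 is on your PATH; the module name is made up):&lt;br /&gt;

```shell
# Demo: with PYTHONDONTWRITEBYTECODE=1 set, importing a module
# does not create a __pycache__ directory next to it.
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo 'x = 1' > mymod.py
PYTHONDONTWRITEBYTECODE=1 python3 -c 'import mymod'
if [ -d __pycache__ ]; then echo "bytecode written"; else echo "no bytecode"; fi
```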
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
Check the available versions and load one that uses the Python version you would like:&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow provided by pip is often completely broken on any system that is not running a recent version of Ubuntu. Beocat (and most HPC systems) does not use Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other python libraries that we do not have installed, you should use [https://virtualenv.pypa.io/en/stable/userguide/ virtualenv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the Python module, as loading TensorFlow will load the correct version of the Python module behind the scenes. The singular change you need to make is to use the '--system-site-packages' option when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
virtualenv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default. Normally, it is meant to be used in an interactive manner. Interactive codes can be limiting and/or problematic when used in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large-scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core for 24 hours and if available will drop&lt;br /&gt;
you into the bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample python based Spark code you can try out that came from the &lt;br /&gt;
exercises and homework from the PSC Spark workshop.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and load the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package &lt;br /&gt;
before you run the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
virtualenv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Spark module (which provides Python),&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark,&lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark and Python (version 3 here)&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt;.&lt;br /&gt;
You will need to do this manually as in the sample python code when you want&lt;br /&gt;
to submit jobs through the Slurm queue.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl is tracking the stable releases of perl. Unfortunately there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or a newer version, you can load one with the module command. To see what versions of perl we provide, you can use 'module avail Perl/'.&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some Perl modules fail to realize they shouldn't be installed globally. Usually, you'll notice this when they try to run something with 'sudo'. Unfortunately, we do not grant sudo access to anyone other than Beocat system administrators. This can usually be worked around by putting the following at the bottom of your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file. Once it is in place, log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;'; instead, you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] that calls perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now lets do some actual work. &lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
'module avail Octave/'&lt;br /&gt;
&lt;br /&gt;
The command above lists the available 64-bit builds of Octave; load one with 'module load Octave'.  Octave can then be used&lt;br /&gt;
to work with MatLab codes on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler.  Octave is designed to run MatLab code, but it has limitations and does not support&lt;br /&gt;
everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Because we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, you are expected to develop your MatLab code&lt;br /&gt;
with Octave, or elsewhere on a laptop or departmental server.  Once you're ready to do large runs,&lt;br /&gt;
move your code to Beocat, compile the MatLab code into an executable, and submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MatLab compiler, load the MATLAB module to compile code and&lt;br /&gt;
load the mcr module to run the resulting MatLab executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks out the mcc compiler for a minimum of 30 minutes, so if another user is compiling code,&lt;br /&gt;
you may unfortunately need to wait up to 30 minutes to compile your own code.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files (compiled C and C++ code with MatLab bindings), you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the MatLab path, as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and document the steps needed to compile your MatLab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab, so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere (or using Octave on Beocat), then compile the MatLab code into an executable, which you can run without&lt;br /&gt;
limits on Beocat.&lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
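As a minimal sketch of the usual home-directory install pattern for an autotools-style package (the &amp;lt;tt&amp;gt;~/local&amp;lt;/tt&amp;gt; prefix here is an illustrative choice, not a Beocat convention):&lt;br /&gt;

```shell
# The key step is pointing the install prefix at a directory you own
# instead of a system path (shown as comments since the source tree
# and package are hypothetical):
#   ./configure --prefix=$HOME/local
#   make
#   make install
# Then make that prefix visible to your shell:
mkdir -p "$HOME/local/bin"
export PATH="$HOME/local/bin:$PATH"
export LD_LIBRARY_PATH="$HOME/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$PATH" | grep -q "$HOME/local/bin" && echo "prefix on PATH"
```

Adding the export lines to your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; makes the prefix available in future sessions.&lt;br /&gt;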
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, when loaded, will stay loaded for the duration of your session or until they are unloaded.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'', you have likely tried to load two pieces of software that have not been tested together. There may be serious issues with using either piece of software while you're in this state, such as missing libraries or non-functional applications. If you encounter issues, unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Often associated with Docker or Kubernetes, containers allow you to package an entire software runtime platform and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.2/index.html Apptainer] is a container runtime designed for HPC environments. It can convert Docker containers to its own format and can be used within a job on Beocat. Containers are a very broad topic, so we point you to the upstream documentation, which is much more likely to have up-to-date, functional instructions to help you utilize containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=939</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=939"/>
		<updated>2023-08-09T15:20:21Z</updated>

		<summary type="html">&lt;p&gt;Mozes: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains and toolchain versions to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; fosscuda:    GNU Compiler Collection (GCC) based compiler toolchain based on FOSS with CUDA support.&lt;br /&gt;
; gmvapich2:    GNU Compiler Collection (GCC) based compiler toolchain, including MVAPICH2 for MPI support. '''DEPRECATED'''&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; goolfc:    GCC based compiler toolchain '''with CUDA support''', and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK. '''DEPRECATED'''&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Intel recently made this toolchain free; we have less experience with Intel MPI than with OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with 'module list':&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore, which depends on GCC. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
With software we provide, the toolchain used to compile is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, you may see inconsistent behavior, such as missing libraries or applications that fail to run.&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide many versions. You are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'&lt;br /&gt;
&lt;br /&gt;
The first step to running an MPI application is to load one of the compiler toolchains that include OpenMPI.  You will normally just need to load the default version as below.  If your code needs access to nVidia GPUs, you'll need the cuda-enabled toolchain (fosscuda) mentioned above.  Otherwise, some codes are picky about which versions of the underlying GNU or Intel compilers are needed.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but those may depend on which compiler toolchain you choose.  Some are based on the Intel compilers, so you'd need to use optimizations for the underlying icc or ifort compilers they call, and some are GNU based, so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running Gromacs, which is an MPI-based code for classically simulating large numbers of atoms.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the Gromacs modules and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete Gromacs simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all requested cores, which in this case is 4 cores on a single node.  You will often just need to guess at the memory size for your code; check the memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts and then manually load the modules needed, to ensure each run is using the modules it needs.  If you don't do this, a submitted job script will simply use the modules you currently have loaded, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use significant amounts of time.  This way I can track the amount of time used in each section of the job script.  This can prove very useful if your job script copies large data files around at the start, for example, allowing you to see how much time was used for each stage of the job if it runs longer than expected.&lt;br /&gt;
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; where the 7 # signs are the 7-digit job ID number.  If you need to cancel your job, use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the 7-digit job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a slurm-#######.out file, where '#######' is the 7-digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java/'&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://virtualenv.pypa.io/en/latest/user_guide.html virtualenv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command, Python is loaded. Loaded modules are not persistent: after you log off and log back on, Python will no longer be loaded, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that &amp;lt;code&amp;gt;virtualenv --help&amp;lt;/code&amp;gt; has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
virtualenv --system-site-packages test&lt;br /&gt;
# or you could use 'virtualenv test'&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# if you don't use '--system-site-packages' then the virtual environment is completely isolated from our other provided packages and everything it needs it will have to build and install within itself.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command, your virtual environment is activated. Activation is not persistent: after you log off and log back on, the virtual environment will no longer be active, so you must rerun this command every time you log on.)&lt;br /&gt;
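A quick way to confirm that activation worked is shown below. This is a sketch using a throwaway environment: `python3 -m venv` behaves like virtualenv for this check, and `/tmp/venv-check` is just an example path, not a Beocat convention.&lt;br /&gt;

```shell
# Create and activate a throwaway virtual environment,
# then confirm the shell is actually using it.
python3 -m venv /tmp/venv-check
source /tmp/venv-check/bin/activate
# When an environment is active, $VIRTUAL_ENV points at it and
# `command -v python` resolves to the python inside it.
echo "VIRTUAL_ENV=$VIRTUAL_ENV"
command -v python
```

If `command -v python` still points at a system or module path, the environment was not activated in the current shell.&lt;br /&gt;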
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment test&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
Check the available versions and load one built against the python version you want:&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
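As a sketch of how mpirun launches Python: it starts several copies of your script, and each copy learns its rank through MPI (with mpi4py, via MPI.COMM_WORLD.Get_rank()). The stdlib-only sketch below reads the rank from Open MPI's launcher environment variables instead (those variable names are an Open MPI-specific assumption), so it also runs as a single rank when started without mpirun:&lt;br /&gt;

```python
import os

# Under Open MPI's mpirun, every process copy receives its rank and the total
# process count via environment variables; mpi4py wraps the real MPI calls.
# Without mpirun, the defaults below make the script act as a single rank.
rank = int(os.environ.get("OMPI_COMM_WORLD_RANK", 0))
size = int(os.environ.get("OMPI_COMM_WORLD_SIZE", 1))
print(f"hello from rank {rank} of {size}")
```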
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow provided by pip is often completely broken on any system that is not running a recent version of Ubuntu. Beocat, like most HPC systems, does not run Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other python libraries that we do not have installed, you should use [https://virtualenv.pypa.io/en/stable/userguide/ virtualenv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the python module, as loading TensorFlow will load the correct python module behind the scenes. The only change you need to make is to use the '--system-site-packages' flag when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
virtualenv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default and is normally used interactively. Interactive code can be limiting and/or problematic in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is an engine for large-scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core for 24 hours and, if resources are available,&lt;br /&gt;
will drop you into a bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample python-based Spark code you can try out, taken from the&lt;br /&gt;
exercises and homework of the PSC Spark workshop.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and install the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package&lt;br /&gt;
before running for the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
virtualenv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Spark module,&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark,&lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark (which provides python) and activate the virtual environment&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your Spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt; for you.&lt;br /&gt;
When submitting jobs through the Slurm queue, you must create it manually,&lt;br /&gt;
as in the sample python code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl tracks the stable releases of perl. Unfortunately, there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or to try out a newer version, you can load one with the module command. To see what versions of perl we provide, use 'module avail Perl/'.&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some perl modules fail to realize they should not be installed globally. Usually, you'll notice this when they try to run something with 'sudo'. Unfortunately, we do not grant sudo access to anyone other than Beocat system administrators. This can usually be worked around by putting the following at the bottom of your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file. Once this is in place, log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now let's do the actual work.&lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
'module avail Octave/'&lt;br /&gt;
&lt;br /&gt;
The 64-bit version of Octave can be loaded using the command above. Octave can then be used&lt;br /&gt;
to work with MatLab codes on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler. Octave is designed to run MatLab code, but it has limitations and does not support&lt;br /&gt;
everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we have only a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, you are expected to develop your MatLab code&lt;br /&gt;
with Octave, or elsewhere on a laptop or departmental server. Once you're ready to do large runs, you&lt;br /&gt;
move your code to Beocat, compile the MatLab code into an executable, and submit as many jobs as&lt;br /&gt;
you want to the scheduler. To compile code, load the MATLAB module; to run the resulting MatLab&lt;br /&gt;
executable, load the mcr module.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks out the mcc compiler for a minimum of 30 minutes, so if another user compiles a code,&lt;br /&gt;
you unfortunately may need to wait up to 30 minutes to compile your own.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files - compiled C and C++ code with matlab bindings - you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the Matlab path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and help document the steps to compile your Matlab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab, so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere or using Octave on Beocat; you can then compile the MatLab code into an executable and run it without&lt;br /&gt;
limits on Beocat.&lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, once loaded, stay loaded for the duration of your session or until they are unloaded.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'', you have likely tried to load two pieces of software that have not been tested together. There may be serious issues with using either piece of software while you're in this state: missing libraries, non-functional applications. If you encounter issues, unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Often associated with Docker or Kubernetes, containers allow you to package an entire software runtime and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.1/index.html Apptainer] is a container runtime that is designed for HPC environments. It can convert docker containers to its own format, and can be used within a job on Beocat. It is a very broad topic and we've made the decision to point you to the upstream documentation, as it is much more likely that they'll have up to date and functional instructions to help you utilize containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=938</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=938"/>
		<updated>2023-08-02T16:38:15Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Common issues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo Enabled, you will be asked to approve ''each'' connection through Duo's push system to your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device is not currently able to be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value can be a currently valid passcode from Duo, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will send the prompt to your smart device. &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have Duo call your phone number for approval.&lt;br /&gt;
&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In PuTTY to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under Environment variables, enter '''&amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;''' beside ''Variable'' and '''&amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;''' beside ''Value''. Click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; to save this change for PuTTY to remember this change.&lt;br /&gt;
&lt;br /&gt;
There doesn't seem to be a way to send an environment variable in MobaXTerm, so you won't be able to set DUO_PASSCODE to an actual valid temporary key. To get MobaXterm to push automatically, you can edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Common issues ====&lt;br /&gt;
; Duo Pushes sometimes don't show up in a timely manner. &lt;br /&gt;
: If you open the Duo MFA application on your smart device when you're expecting an authentication challenge, the prompts seem to show up faster.&lt;br /&gt;
; MobaXTerm has excessive prompts for managing files.&lt;br /&gt;
: MobaXTerm has a sidebar browser for managing your files. Unfortunately, that sidebar browser initiates another SSH connection for every file transfer, which triggers a Duo push you need to approve. MobaXTerm's dedicated SFTP Session doesn't have this issue: it initiates one connection, keeps it open, and re-uses it as needed, so you will have far fewer Duo approvals to respond to. If you choose to use the dedicated SFTP Session, you might consider disabling the sidebar file browser: &amp;quot;Advanced SSH settings&amp;quot; -&amp;gt; &amp;quot;SSH-browser type&amp;quot; -&amp;gt; &amp;quot;None&amp;quot;&lt;br /&gt;
; WinSCP has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, WinSCP will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable [https://winscp.net/eng/docs/ui_pref_resume reconnections during idle periods] if you do not wish to be locked out of all services at K-State that use Duo.&lt;br /&gt;
; FileZilla has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, FileZilla will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable timeouts and/or connection retries under the &amp;lt;tt&amp;gt;Edit -&amp;gt; Settings -&amp;gt; Connection&amp;lt;/tt&amp;gt; menu if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has excessive prompts for managing files.&lt;br /&gt;
: FileZilla opens one connection for browsing the system. Transferring files opens 1-4 additional connections when the transfers start. Once they finish, those connections disconnect. If you start additional transfers, new connections will be opened. Every one of those connections must be approved through Duo MFA on your smart device. You can adjust the number of connections that FileZilla opens for transfers if you like. &amp;lt;tt&amp;gt;File -&amp;gt; Site Manager -&amp;gt; (choose the site you're changing) -&amp;gt; Transfer Settings -&amp;gt; Limit number of simultaneous connections&amp;lt;/tt&amp;gt;.&lt;br /&gt;
: Another option is to disable processing of the transfer queue, add the items you want to transfer, and then re-enable the queue. That way FileZilla at least re-uses its connections until the queue is empty.&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
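&lt;br /&gt;
For example, a serial C program and an MPI C program would be compiled like this (the source and output file names here are placeholders for your own):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# serial build with GCC&lt;br /&gt;
gcc -O2 -o my_program my_program.c&lt;br /&gt;
&lt;br /&gt;
# parallel build with the MPI compiler wrapper&lt;br /&gt;
mpicc -O2 -o my_mpi_program my_mpi_program.c&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;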
&lt;br /&gt;
== Do Beocat jobs have a maximum time limit? ==&lt;br /&gt;
Yes. The scheduler will reject jobs longer than 28 days. We also reserve the right to hold a maintenance period every 14 days; unless it is an emergency, we will give at least 2 weeks notice before a maintenance period actually occurs. Jobs of 14 days or less that have already started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
That said, there is no guarantee that any piece of hardware, or the software that runs on it, will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we recommend writing your jobs so that they can be resumed if they get interrupted.&lt;br /&gt;
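&lt;br /&gt;
One minimal pattern for a resumable job script (the &amp;lt;tt&amp;gt;--resume&amp;lt;/tt&amp;gt; flag and file names here are hypothetical; your program must support restarting from its own checkpoint files):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# restart from the last checkpoint if one exists, otherwise start fresh&lt;br /&gt;
if [ -f checkpoint.dat ]; then&lt;br /&gt;
    ~/bin/my_program --resume checkpoint.dat&lt;br /&gt;
else&lt;br /&gt;
    ~/bin/my_program --input input.data&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;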
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes and /scratch || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk and /scratch || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /scratch || Shared || 3.1PB shared with /bulk and /homes || cephfs || Fast shared tmp space; files not used in 30 days are automatically culled&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || NFS on top of ZFS || Faster than /scratch, built with all NVMe disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || XFS || Good for I/O intensive jobs. Unique per job, culled when the job finishes.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory&lt;br /&gt;
is your home directory, and it is fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled to these &amp;quot;owned&amp;quot; machines by users outside of the group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of that machine submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it uses a checkpointing algorithm it may complete even faster. The trade-off is that some applications need a significant amount of runtime and cannot resume from partial output, meaning a job may get restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes that the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking the job killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint itself (save its state so it can resume a previous session), marking it killable could cause it to take longer to complete.&lt;br /&gt;
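&lt;br /&gt;
For example (the script name is a placeholder):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# allow the job onto owned nodes, accepting the risk of preemption&lt;br /&gt;
sbatch --gres=killable:1 myscript.sh&lt;br /&gt;
&lt;br /&gt;
# never mark the job killable, even if it is short&lt;br /&gt;
sbatch --gres=killable:0 myscript.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;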
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; when that happens the job fails and we have to clean it up manually, so we enforce this rule. The warning message informs you that the job script should have a line, in most cases #!/bin/tcsh or #!/bin/bash, indicating which program should be used to run the script. When the line is missing from a script, your default shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it does not point to an executable file, so the script will not be able to run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SlurmBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
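&lt;br /&gt;
For example, to give a job a 10-hour limit (the script name is a placeholder):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch --time=0-10:00:00 myscript.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;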
&lt;br /&gt;
== Help my error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to manually intervene. Doing so prevents other users from using the node(s) you are currently on, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would like to suppress this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Help! when I use sbatch I get an error about line breaks ==&lt;br /&gt;
Beocat is a Linux system. Operating systems use certain character sequences to indicate line breaks in their files. Linux and operating systems like it use '\n' as their line break character; Windows uses '\r\n'.&lt;br /&gt;
&lt;br /&gt;
If you're getting an error that looks like this:&lt;br /&gt;
 sbatch: error: Batch script contains DOS line breaks (\r\n)&lt;br /&gt;
 sbatch: error: instead of expected UNIX line breaks (\n).&lt;br /&gt;
&lt;br /&gt;
It means that your script is using Windows line endings. You can convert it with the &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; command&lt;br /&gt;
 dos2unix myscript.sh&lt;br /&gt;
&lt;br /&gt;
It would be beneficial to configure your editor to save files with UNIX line breaks in the future.&lt;br /&gt;
* Visual Studio Code -- &amp;quot;Text Editor&amp;quot; &amp;gt; &amp;quot;Files&amp;quot; &amp;gt; &amp;quot;Eol&amp;quot;&lt;br /&gt;
* Notepad++ -- &amp;quot;Edit&amp;quot; &amp;gt; &amp;quot;EOL Conversion&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed using our [[Group Management]] application.&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the directory. If you also want them to be able to write or modify files in that directory, then change the ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'.&lt;br /&gt;
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to use; for example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=937</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=937"/>
		<updated>2023-08-02T16:37:51Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Common issues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo Enabled, you will be asked to approve ''each'' connection through Duo's push system to your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device is not currently able to be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You would need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value could be the currently valid passcode from Duo, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; or it could be set to &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will push the prompt to your smart device. &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have duo call your phone number to approve.&lt;br /&gt;
&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In PuTTY to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under Environment variables, enter '''&amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;''' beside ''Variable'' and '''&amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;''' beside ''Value''. Click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; to save this change for PuTTY to remember this change.&lt;br /&gt;
&lt;br /&gt;
There doesn't seem to be a way to send an environment variable in MobaXTerm, so you won't be able to set DUO_PASSCODE to an actual valid temporary key. To get MobaXterm to push automatically, you can edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Common issues ====&lt;br /&gt;
; MobaXTerm has excessive prompts for managing files.&lt;br /&gt;
: MobaXTerm has a sidebar browser for managing your files. Unfortunately, that sidebar browser initiates another SSH connection for every file transfer, which triggers a Duo push that you need to approve. MobaXTerm's dedicated SFTP Session doesn't have this issue: it initiates a connection, keeps it open, and re-uses it as needed, so you will have far fewer Duo approvals to respond to. If you choose to use the dedicated SFTP Session, you might consider disabling the sidebar file browser: &amp;quot;Advanced SSH settings&amp;quot; -&amp;gt; &amp;quot;SSH-browser type&amp;quot; -&amp;gt; &amp;quot;None&amp;quot;&lt;br /&gt;
; Duo Pushes sometimes don't show up in a timely manner. &lt;br /&gt;
: If you open the Duo MFA application on your smart device when you're expecting an authentication challenge, the prompts seem to show up faster.&lt;br /&gt;
; WinSCP has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, WinSCP will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable [https://winscp.net/eng/docs/ui_pref_resume reconnections during idle periods] if you do not wish to be locked out of all services at K-State that use Duo.&lt;br /&gt;
; FileZilla has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, FileZilla will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable timeouts and/or connection retries under the &amp;lt;tt&amp;gt;Edit -&amp;gt; Settings -&amp;gt; Connection&amp;lt;/tt&amp;gt; menu if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has excessive prompts for managing files.&lt;br /&gt;
: FileZilla opens one connection for browsing the system. Transferring files opens 1-4 additional connections when the transfers start. Once they finish, those connections disconnect. If you start additional transfers, new connections will be opened. Every one of those connections must be approved through Duo MFA on your smart device. You can adjust the number of connections that FileZilla opens for transfers if you like. &amp;lt;tt&amp;gt;File -&amp;gt; Site Manager -&amp;gt; (choose the site you're changing) -&amp;gt; Transfer Settings -&amp;gt; Limit number of simultaneous connections&amp;lt;/tt&amp;gt;.&lt;br /&gt;
: Another option is to disable processing of the transfer queue, add the items you want to transfer, and then re-enable the queue. That way FileZilla at least re-uses its connections until the queue is empty.&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
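&lt;br /&gt;
For example, a serial C program and an MPI C program would be compiled like this (the source and output file names here are placeholders for your own):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# serial build with GCC&lt;br /&gt;
gcc -O2 -o my_program my_program.c&lt;br /&gt;
&lt;br /&gt;
# parallel build with the MPI compiler wrapper&lt;br /&gt;
mpicc -O2 -o my_mpi_program my_mpi_program.c&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;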
&lt;br /&gt;
== Do Beocat jobs have a maximum time limit? ==&lt;br /&gt;
Yes. The scheduler will reject jobs longer than 28 days. We also reserve the right to hold a maintenance period every 14 days; unless it is an emergency, we will give at least 2 weeks notice before a maintenance period actually occurs. Jobs of 14 days or less that have already started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
That said, there is no guarantee that any piece of hardware, or the software that runs on it, will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we recommend writing your jobs so that they can be resumed if they get interrupted.&lt;br /&gt;
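&lt;br /&gt;
One minimal pattern for a resumable job script (the &amp;lt;tt&amp;gt;--resume&amp;lt;/tt&amp;gt; flag and file names here are hypothetical; your program must support restarting from its own checkpoint files):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# restart from the last checkpoint if one exists, otherwise start fresh&lt;br /&gt;
if [ -f checkpoint.dat ]; then&lt;br /&gt;
    ~/bin/my_program --resume checkpoint.dat&lt;br /&gt;
else&lt;br /&gt;
    ~/bin/my_program --input input.data&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;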
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes and /scratch || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk and /scratch || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /scratch || Shared || 3.1PB shared with /bulk and /homes || cephfs || Fast shared tmp space; files not used in 30 days are automatically culled&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || NFS on top of ZFS || Faster than /scratch, built with all NVMe disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || XFS || Good for I/O intensive jobs. Unique per job, culled when the job finishes.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory&lt;br /&gt;
is your home directory, and it is fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled to these &amp;quot;owned&amp;quot; machines by users outside of the group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of that machine submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it uses a checkpointing algorithm it may complete even faster. The trade-off is that some applications need a significant amount of runtime and cannot resume from partial output, meaning a job may get restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes that the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking the job killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint itself (save its state so it can resume a previous session), marking it killable could cause it to take longer to complete.&lt;br /&gt;
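&lt;br /&gt;
For example (the script name is a placeholder):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# allow the job onto owned nodes, accepting the risk of preemption&lt;br /&gt;
sbatch --gres=killable:1 myscript.sh&lt;br /&gt;
&lt;br /&gt;
# never mark the job killable, even if it is short&lt;br /&gt;
sbatch --gres=killable:0 myscript.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;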
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; when that happens the job fails and we have to clean it up manually, so we enforce this rule. The warning message informs you that the job script should have a line, in most cases #!/bin/tcsh or #!/bin/bash, indicating which program should be used to run the script. When the line is missing from a script, your default shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it does not point to an executable file, so the script will not be able to run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SlurmBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
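The same requests can also be placed inside the script itself as #SBATCH directives; a sketch (the limits and the job command are illustrative placeholders):

```shell
#!/bin/bash
# Request 10 hours of runtime and 4 GB of memory per core (values are illustrative)
#SBATCH --time=0-10:00:00
#SBATCH --mem-per-cpu=4G
./my_program
```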
&lt;br /&gt;
== Help! My error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances, and depending on the job requirements, we '''may''' be able to manually intervene. Because this process prevents other users from using the node(s) you are currently on, such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would like to avoid this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Help! when I use sbatch I get an error about line breaks ==&lt;br /&gt;
Beocat is a Linux system. Operating systems use certain characters to indicate line breaks in their files. Linux and similar operating systems use '\n' as their line break character; Windows uses '\r\n'.&lt;br /&gt;
&lt;br /&gt;
If you're getting an error that looks like this:&lt;br /&gt;
 sbatch: error: Batch script contains DOS line breaks (\r\n)&lt;br /&gt;
 sbatch: error: instead of expected UNIX line breaks (\n).&lt;br /&gt;
&lt;br /&gt;
It means that your script is using Windows line endings. You can convert it with the &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; command&lt;br /&gt;
 dos2unix myscript.sh&lt;br /&gt;
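As a sketch of what the conversion does, you can create a file with Windows line endings and inspect it with standard tools (the file names are placeholders; tr -d '\r' is shown as a stand-in for what dos2unix does):

```shell
# Make a script with Windows (\r\n) line endings
printf 'echo hello\r\n' > winscript.sh

# od -c shows the stray \r before the \n
od -c winscript.sh

# Removing the \r characters is exactly what dos2unix does for you
cat winscript.sh | tr -d '\r' > unixscript.sh
od -c unixscript.sh
```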
&lt;br /&gt;
It would probably be beneficial to configure your editor to save files with UNIX line breaks in the future.&lt;br /&gt;
* Visual Studio Code -- “Text Editor” &amp;gt; “Files” &amp;gt; “Eol”&lt;br /&gt;
* Notepad++ -- &amp;quot;Edit&amp;quot; &amp;gt; &amp;quot;EOL Conversion&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed using our [[Group Management]] application.&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the shared directory. If you also want them to be able to write or modify files in that directory, change the ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'.&lt;br /&gt;
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to use: for example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Onedrive_Data_Transfer&amp;diff=936</id>
		<title>Onedrive Data Transfer</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Onedrive_Data_Transfer&amp;diff=936"/>
		<updated>2023-07-24T21:38:44Z</updated>

		<summary type="html">&lt;p&gt;Mozes: Remove the module info&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Using Rclone for File Transfer=&lt;br /&gt;
&lt;br /&gt;
Rclone is an open-source file transfer tool that makes transferring files between your local machine and various cloud resources, such as Box, Amazon S3, Microsoft OneDrive, and Google Cloud Storage, a simpler task. Guides on how to set up a variety of resources to transfer to and from can be found at [https://rclone.org/ rclone’s webpage].&lt;br /&gt;
&lt;br /&gt;
This tool can be used to transfer files between Beocat and outside cloud providers such as OneDrive, and may allow you to perform backups of critical data onto remote systems.&lt;br /&gt;
== Set up Rclone ==&lt;br /&gt;
&amp;lt;ol start=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;You must be able to access your KSU Office365 account before beginning this process. Contact your local campus IT support if you need help with initial account setup.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&amp;lt;ol start=&amp;quot;2&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Open a browser on your local machine and navigate to the OnDemand portal for the cluster of your choice. We use Beocat for this example: https://ondemand.beocat.ksu.edu. Select Desktop under Interactive Apps in the menu at the top of the page to get a virtual desktop on the cluster.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:ood_rclone_desktop_selection.png|select desktop]]&lt;br /&gt;
&lt;br /&gt;
Scroll down to the bottom of the next page, and click on the blue Launch button. When the resource is ready, click on the blue Launch Desktop button that appears on the next page.&lt;br /&gt;
&lt;br /&gt;
[[File:ood_rclone_desktop_launch.png|launch desktop]]&lt;br /&gt;
&lt;br /&gt;
On the virtual desktop, click on the Terminal Emulator icon at the bottom of the window to open up a command shell.&lt;br /&gt;
&lt;br /&gt;
[[File:ood_rclone_terminal_launch.png|launch terminal]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ol start=&amp;quot;3&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;We will need to start the basic configuration for OneDrive. To do this run rclone config:&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===Load the rclone config===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[mozes@gremlin01 ~]$ rclone config&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;ol start=&amp;quot;4&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;In a new configuration, you will see no remotes found. Enter n to make a new remote and give it a name you will recognize. In our example, we will use “myOneDrive”. Select Microsoft OneDrive by entering the corresponding number, in our case 27. Hit Enter for the client_id, client_secret, and Edit advanced config prompts. When you are prompted for auto config, select n. The terminal will stop at a result&amp;gt; prompt. Proceed to the next step.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===Configure OneDrive===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[mozes@gremlin01 ~]$ rclone config&lt;br /&gt;
No remotes found - make a new one&lt;br /&gt;
n) New remote&lt;br /&gt;
s) Set configuration password&lt;br /&gt;
q) Quit config&lt;br /&gt;
n/s/q&amp;gt; n&lt;br /&gt;
name&amp;gt; myOneDrive&lt;br /&gt;
Type of storage to configure.&lt;br /&gt;
Enter a string value. Press Enter for the default (&amp;quot;&amp;quot;).&lt;br /&gt;
Choose a number from below, or type in your own value&lt;br /&gt;
27 / Microsoft OneDrive&lt;br /&gt;
   \ &amp;quot;onedrive&amp;quot;&lt;br /&gt;
 Storage&amp;gt; 27&lt;br /&gt;
** See help for onedrive backend at: https://rclone.org/onedrive/ **&lt;br /&gt;
 Microsoft App Client Id&lt;br /&gt;
Leave blank normally.&lt;br /&gt;
Enter a string value. Press Enter for the default (&amp;quot;&amp;quot;).&lt;br /&gt;
client_id&amp;gt; &lt;br /&gt;
Microsoft App Client Secret&lt;br /&gt;
Leave blank normally.&lt;br /&gt;
Enter a string value. Press Enter for the default (&amp;quot;&amp;quot;).&lt;br /&gt;
client_secret&amp;gt; &lt;br /&gt;
Option region&lt;br /&gt;
Choose national cloud region for OneDrive.&lt;br /&gt;
Enter a string value. Press Enter for the default (&amp;quot;global&amp;quot;).&lt;br /&gt;
Choose a number from below, or type in your own value.&lt;br /&gt;
region&amp;gt; 1&lt;br /&gt;
Edit advanced config? (y/n)&lt;br /&gt;
y) Yes&lt;br /&gt;
n) No (default)&lt;br /&gt;
y/n&amp;gt; &lt;br /&gt;
Remote config&lt;br /&gt;
Use auto config?&lt;br /&gt;
 * Say Y if not sure&lt;br /&gt;
 * Say N if you are working on a remote or headless machine&lt;br /&gt;
y) Yes (default)&lt;br /&gt;
n) No&lt;br /&gt;
y/n&amp;gt; n&lt;br /&gt;
For this to work, you will need rclone available on a machine that has a web browser available.&lt;br /&gt;
Execute the following on your machine (same rclone version recommended) :&lt;br /&gt;
    rclone authorize &amp;quot;onedrive&amp;quot;&lt;br /&gt;
Then paste the result below:&lt;br /&gt;
result&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;ol start=&amp;quot;5&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Now open up another terminal window on the virtual desktop by clicking again on the Terminal Emulator icon at the bottom of the window.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
In the new shell that opens, run rclone authorize &amp;quot;onedrive&amp;quot; at the command prompt. You will be prompted to go to a 127.0.0.1 address in a web browser.&lt;br /&gt;
===Authorize OneDrive from the terminal===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[mozes@gremlin01 ~]$ rclone authorize &amp;quot;onedrive&amp;quot;&lt;br /&gt;
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth&lt;br /&gt;
Log in and authorize Rclone for access&lt;br /&gt;
Waiting for code...&lt;br /&gt;
Got code&lt;br /&gt;
Paste the following into your remote machine ---&amp;gt;&lt;br /&gt;
{&amp;quot;access_token&amp;quot;:&amp;quot;XXXX&amp;quot;,&amp;quot;token_type&amp;quot;:&amp;quot;bearer&amp;quot;,&amp;quot;refresh_token&amp;quot;:&amp;quot;XXXX&amp;quot;,&amp;quot;expiry&amp;quot;:&amp;quot;XXXX&amp;quot;}&lt;br /&gt;
&amp;lt;---End paste&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Right-click on the link and select open link from the menu. A browser window will open on the virtual desktop. On the Microsoft Office sign-in page that opens, enter your KSU e-mail address and click “Next”. You will be taken to your eID single sign-on page, where you can sign in using your KSU eID credentials. If login is successful, you should be redirected to a page that says “Success!”&lt;br /&gt;
&lt;br /&gt;
Return to the terminal window where you ran the authorize command. You should see a message instructing you to paste a line of code into your “remote machine” which is the first terminal you opened to run rclone config.&lt;br /&gt;
&lt;br /&gt;
[[File:ood_rclone_token.png|rclone token]]&lt;br /&gt;
&lt;br /&gt;
Copy the text by highlighting it, right-clicking, and selecting copy. Then, return to the first terminal with the waiting result prompt and paste the text by right-clicking and selecting paste, then press enter. Next, select option 1 for OneDrive Personal or Business and then 1 for OneDrive (business). Press y at the next two prompts to confirm and q to exit.&lt;br /&gt;
&lt;br /&gt;
===Authorize OnDemand on cluster===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
result&amp;gt; {&amp;quot;access_token&amp;quot;:&amp;quot;XXXX&amp;quot;,&amp;quot;token_type&amp;quot;:&amp;quot;bearer&amp;quot;,&amp;quot;refresh_token&amp;quot;:&amp;quot;XXXX&amp;quot;,&amp;quot;expiry&amp;quot;:&amp;quot;XXXX&amp;quot;}&lt;br /&gt;
Choose a number from below, or type in an existing value&lt;br /&gt;
 1 / OneDrive Personal or Business&lt;br /&gt;
   \ &amp;quot;onedrive&amp;quot;&lt;br /&gt;
 2 / Root Sharepoint site&lt;br /&gt;
   \ &amp;quot;sharepoint&amp;quot;&lt;br /&gt;
 3 / Type in driveID&lt;br /&gt;
   \ &amp;quot;driveid&amp;quot;&lt;br /&gt;
 4 / Type in SiteID&lt;br /&gt;
   \ &amp;quot;siteid&amp;quot;&lt;br /&gt;
 5 / Search a Sharepoint site&lt;br /&gt;
   \ &amp;quot;search&amp;quot;&lt;br /&gt;
Your choice&amp;gt; 1&lt;br /&gt;
Found 1 drives, please select the one you want to use:&lt;br /&gt;
1: OneDrive (business) id=b!laCd4ZJ54U-[...]&lt;br /&gt;
Chose drive to use:&amp;gt; 1&lt;br /&gt;
Found drive 'root' of type 'business', URL: https://ksuemailprod-my.sharepoint.com/personal/mozes_ksu_edu/Documents&lt;br /&gt;
Is that okay?&lt;br /&gt;
y) Yes (default)&lt;br /&gt;
n) No&lt;br /&gt;
y/n&amp;gt; y&lt;br /&gt;
--------------------&lt;br /&gt;
[myOneDrive]&lt;br /&gt;
type = onedrive&lt;br /&gt;
token = {&amp;quot;access_token&amp;quot;: ...}&lt;br /&gt;
drive_id = b!laCd4ZJ54U-[...]&lt;br /&gt;
drive_type = business&lt;br /&gt;
--------------------&lt;br /&gt;
y) Yes this is OK (default)&lt;br /&gt;
e) Edit this remote&lt;br /&gt;
d) Delete this remote&lt;br /&gt;
y/e/d&amp;gt; y&lt;br /&gt;
Current remotes:&lt;br /&gt;
Name                 Type&lt;br /&gt;
====                 ====&lt;br /&gt;
myOneDrive           onedrive&lt;br /&gt;
e) Edit existing remote&lt;br /&gt;
n) New remote&lt;br /&gt;
d) Delete remote&lt;br /&gt;
r) Rename remote&lt;br /&gt;
c) Copy remote&lt;br /&gt;
s) Set configuration password&lt;br /&gt;
q) Quit config&lt;br /&gt;
e/n/d/r/c/s/q&amp;gt; q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now test the connection by running the ls command. You should see a listing of your OneDrive files.&lt;br /&gt;
&lt;br /&gt;
=== List contents of OneDrive ===&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[mozes@gremlin01 ~]$ rclone ls myOneDrive:/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;ol start=&amp;quot;6&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;To upload or download files, use the rclone copy command. For example:&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
=== Transferring files ===&lt;br /&gt;
==== OnDemand ====&lt;br /&gt;
Once you have defined an endpoint, you can use that endpoint in the Files manager in OpenOnDemand.&lt;br /&gt;
&lt;br /&gt;
First you'd select the endpoint and files/folder you would like to copy or move. Then you would click the copy/move button near the top.&lt;br /&gt;
&lt;br /&gt;
[[File:ood_remote_copy_selection.png|rclone copy file selection]]&lt;br /&gt;
&lt;br /&gt;
Then you would navigate to the location where you would like to copy/move the files/folders, and click the copy or move button on the left.&lt;br /&gt;
&lt;br /&gt;
[[File:ood_remote_copy_action.png|rclone remote copy action]]&lt;br /&gt;
&lt;br /&gt;
This would trigger the file transfer to start. Once it is done, you would see the files available in the path you requested.&lt;br /&gt;
&lt;br /&gt;
[[File:ood_remote_copy_finished.png|rclone copy finished]]&lt;br /&gt;
==== Command Line ====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[mozes@gremlin01 ~]$ rclone copy myOneDrive:/SomeFile.txt ./&lt;br /&gt;
[mozes@gremlin01 ~]$ rclone copy ./SomeFile.txt myOneDrive:/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;ol start=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;To download directories, use the rclone copy command with directory names instead of file names. This copies the contents of the folder, so you need to specify a destination folder.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===== Download a directory from OneDrive =====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[mozes@gremlin01 ~]$ rclone copy myOneDrive:/my_beocat_dir ./my_beocat_dir&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To upload a directory named my_beocat_dir to OneDrive, use rclone copy.&lt;br /&gt;
&lt;br /&gt;
===== Upload a directory to OneDrive =====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[mozes@gremlin01 ~]$ rclone copy ./my_beocat_dir myOneDrive:/my_beocat_dir&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;ol start=&amp;quot;8&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Rclone also supports using sync to transfer files, similar to rsync. The syntax is similar to rclone copy. Sync only transfers files that have changed, as determined by name, checksum, or modification time, and it makes the destination match the source, which can delete files at the destination. The example below would sync the files of the local directory to the remote directory on OneDrive.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
===== Sync changed files =====&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[mozes@gremlin01 ~]$ rclone sync ./my_beocat_dir myOneDrive:/my_beocat_dir&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Potential issues ==&lt;br /&gt;
If you've set up an rclone remote previously and it no longer works, it may be that rclone's connection to your cloud storage has expired.&lt;br /&gt;
&lt;br /&gt;
You may need to follow the procedures above to get an OnDemand desktop session, then reauthorize your connection:&lt;br /&gt;
 rclone config reconnect $whatever_you_named_your_endpoint:&lt;br /&gt;
'''note the trailing :'''&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OpenOnDemand&amp;diff=935</id>
		<title>OpenOnDemand</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OpenOnDemand&amp;diff=935"/>
		<updated>2023-07-24T16:17:05Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* With virtualenv */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== OpenOnDemand ==&lt;br /&gt;
OpenOnDemand is a platform for running computational tasks on a cluster from a web browser. If those&lt;br /&gt;
tasks are interactive, it provides the ability to interact with them once the task has started its execution.&lt;br /&gt;
OpenOnDemand has an &amp;quot;App&amp;quot; based plugin system for adding new types of computational tasks and&lt;br /&gt;
interactivity.&lt;br /&gt;
&lt;br /&gt;
One of the greatest benefits of this system is remote access to large machines for computational tasks, without the need to use a command-line interface, which can be difficult to learn.&lt;br /&gt;
&lt;br /&gt;
Our installation is available at https://ondemand.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
=== File Management ===&lt;br /&gt;
File management can be accessed through the Files dropdown in the dashboard.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Files Dropdown.png|Files Dropdown]]&lt;br /&gt;
&lt;br /&gt;
Once you click on Home Directory, you can manage your files: upload, download, rename, edit, and view them.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Files Launch.png|The OpenOnDemand File management application]]&lt;br /&gt;
&lt;br /&gt;
If you're looking for a way to get your files into and out of OneDrive, Google Drive, or other cloud providers, you may be interested in taking a look at our documentation for [[Onedrive Data Transfer]].&lt;br /&gt;
&lt;br /&gt;
=== Job Management ===&lt;br /&gt;
A cluster isn't much of a cluster if it can't run jobs for you to look up later. OpenOnDemand has a robust job management application built in.&lt;br /&gt;
&lt;br /&gt;
It is accessible from the Jobs dropdown in the dashboard.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS DROPDOWN ACTIVE.png|Screenshot of the Jobs dropdown in the openondemand dashboard]]&lt;br /&gt;
==== View Active Jobs ====&lt;br /&gt;
You can view your active jobs and get more information about them from the Active Jobs option in the Jobs dropdown&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS ACTIVE.png|Active jobs app in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
==== Compose Jobs ====&lt;br /&gt;
You can create new jobs through the &amp;quot;Job Composer&amp;quot; in the jobs dropdown.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS COMPOSER NEW.png|Screenshot showing the ability to create new jobs within the ood job composer]]&lt;br /&gt;
&lt;br /&gt;
If you create a new job from a template, you're given a list of templates to use:&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS COMPOSER NEW FROM TEMPLATE.png|Screenshot of some example templates for jobs within openondemand]]&lt;br /&gt;
&lt;br /&gt;
If you choose the default template, you can run it as-is, or edit the job script to make it do what you would like.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JOBS COMPOSER SUBMIT.png|Screenshot showing the ability to submit or edit jobs within the Job composer in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
=== Interactive Applications ===&lt;br /&gt;
We have a number of interactive applications available through OpenOnDemand:&lt;br /&gt;
* Beocat Desktop&lt;br /&gt;
* [https://www.comsol.com/ COMSOL]&lt;br /&gt;
* [https://www.gnu.org/software/octave/ Octave]&lt;br /&gt;
* [https://www.ks.uiuc.edu/Research/vmd/ VMD]&lt;br /&gt;
* [https://www.wolfram.com/mathematica/ Mathematica] Please note, this is from a site license limited to KSU students, faculty and staff.&lt;br /&gt;
* [https://afni.nimh.nih.gov/ AFNI]&lt;br /&gt;
* [https://coder.com/ CodeServer] is a cloud native version of VS Code that runs on the compute nodes. Useful, since VS Code's remote connections cannot be used with Beocat. Other names for VS Code may be VSCode or Visual Studio Code.&lt;br /&gt;
* [https://jupyter.org/ Jupyter]&lt;br /&gt;
* [https://www.rstudio.com/products/rstudio-server/ RStudio]&lt;br /&gt;
==== RStudio ====&lt;br /&gt;
RStudio is one of the interactive applications that we've enabled for use within OpenOnDemand&lt;br /&gt;
&lt;br /&gt;
You launch interactive apps through the &amp;quot;Interactive Apps&amp;quot; dropdown.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Interactive Apps Dropdown.png|Interactive apps dropdown in the dashboard]]&lt;br /&gt;
&lt;br /&gt;
Once you click on RStudio, you'll be brought to a page allowing you to specify requirements for your RStudio run, e.g. memory, cores, and runtime.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Interactive RStudio Launch.png|Screenshot showing the options for submitting an RStudio job in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
Once the job is submitted, the scheduler will take it and run it when space is available. Once the job is running, the &amp;quot;My Interactive Sessions&amp;quot; page will look like this:&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Interactive RStudio Connection.png|Screenshot showing the ability to connect to an RStudio job in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
From there, you can connect to RStudio and it will bring you to a familiar interface.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood RStudio.png|Screenshot showing RStudio through OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
==== Jupyter ====&lt;br /&gt;
Like RStudio above, click on Interactive Apps and then go to Jupyter. From there, you'll have a form that allows you to specify requirements for your Jupyter run.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JUPYTER LAUNCH.png|Screenshot of options to launch Jupyter]]&lt;br /&gt;
&lt;br /&gt;
Once the job is launched it will take you to a page where you can connect to your running Jupyter service&lt;br /&gt;
&lt;br /&gt;
[[File:Ood INTERACTIVE APPS JUPYTER.png|Screenshot of connection option for jupyter]]&lt;br /&gt;
&lt;br /&gt;
It will then take you to the interface you chose, below is the JupyterLab interface:&lt;br /&gt;
&lt;br /&gt;
[[File:OOD JUPYTER LAB.png|Screenshot of JupyterLab through OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
Jupyter Kernels currently supported:&lt;br /&gt;
* Python 2&lt;br /&gt;
* Python 3&lt;br /&gt;
* R&lt;br /&gt;
* Octave&lt;br /&gt;
* Sage&lt;br /&gt;
&lt;br /&gt;
Julia support will come, but each user will need to set it up individually. There is currently a significant bug involving Julia, CentOS/RHEL, and our shared filesystem (Ceph).&lt;br /&gt;
&lt;br /&gt;
===== Extra Python libraries =====&lt;br /&gt;
====== Without extra setup ======&lt;br /&gt;
You may need to install extra Python libraries to use with your Jupyter Python kernels. For instance, this is how you'd install tobler:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
!pip install --user tobler&lt;br /&gt;
&lt;br /&gt;
# Sometimes Jupyter notebook needs to then be told how to find the libraries you've installed in that manner.&lt;br /&gt;
# Your username should be put in place of the &amp;lt;PUT_YOUR_USERNAME_HERE&amp;gt; text.&lt;br /&gt;
# this will need to change if you are not using a 3.7 kernel&lt;br /&gt;
import sys&lt;br /&gt;
sys.path.append(&amp;quot;/homes/&amp;lt;PUT_YOUR_USERNAME_HERE&amp;gt;/.local/lib/python3.7/site-packages&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====== With virtualenv ======&lt;br /&gt;
Sometimes it is useful to have separation between your various projects, for instance being able to use multiple versions of a python library in different projects.&lt;br /&gt;
&lt;br /&gt;
You can setup a virtual environment (or many) for use with our Jupyter environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# First we'll activate the ability to use the ondemand modules:&lt;br /&gt;
module use /opt/beocat/ondemand_modules&lt;br /&gt;
&lt;br /&gt;
# Then we can list the available jupyter_python modules (so we can pick one to create the virtualenv with)&lt;br /&gt;
module avail jupyter_python&lt;br /&gt;
&lt;br /&gt;
# Load the version you would like (ideally a jupyter_python module; otherwise the virtualenv itself will need a decent number of jupyter libraries installed into it)&lt;br /&gt;
module load jupyter_python/3.8.6-TensorFlow-2.4.1&lt;br /&gt;
&lt;br /&gt;
# If you'd like to see what libraries this actually loaded, you can check it with the following:&lt;br /&gt;
module list&lt;br /&gt;
&lt;br /&gt;
# We'll create a virtual environment, activate it, and install any libraries you need.&lt;br /&gt;
virtualenv --system-site-packages /homes/mozes/virtualenvs/testing_ondemand_jupyter&lt;br /&gt;
. /homes/mozes/virtualenvs/testing_ondemand_jupyter/bin/activate&lt;br /&gt;
pip install # insert needed libraries here&lt;br /&gt;
&lt;br /&gt;
# here we create a directory to hold the configuration files for telling our Jupyter environment about your virtual environment&lt;br /&gt;
mkdir -p ~/ondemand/jupyter_kernel_configs&lt;br /&gt;
&lt;br /&gt;
# Now we need to create a configuration file to instruct Jupyter to find this virtual environment&lt;br /&gt;
nano ~/ondemand/jupyter_kernel_configs/my_environment_name.sh&lt;br /&gt;
&lt;br /&gt;
# in that file should be lines like the following:&lt;br /&gt;
NAME=&amp;quot;testing_ondemand_virtualenv&amp;quot;&lt;br /&gt;
VIRTUAL_ENV=&amp;quot;/homes/mozes/virtualenvs/testing_ondemand_jupyter&amp;quot;&lt;br /&gt;
MODULES=&amp;quot;jupyter_python/3.8.6-TensorFlow-2.4.1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# of course, you should provide your own name and path to the virtual environment. Please don't put spaces in the name.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Once you start a new Jupyter session, it should list a new kernel option that uses your virtualenv.&lt;br /&gt;
&lt;br /&gt;
====== With conda ======&lt;br /&gt;
Conda environments should automatically show up if ''conda'' is in the PATH.&lt;br /&gt;
&lt;br /&gt;
==== Beocat Desktop ====&lt;br /&gt;
Sometimes, you just need a Desktop somewhere to run your graphical applications. This can be done through the Beocat Desktop option in the Interactive Apps dropdown on the dashboard.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD DESKTOP LAUNCH.png|Screenshot of the options to launch a graphical desktop through OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
Once launched, you'll be able to connect to the desktop through VNC from the &amp;quot;My Interactive Sessions&amp;quot; tab in OpenOnDemand.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD INTERACTIVE APPS DESKTOP.png|Screenshot of VNC options for Desktop in OpenOnDemand]]&lt;br /&gt;
&lt;br /&gt;
Once you've launched the Beocat Desktop, you can interact with it like a normal desktop through the browser.&lt;br /&gt;
&lt;br /&gt;
[[File:OOD DESKTOP VNC.png|Screenshot of VNC Beocat Desktop]]&lt;br /&gt;
&lt;br /&gt;
=== Shell Access ===&lt;br /&gt;
Some things, no matter how hard we try, are easier to do via the command line. OpenOnDemand also gives you a way to handle those cases via the Clusters dropdown.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Clusters Dropdown.png|A screenshot showing the clusters dropdown from the OpenOnDemand dashboard]]&lt;br /&gt;
&lt;br /&gt;
You can choose an individual headnode if need be, or you can choose &amp;quot;Beocat Shell Access&amp;quot; to be given one of the headnodes at random. Once connected, you should have a familiar command-line experience.&lt;br /&gt;
&lt;br /&gt;
[[File:Ood Clusters Launch.png|A Screenshot showing shell access through OpenOnDemand]]&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=934</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=934"/>
		<updated>2023-07-12T14:21:58Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* How do I connect to Beocat */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo Enabled, you will be asked to approve ''each'' connection through Duo's push system to your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device is not currently able to be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You would need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value can be a currently valid passcode from Duo, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will send the prompt to your smart device; &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have Duo call your phone number for approval.&lt;br /&gt;
&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
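For OpenSSH users who connect often, the same effect can be baked into the client configuration instead of the command line. A minimal sketch of a ~/.ssh/config entry (the "beocat" host alias is illustrative):

```text
Host beocat
    HostName headnode.beocat.ksu.edu
    SendEnv DUO_PASSCODE
```

With DUO_PASSCODE=push exported in your shell, a plain "ssh beocat" will then trigger the push automatically.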
&lt;br /&gt;
In PuTTY, to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under Environment variables, enter '''&amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;''' beside ''Variable'' and '''&amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;''' beside ''Value''. Click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; and save for PuTTY to remember this change.&lt;br /&gt;
&lt;br /&gt;
There doesn't seem to be a way to send an environment variable in MobaXterm, so you won't be able to set DUO_PASSCODE to an actual valid temporary key. To get MobaXterm to push automatically, you can edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Common issues ====&lt;br /&gt;
; MobaXterm has excessive prompts for managing files.&lt;br /&gt;
: MobaXterm has a sidebar browser for managing your files. Unfortunately, that sidebar browser initiates another SSH connection for every file transfer, which triggers a Duo push that you need to approve. MobaXterm's dedicated SFTP Session doesn't have this issue: it initiates a connection, keeps it open, and re-uses it as needed, so you will have far fewer Duo approvals to respond to. If you choose to use the dedicated SFTP Session, you might consider disabling the sidebar file browser: &amp;quot;Advanced SSH settings&amp;quot; -&amp;gt; &amp;quot;SSH-browser type&amp;quot; -&amp;gt; &amp;quot;None&amp;quot;&lt;br /&gt;
; Duo Pushes sometimes don't show up in a timely manner. &lt;br /&gt;
: If you open the Duo MFA application on your smart device when you're expecting an authentication challenge, the prompts seem to show up faster.&lt;br /&gt;
; WinSCP has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, WinSCP will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable [https://winscp.net/eng/docs/ui_pref_resume reconnections during idle periods] if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, FileZilla will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable timeouts and/or connection retries under the &amp;lt;tt&amp;gt;Edit -&amp;gt; Settings -&amp;gt; Connection&amp;lt;/tt&amp;gt; menu if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Do Beocat jobs have a maximum Time Limit ==&lt;br /&gt;
Yes, there is a time limit: the scheduler will reject jobs longer than 28 days. The other side of that is that we reserve the right to a maintenance period every 14 days. Unless it is an emergency, we will give at least 2 weeks' notice before these maintenance periods actually occur. Jobs of 14 days or less that have started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
With that being said, there is no guarantee that any physical piece of hardware and the software that runs on it will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we always recommend that you write your jobs so that they can be resumed if they get interrupted.&lt;br /&gt;
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes and /scratch || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk and /scratch || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /scratch || Shared || 3.1PB shared with /bulk and /homes || cephfs || Fast shared tmp space; files not used in 30 days are automatically culled&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || nfs on top of ZFS || Faster than /scratch, built with all NVME disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || XFS || Good for I/O intensive jobs. Unique per job; culled when the job finishes.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory&lt;br /&gt;
is your home directory, and it will be fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
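Off the cluster, you can dry-run such a script by emulating the variables Slurm would normally provide (a sketch; the values here are stand-ins for what the scheduler actually sets):

```shell
# Emulate the Slurm-provided variables for a local dry run.
export TMPDIR=$(mktemp -d)
export SLURM_JOB_ID=dryrun

mkdir -p ~/experiments
echo "sample input" > ~/experiments/input.data

# Same copy-in / copy-out pattern as the job script above.
cp ~/experiments/input.data "$TMPDIR"
cp "$TMPDIR/input.data" ~/experiments/results.$SLURM_JOB_ID

# A real job's $TMPDIR is cleaned up for you; here we do it ourselves.
rm -rf "$TMPDIR"
```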
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled to these &amp;quot;owned&amp;quot; machines by users outside of the true group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of said machine comes along and submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it makes use of a checkpointing algorithm it may complete even faster. The trade-off with marking a job &amp;quot;killable&amp;quot; is that some applications need a significant amount of runtime and cannot resume from partial output, meaning the job may get restarted over and over, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes that the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking the job killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state to restart a previous session) itself, marking it killable could cause it to take longer to complete.&lt;br /&gt;
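Putting it together, a sketch of a job script that opts out (the time limit and script body here are illustrative):

```shell
#!/bin/bash
#SBATCH --time=1-00:00:00   # short enough that it would be auto-marked killable...
#SBATCH --gres=killable:0   # ...so we opt out explicitly
# Long-running work that cannot resume from partial output would go here.
echo "job started on $(hostname)"
```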
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; when that happens the job fails and we have to clean it up manually, so we enforce this rule. The warning message is there just to inform you that the job script should have a line in it, in most cases #!/bin/tcsh or #!/bin/bash, to indicate what program should be used to run the script. When the line is missing from a script, your default shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says you need a #!/bin/bash or similar line in your job script. In this case, while the line exists, it isn't naming an executable file, so the script will not be able to run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
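A quick way to sanity-check a script's #! line before submitting (a sketch; the script name is illustrative):

```shell
# Create a demonstration script with a valid #! line.
printf '#!/bin/bash\necho ok\n' > myjob.sh

# The interpreter named on the first line should be an executable file.
interp=$(head -1 myjob.sh | sed 's/^#!//' | awk '{print $1}')
test -x "$interp" && echo "valid interpreter: $interp"
```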
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SlurmBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
&lt;br /&gt;
== Help my error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to manually intervene. This process prevents other users from using the node(s) you are currently using, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would like to not see this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Help! when I use sbatch I get an error about line breaks ==&lt;br /&gt;
Beocat is a Linux system. Operating systems use certain patterns of characters to indicate line breaks in their files. Linux and operating systems like it use '\n' as their line break character; Windows uses '\r\n'.&lt;br /&gt;
&lt;br /&gt;
If you're getting an error that looks like this:&lt;br /&gt;
 sbatch: error: Batch script contains DOS line breaks (\r\n)&lt;br /&gt;
 sbatch: error: instead of expected UNIX line breaks (\n).&lt;br /&gt;
&lt;br /&gt;
It means that your script is using Windows line endings. You can convert it with the &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; command&lt;br /&gt;
 dos2unix myscript.sh&lt;br /&gt;
&lt;br /&gt;
It would probably be beneficial to configure your editor to save files with UNIX line breaks in the future.&lt;br /&gt;
* Visual Studio Code -- “Text Editor” &amp;gt; “Files” &amp;gt; “Eol”&lt;br /&gt;
* Notepad++ -- &amp;quot;Edit&amp;quot; &amp;gt; &amp;quot;EOL Conversion&amp;quot;&lt;br /&gt;
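If dos2unix isn't available (for instance, on your local machine), stripping the carriage returns with tr achieves the same conversion (file names are illustrative):

```shell
# Make a sample script with DOS (\r\n) line endings.
printf '#!/bin/bash\r\necho hello\r\n' > myscript_dos.sh

# Delete the carriage returns; the result uses UNIX (\n) endings.
tr -d '\r' < myscript_dos.sh > myscript.sh
```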
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed using our [[Group Management]] application.&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in that directory. If you also want them to be able to write or modify files in the directory, change the ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'.&lt;br /&gt;
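Putting the steps together as one sketch ('myproject' and the directory name are illustrative; setfacl requires that the group exists and that the filesystem supports ACLs):

```shell
group_name=myproject          # replace with your project's lower-case name
directory=$HOME/shared_data   # replace with your chosen directory

mkdir -p "$directory"

# Default ACL: applies to files and directories created later.
setfacl -d -m g:$group_name:rX -R "$directory" || echo "setfacl failed (missing group or no ACL support?)"
# ACL for anything already present.
setfacl -m g:$group_name:rX -R "$directory" || echo "setfacl failed (missing group or no ACL support?)"

# Verify the resulting ACLs.
getfacl "$directory" || true
```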
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to call. For example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=933</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=933"/>
		<updated>2023-06-15T01:18:54Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* How Do I Use Beocat? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and their collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of CentOS Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different than running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering fine-tuning. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
==== Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] ====&lt;br /&gt;
==== Read about  [[Installed software]] and languages ====&lt;br /&gt;
==== Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] ====&lt;br /&gt;
==== Run Interactive Jobs! [[OpenOnDemand]] ====&lt;br /&gt;
==== [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] ====&lt;br /&gt;
&lt;br /&gt;
==== Big Data course on Beocat! [[BigDataOnBeocat]] ====&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which will use the information in that job script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any details pertinent to your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [https://libera.chat/guides/connect Libera chat servers] in the channel #beocat. This is useful ''especially'' if you have a quick question, as you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' in your message, and it should grab our attention. [https://kiwiirc.com/nextclient/irc.libera.chat/#beocat Available from a web browser here.]&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session as mentioned in our calendar down below. Alternatively, we can often schedule a time to meet with you individually. You just need to send us an e-mail and provide us with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H4&amp;gt;&lt;br /&gt;
Again, when you email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu] please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/H4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
{{#widget:Twitter timeline|id=KSUBeocat|count=2}}&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributions to Beocat will prioritize your access to Beocat. In general, users contribute nodes to Beocat (aka the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority for the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will automatically halt and reschedule those jobs immediately to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources can allow users to run jobs if they are not able to get enough&lt;br /&gt;
access on Beocat, but they are especially useful for when we don't have the needed&lt;br /&gt;
resources on Beocat like access to 4 TB nodes on Bridges2, or more 64-bit&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[ACCESS|here]] to see what resources &lt;br /&gt;
we have access to and to get access to some directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems that runs outside OSG jobs when our users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]]&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]]&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=932</id>
		<title>AdvancedSlurm</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=932"/>
		<updated>2023-06-15T00:29:25Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* CUDA */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SlurmBasics]] page, we have a couple other requestable resources:&lt;br /&gt;
 Valid gres options are:&lt;br /&gt;
 gpu[[:type]:count]&lt;br /&gt;
 fabric[[:type]:count]&lt;br /&gt;
Generally, if you don't know if you need a particular resource, you should use the default. These can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;srun --gres=help&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Fabric ===&lt;br /&gt;
We currently offer 3 &amp;quot;fabrics&amp;quot; as request-able resources in Slurm. The &amp;quot;count&amp;quot; specified is the line-rate (in Gigabits-per-second) of the connection on the node.&lt;br /&gt;
==== Infiniband ====&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. InfiniBand does absolutely no good for a job running on a single machine. InfiniBand is a high-speed host-to-host communication fabric. It is (most often) used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could have run just fine, except that the submitter requested InfiniBand, and all the nodes with InfiniBand were busy. In fact, some of our fastest nodes do not have InfiniBand, so by requesting it when you don't need it, you are actually slowing down your job. To request InfiniBand, add &amp;lt;tt&amp;gt;--gres=fabric:ib:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
==== ROCE ====&lt;br /&gt;
ROCE, like InfiniBand, is a high-speed host-to-host communication layer, again used most often with MPI. Most of our nodes are ROCE enabled, but this will let you guarantee that the nodes allocated to your job will be able to communicate with ROCE. To request ROCE, add &amp;lt;tt&amp;gt;--gres=fabric:roce:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
&lt;br /&gt;
==== Ethernet ====&lt;br /&gt;
Ethernet is another communication fabric. All of our nodes are connected by ethernet; this option simply lets you specify the interconnect speed. Speeds are selected in units of Gbps, with all nodes supporting 1Gbps or above. The currently available speeds for ethernet are: &amp;lt;tt&amp;gt;1, 10, 40, and 100&amp;lt;/tt&amp;gt;. To select nodes with 40Gbps and above, you could specify &amp;lt;tt&amp;gt;--gres=fabric:eth:40&amp;lt;/tt&amp;gt; on your sbatch command-line.  Since ethernet is used to connect to the file server, this can be used to select nodes that have fast access for applications doing heavy IO.  The Dwarves and Heroes have 40 Gbps ethernet and we measure single-stream performance as high as 20 Gbps, but if your application&lt;br /&gt;
requires heavy IO then you'd want to avoid the Moles, which are connected to the file server with only 1 Gbps ethernet.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. 'kstat -g' will show you the GPU nodes and the jobs running on them.  To request a GPU node, add &amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt;, for example, to request 1 GPU for your job; if your job uses multiple nodes, the number of GPUs requested is per-node.  You can also request a given type of GPU (use 'kstat -g -l' to show the types) by using &amp;lt;tt&amp;gt;--gres=gpu:geforce_gtx_1080_ti:1&amp;lt;/tt&amp;gt; for a 1080Ti GPU on the Wizards or Dwarves, or &amp;lt;tt&amp;gt;--gres=gpu:quadro_gp100:1&amp;lt;/tt&amp;gt; for the P100 GPUs on Wizard20-21, which are best for 64-bit codes like Vasp.  Most of these GPU nodes are owned by various groups.  If you want access to GPU nodes and your group does not own any, we can add you to the &amp;lt;tt&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/tt&amp;gt; group that has priority on Dwarf36-39.  For more information on compiling CUDA code click on this [[CUDA]] link.&lt;br /&gt;
&lt;br /&gt;
A listing of the current types of gpus can be gathered with this command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
scontrol show nodes | grep CfgTRES | tr ',' '\n' | awk -F '[:=]' '/gres\/gpu:/ { print $2 }' | sort -u&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
At the time of this writing, that command produces this list:&lt;br /&gt;
* geforce_gtx_1080_ti&lt;br /&gt;
* geforce_rtx_2080_ti&lt;br /&gt;
* geforce_rtx_3090&lt;br /&gt;
* quadro_gp100&lt;br /&gt;
* rtx_a4000&lt;br /&gt;
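&lt;br /&gt;
As a minimal sketch (the job name, time limit, and script body are placeholders), a submit script asking for one RTX 3090 could start like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH -J gpu-example&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --gres=gpu:geforce_rtx_3090:1&lt;br /&gt;
&lt;br /&gt;
# Slurm sets CUDA_VISIBLE_DEVICES to the GPU(s) allocated to this job&lt;br /&gt;
echo $CUDA_VISIBLE_DEVICES&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;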
&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
''Intra''node jobs run on many cores in the same node. These jobs can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or any programming language that has the concept of ''threads''. Often, your program will need to know how many cores you want it to use, and many will use all available cores if not told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the sbatch directives '&amp;lt;tt&amp;gt;--nodes=1 --cpus-per-task=n&amp;lt;/tt&amp;gt;' or '&amp;lt;tt&amp;gt;--nodes=1 --ntasks-per-node=n&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $SLURM_CPUS_ON_NODE to tell how many cores you've been allocated.&lt;br /&gt;
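&lt;br /&gt;
As a minimal sketch (the program path is a placeholder), an intranode job script can pass the allocated core count to an OpenMP program like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=1 --cpus-per-task=8&lt;br /&gt;
&lt;br /&gt;
# Tell OpenMP programs to use exactly the cores we were allocated&lt;br /&gt;
export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE&lt;br /&gt;
$HOME/path/my_threaded_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;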
&lt;br /&gt;
=== Internode (MPI) jobs ===&lt;br /&gt;
''Inter''node jobs can utilize many cores on one or more nodes. Communicating between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI, but MPI also allows an application to run on multiple cores within a node. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;--cpus-per-task=''n''&amp;lt;/tt&amp;gt;', you would use '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks-per-node=''m''&amp;lt;/tt&amp;gt;' ''or'' '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks=''o''&amp;lt;/tt&amp;gt;' for your sbatch request, where ''n'' is the number of nodes you want, ''m'' is the number of cores per node you need, and ''o'' is the total number of cores you need.&lt;br /&gt;
&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=6 --ntasks-per-node=4&amp;lt;/tt&amp;gt; will give you 4 cores on each of 6 nodes for a total of 24 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=40&amp;lt;/tt&amp;gt; will give you 40 cores spread across any number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=10 --ntasks=100&amp;lt;/tt&amp;gt; will give you a total of 100 cores across 10 nodes.&lt;br /&gt;
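&lt;br /&gt;
Putting the first example above into a hedged sketch of a submit script (the program path is a placeholder):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --nodes=6 --ntasks-per-node=4&lt;br /&gt;
&lt;br /&gt;
# mpirun picks up the allocation from Slurm and launches all 24 ranks&lt;br /&gt;
mpirun $HOME/path/MpiProgram&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;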
&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
Memory requests are easiest when they are specified '''per core'''. For instance, if you specified '&amp;lt;tt&amp;gt;--ntasks=20 --mem-per-cpu=20G&amp;lt;/tt&amp;gt;', your job would have access to 400GB of memory in total.&lt;br /&gt;
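As a sketch, the same total can be spread across nodes, since Slurm multiplies the per-core request by the cores allocated on each node:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# 4 nodes x 5 tasks x 20G per core = 400GB across the whole job&lt;br /&gt;
#SBATCH --nodes=4 --ntasks-per-node=5&lt;br /&gt;
#SBATCH --mem-per-cpu=20G&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;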
== Other Handy Slurm Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs, aside from resource requests, is having Slurm email you when a job changes its status. This may require two directives to sbatch:  &amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt;.&lt;br /&gt;
==== --mail-type ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; is used to tell Slurm to notify you about certain conditions. Options are comma separated and include the following&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Option!!Explanation&lt;br /&gt;
|-&lt;br /&gt;
| NONE || This disables event-based mail&lt;br /&gt;
|-&lt;br /&gt;
| BEGIN || Sends a notification when the job begins&lt;br /&gt;
|-&lt;br /&gt;
| END || Sends a notification when the job ends&lt;br /&gt;
|-&lt;br /&gt;
| FAIL || Sends a notification when the job fails.&lt;br /&gt;
|-&lt;br /&gt;
| REQUEUE || Sends a notification if the job is put back into the queue from a running state&lt;br /&gt;
|-&lt;br /&gt;
| STAGE_OUT || Burst buffer stage out and teardown completed&lt;br /&gt;
|-&lt;br /&gt;
| ALL || Equivalent to BEGIN,END,FAIL,REQUEUE,STAGE_OUT&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT || Notifies if the job ran out of time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_90 || Notifies when the job has used 90% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_80 || Notifies when the job has used 80% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_50 || Notifies when the job has used 50% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| ARRAY_TASKS || Modifies the BEGIN, END, and FAIL options to apply to each array task (instead of notifying for the entire job)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== --mail-user ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; is optional. It is only needed if you intend to send these job status updates to a different e-mail address than the one you provided on the [https://account.beocat.ksu.edu/user Account Request Page]. It is specified with the following argument to sbatch: &amp;lt;tt&amp;gt;--mail-user=someone@somecompany.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
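&lt;br /&gt;
For instance, to be emailed (at a hypothetical address) when a job finishes, fails, or nears its time limit, a submit script could include:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#SBATCH --mail-type=END,FAIL,TIME_LIMIT_90&lt;br /&gt;
#SBATCH --mail-user=someone@somecompany.com&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;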
&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-J ''JobName''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
&lt;br /&gt;
=== Separating Output Streams ===&lt;br /&gt;
Normally, Slurm will create one output file, containing both STDERR and STDOUT. If you want both of these to be separated into two files, you can use the sbatch directives '&amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt;' and '&amp;lt;tt&amp;gt;--error&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! option !! default !! example&lt;br /&gt;
|-&lt;br /&gt;
| --output || slurm-%j.out || slurm-206.out&lt;br /&gt;
|-&lt;br /&gt;
| --error || slurm-%j.out || slurm-206.out&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;tt&amp;gt;%j&amp;lt;/tt&amp;gt; above indicates that it should be replaced with the job id.&lt;br /&gt;
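&lt;br /&gt;
For example, to write the two streams to separate, job-numbered files (the name 'MyJob' is a placeholder):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#SBATCH --output=MyJob-%j.out&lt;br /&gt;
#SBATCH --error=MyJob-%j.err&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;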
&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, Slurm runs your job from the directory you were in when you submitted it, so programs that assume they start in the submission directory work as expected. If you need your job to start in a different directory, use the '&amp;lt;tt&amp;gt;--chdir=''directory''&amp;lt;/tt&amp;gt;' directive to set the working directory.&lt;br /&gt;
=== Running in a specific class of machine ===&lt;br /&gt;
If you want to run on a specific class of machines, e.g., the Dwarves, you can add the flag &amp;quot;--constraint=dwarves&amp;quot; to select any of those machines.&lt;br /&gt;
&lt;br /&gt;
=== Processor Constraints ===&lt;br /&gt;
Because Beocat is a heterogeneous cluster (we have machines from many years in the cluster), not all of our processors support every new and fancy feature. You might have some applications that require newer processor features, so we provide a mechanism to request those.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--constraint&amp;lt;/tt&amp;gt; tells the cluster to apply constraints to the types of nodes that the job can run on. For instance, we know of several applications that must be run on chips that have &amp;quot;AVX&amp;quot; processor extensions. To do that, you would specify &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; on your ''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt;'' '''or''' ''&amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;'' command lines.&lt;br /&gt;
Using &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; will prohibit your job from running on the Mages while &amp;lt;tt&amp;gt;--constraint=avx2&amp;lt;/tt&amp;gt; will eliminate the Elves as well as the Mages.&lt;br /&gt;
&lt;br /&gt;
=== Slurm Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to setup your scripts correctly. Here is a listing of environment variables that Slurm makes available to you. Of course the value of these variables will be different based on many different factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
CUDA_VISIBLE_DEVICES=NoDevFiles&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
GPU_DEVICE_ORDINAL=NoDevFiles&lt;br /&gt;
HOSTNAME=dwarf37&lt;br /&gt;
SLURM_CHECKPOINT_IMAGE_DIR=/var/slurm/checkpoint&lt;br /&gt;
SLURM_CLUSTER_NAME=beocat&lt;br /&gt;
SLURM_CPUS_ON_NODE=1&lt;br /&gt;
SLURM_DISTRIBUTION=cyclic&lt;br /&gt;
SLURMD_NODENAME=dwarf37&lt;br /&gt;
SLURM_GTIDS=0&lt;br /&gt;
SLURM_JOB_CPUS_PER_NODE=1&lt;br /&gt;
SLURM_JOB_GID=163587&lt;br /&gt;
SLURM_JOB_ID=202&lt;br /&gt;
SLURM_JOBID=202&lt;br /&gt;
SLURM_JOB_NAME=slurm_simple.sh&lt;br /&gt;
SLURM_JOB_NODELIST=dwarf37&lt;br /&gt;
SLURM_JOB_NUM_NODES=1&lt;br /&gt;
SLURM_JOB_PARTITION=batch.q,killable.q&lt;br /&gt;
SLURM_JOB_QOS=normal&lt;br /&gt;
SLURM_JOB_UID=163587&lt;br /&gt;
SLURM_JOB_USER=mozes&lt;br /&gt;
SLURM_LAUNCH_NODE_IPADDR=10.5.16.37&lt;br /&gt;
SLURM_LOCALID=0&lt;br /&gt;
SLURM_MEM_PER_NODE=1024&lt;br /&gt;
SLURM_NNODES=1&lt;br /&gt;
SLURM_NODEID=0&lt;br /&gt;
SLURM_NODELIST=dwarf37&lt;br /&gt;
SLURM_NPROCS=1&lt;br /&gt;
SLURM_NTASKS=1&lt;br /&gt;
SLURM_PRIO_PROCESS=0&lt;br /&gt;
SLURM_PROCID=0&lt;br /&gt;
SLURM_SRUN_COMM_HOST=10.5.16.37&lt;br /&gt;
SLURM_SRUN_COMM_PORT=37975&lt;br /&gt;
SLURM_STEP_ID=0&lt;br /&gt;
SLURM_STEPID=0&lt;br /&gt;
SLURM_STEP_LAUNCHER_PORT=37975&lt;br /&gt;
SLURM_STEP_NODELIST=dwarf37&lt;br /&gt;
SLURM_STEP_NUM_NODES=1&lt;br /&gt;
SLURM_STEP_NUM_TASKS=1&lt;br /&gt;
SLURM_STEP_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_SUBMIT_DIR=/homes/mozes&lt;br /&gt;
SLURM_SUBMIT_HOST=dwarf37&lt;br /&gt;
SLURM_TASK_PID=23408&lt;br /&gt;
SLURM_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_TOPOLOGY_ADDR=due1121-prod-core-40g-a1,due1121-prod-core-40g-c1.due1121-prod-sw-100g-a9.dwarf37&lt;br /&gt;
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node&lt;br /&gt;
SLURM_UMASK=0022&lt;br /&gt;
SRUN_DEBUG=3&lt;br /&gt;
TERM=screen-256color&lt;br /&gt;
TMPDIR=/tmp&lt;br /&gt;
USER=mozes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is nice to know which hosts you have access to during a job; check SLURM_JOB_NODELIST for that. There are many other useful environment variables here, and I will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly-used variables we see used are $SLURM_CPUS_ON_NODE, $HOSTNAME, and $SLURM_JOB_ID.&lt;br /&gt;
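&lt;br /&gt;
As a small sketch of using these inside a job script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Record where and on what the job ran, which helps when debugging later&lt;br /&gt;
echo &amp;quot;Job $SLURM_JOB_ID on $SLURM_JOB_NODELIST with $SLURM_CPUS_ON_NODE core(s) on this node&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;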
&lt;br /&gt;
== Running from a sbatch Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'sbatch --time=10:00 --mem-per-cpu=2G --cpus-per-task=8 -J MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which outlines all of these for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample sbatch script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of the line is ignored. However, in&lt;br /&gt;
## the case of sbatch, lines beginning with #SBATCH are commands for sbatch&lt;br /&gt;
## itself, so I have taken the convention here of starting *every* line with a&lt;br /&gt;
## '#'. Just delete the first one if you want to use that line, and then modify&lt;br /&gt;
## it to your own purposes. The only exception here is the first line, which&lt;br /&gt;
## *must* be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## There is one strict rule for guaranteeing Slurm reads all of your options:&lt;br /&gt;
## Do not put *any* lines above your resource requests that aren't either:&lt;br /&gt;
##    1) blank. (no other characters)&lt;br /&gt;
##    2) comments (lines must begin with '#')&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##SBATCH --mem-per-cpu=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime in DD-HH:MM:SS form. Default is 1 hour (1:00:00)&lt;br /&gt;
##SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it.&lt;br /&gt;
##SBATCH --gres=fabric:ib:1&lt;br /&gt;
&lt;br /&gt;
## GPU directive. If you don't know what this is, you probably don't need it&lt;br /&gt;
##SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
## number of cores/nodes:&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from emailing us at beocat@cs.ksu.edu to see how we can assist in&lt;br /&gt;
## getting your job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
##SBATCH --cpus-per-task=1&lt;br /&gt;
##SBATCH --cpus-per-task=12&lt;br /&gt;
##SBATCH --nodes=2 --tasks-per-node=1&lt;br /&gt;
##SBATCH --tasks=20&lt;br /&gt;
&lt;br /&gt;
## Constraints for this job. Maybe you need to run on the elves&lt;br /&gt;
##SBATCH --constraint=elves&lt;br /&gt;
## or perhaps you just need avx processor extensions&lt;br /&gt;
##SBATCH --constraint=avx&lt;br /&gt;
&lt;br /&gt;
## Output file name. Default is slurm-%j.out where %j is the job id.&lt;br /&gt;
##SBATCH --output=MyJobTitle.o%j&lt;br /&gt;
&lt;br /&gt;
## Split the errors into a separate file. Default is the same as output&lt;br /&gt;
##SBATCH --error=MyJobTitle.e%j&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##SBATCH -J MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## Send email when certain criteria are met.&lt;br /&gt;
## Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to&lt;br /&gt;
## BEGIN, END, FAIL, REQUEUE,  and  STAGE_OUT),  STAGE_OUT  (burst buffer stage&lt;br /&gt;
## out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent&lt;br /&gt;
## of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),&lt;br /&gt;
## TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send&lt;br /&gt;
## emails for each array task). Multiple type values may be specified in a&lt;br /&gt;
## comma separated list. Unless the  ARRAY_TASKS  option  is specified, mail&lt;br /&gt;
## notifications on job BEGIN, END and FAIL apply to a job array as a whole&lt;br /&gt;
## rather than generating individual email messages for each task in the job&lt;br /&gt;
## array.&lt;br /&gt;
##SBATCH --mail-type=ALL&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
## Default is to send the mail to the e-mail address entered on the account&lt;br /&gt;
## request form.&lt;br /&gt;
##SBATCH --mail-user myemail@ksu.edu&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== File Access ==&lt;br /&gt;
Beocat has a variety of options for storing and accessing your files.  &lt;br /&gt;
Every user has a home directory for general use, which is limited in size but has decent file access performance.  Those needing more storage may purchase /bulk subdirectories, which have the same decent performance&lt;br /&gt;
but are not backed up.  The /scratch file system provides temporary space for intermediary files that are needed by multiple jobs, or for files that are larger than your home directory can hold. The /fastscratch file system is hosted on ZFS with many NVMe drives, providing much faster&lt;br /&gt;
temporary file access.  When fast IO is critical to application performance, /fastscratch, the local disk on each node, or a&lt;br /&gt;
RAM disk are the best options.&lt;br /&gt;
&lt;br /&gt;
===Home directory===&lt;br /&gt;
&lt;br /&gt;
Every user has a &amp;lt;tt&amp;gt;/homes/''username''&amp;lt;/tt&amp;gt; directory that they drop into when they log into Beocat.  &lt;br /&gt;
The home directory is for general use and provides decent performance for most file IO.  &lt;br /&gt;
Disk space in each home directory is limited to 1 TB, so larger files should be kept in a purchased /bulk&lt;br /&gt;
directory, and there is a limit of 100,000 files in each subdirectory in your account.&lt;br /&gt;
This file system is fully redundant, so 3 specific hard disks would need to fail before any data was lost.&lt;br /&gt;
All files will soon be backed up nightly to a separate file server in Nichols Hall, so if you do accidentally &lt;br /&gt;
delete something it can be recovered.&lt;br /&gt;
&lt;br /&gt;
===Bulk directory===&lt;br /&gt;
&lt;br /&gt;
Bulk data storage may be provided at a cost of $45/TB/year billed monthly. Due to the cost, directories will be provided when we are contacted and provided with payment information.&lt;br /&gt;
&lt;br /&gt;
===Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /scratch file system may be faster than /bulk or /homes since each file written will access fewer disks.&lt;br /&gt;
In order to use scratch, you first need to make a directory for yourself.  &lt;br /&gt;
Scratch is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the scratch file system may get purged after 30 days.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /scratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Fast Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /fastscratch file system is faster than /bulk or /homes or /scratch.&lt;br /&gt;
In order to use fastscratch, you first need to make a directory for yourself.  &lt;br /&gt;
Fast Scratch is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the fastscratch file system may get purged after 30 days.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /fastscratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Local disk===&lt;br /&gt;
&lt;br /&gt;
If you are running on a single node, it may also be faster to access your files from the local disk&lt;br /&gt;
on that node.  Each job gets a subdirectory /tmp/job#, where '#' is the job ID number, on the&lt;br /&gt;
local disk of each node the job uses.  This can be accessed simply by writing to /tmp rather than&lt;br /&gt;
needing to use /tmp/job#.  &lt;br /&gt;
&lt;br /&gt;
You may need to copy files to&lt;br /&gt;
local disk at the start of your script, or set the output directory for your application to point&lt;br /&gt;
to the local disk.  You'll then need to copy any files you want off the local disk before&lt;br /&gt;
the job finishes, since Slurm will remove all files in your job's directory on /tmp on completion&lt;br /&gt;
of the job or when it aborts.  When we get the scratch file system working with Lustre, it may&lt;br /&gt;
end up being faster than accessing local disk, so we will post the access rates for each.  Use 'kstat -l -h'&lt;br /&gt;
to see how much /tmp space is available on each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files to the tmp directory if needed&lt;br /&gt;
cp $input_files /tmp&lt;br /&gt;
&lt;br /&gt;
# Make an 'out' directory to pass to the app if needed&lt;br /&gt;
mkdir /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Example of running an app and passing the tmp directory in/out&lt;br /&gt;
app -input_directory /tmp -output_directory /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Copy the 'out' directory back to the current working directory after the run&lt;br /&gt;
cp -rp /tmp/out .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAM disk===&lt;br /&gt;
&lt;br /&gt;
If you need ultrafast access to files, you can use a RAM disk which is a file system set up in the &lt;br /&gt;
memory of the compute node you are running on.  The RAM disk is limited to the requested memory on that node, so you should account for this usage when you request &lt;br /&gt;
memory for your job. Below is an example of how to use the RAM disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files over if necessary&lt;br /&gt;
cp $any_input_files /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Run the application, possibly giving it the path to the RAM disk to use for output files&lt;br /&gt;
app -output_directory /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Copy files from the RAM disk to the current working directory, then clean up&lt;br /&gt;
cp /dev/shm/* .&lt;br /&gt;
rm -f /dev/shm/*&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When you leave KSU===&lt;br /&gt;
&lt;br /&gt;
If you are done with your account and leaving KSU, please clean up your directory, move any files&lt;br /&gt;
to your supervisor's account that need to be kept after you leave, and notify us so that we can disable your&lt;br /&gt;
account.  The easiest way to move your files to your supervisor's account is for them to set up&lt;br /&gt;
a subdirectory for you with the appropriate write permissions.  The example below shows moving &lt;br /&gt;
just a user's 'data' subdirectory to their supervisor.  The 'nohup' command is used so that the move will &lt;br /&gt;
continue even if the window you are doing the move from gets disconnected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Supervisor:&lt;br /&gt;
mkdir /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&lt;br /&gt;
# Student:&lt;br /&gt;
nohup mv /homes/$USER/data /bulk/$SUPERVISOR_USERNAME/$USER &amp;amp;&lt;br /&gt;
&lt;br /&gt;
# Once the move is complete, the Supervisor should limit the permissions for the directory again by removing the student's access:&lt;br /&gt;
chown $USER: -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==File Sharing==&lt;br /&gt;
&lt;br /&gt;
This section will cover methods of sharing files with other users within Beocat and on remote systems.&lt;br /&gt;
In the past, Beocat users have been allowed to keep their&lt;br /&gt;
/homes and /bulk directories open so that any other user could&lt;br /&gt;
access files.  In order to bring Beocat into alignment with&lt;br /&gt;
State of Kansas regulations and industry norms, all users must now have their /homes, /bulk, /scratch, and /fastscratch directories&lt;br /&gt;
locked down from other users, but they can still share files and directories within their group or with individual users&lt;br /&gt;
using group and individual ACLs (Access Control Lists), which will be explained below.&lt;br /&gt;
Beocat staff will be exempted from this&lt;br /&gt;
policy as we need to work freely with all users and will manage our&lt;br /&gt;
subdirectories to minimize access.&lt;br /&gt;
&lt;br /&gt;
===Securing your home directory with the setacls script===&lt;br /&gt;
&lt;br /&gt;
If you do not wish to share files or directories with other users, you do not need to do anything&lt;br /&gt;
as rwx access to others has already been removed.&lt;br /&gt;
If you want to share files or directories, you can either use the '''setacls''' script or configure&lt;br /&gt;
the ACLs (Access Control Lists) manually.&lt;br /&gt;
&lt;br /&gt;
Running '''setacls -h''' will show how to use the script.&lt;br /&gt;
  &lt;br /&gt;
  Eos: setacls -h&lt;br /&gt;
  setacls [-r] [-w] [-g group] [-u user] -d /full/path/to/directory&lt;br /&gt;
  Execute permission will always be applied, you may also choose r or w&lt;br /&gt;
  Must specify at least one group or user&lt;br /&gt;
  Must specify at least one directory, and it must be the full path&lt;br /&gt;
  Example: setacls -r -g ksu-cis-hpc -u mozes -d /homes/daveturner/shared_dir&lt;br /&gt;
&lt;br /&gt;
You can specify the permissions to be either -r for read or -w for write or you can specify both.&lt;br /&gt;
You can provide a priority group to share with, which is the same as the group used in a --partition=&lt;br /&gt;
statement in a job submission script.  You can also specify users.&lt;br /&gt;
You can specify a file or a directory to share.  If a directory is specified then all files in that&lt;br /&gt;
directory will also be shared, and all files created in the directory later will also be shared.&lt;br /&gt;
&lt;br /&gt;
The script will set everything up for you, telling you the commands it is executing along the way,&lt;br /&gt;
then show the resulting ACLs at the end with the '''getfacl''' command.&lt;br /&gt;
&lt;br /&gt;
====Manually configuring your ACLs====&lt;br /&gt;
&lt;br /&gt;
If you want to manually configure the ACLs, you can use the directions below to do what the '''setacls'''&lt;br /&gt;
script would do for you.&lt;br /&gt;
You first need to provide the minimum execute access to your /homes&lt;br /&gt;
or /bulk directory before sharing individual subdirectories.  Setting the ACL to execute only allows those&lt;br /&gt;
in your group to reach the subdirectories you share, while leaving out read access means they cannot&lt;br /&gt;
list the other files or subdirectories in your main directory.  Keep in mind that they can still access those items if they know the names,&lt;br /&gt;
so you may want to lock them down individually.  Below is an example of how I would change my&lt;br /&gt;
/homes/daveturner directory to allow the ksu-cis-hpc group execute access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:X /homes/daveturner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your research group owns any nodes on Beocat, then you have a group name that can be used to securely share&lt;br /&gt;
files with others within your group.  Below is an example of creating a directory called 'share_hpc', &lt;br /&gt;
then providing access to my ksu-cis-hpc group&lt;br /&gt;
(my group is ksu-cis-hpc, so I submit jobs to --partition=ksu-cis-hpc.q).&lt;br /&gt;
Using -R makes these changes recursively to all files and directories in that subdirectory, while changing the defaults with the 'setfacl -d' command ensures that files and directories created&lt;br /&gt;
later will be created with these same ACLs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the 'share_hpc' directory.  If you also want&lt;br /&gt;
them to be able to write or modify files in that directory then change the ':rX' to ':rwX' instead. e.g. 'setfacl -d -m g:ksu-cis-hpc:rwX -R share_hpc'&lt;br /&gt;
&lt;br /&gt;
If you want to know what groups you belong to, use the command below.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
groups&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If your group does not own any nodes, you can still request a group name and manage the participants yourself&lt;br /&gt;
by emailing us at beocat@cs.ksu.edu.&lt;br /&gt;
If you want to share a directory with only a few people, you can manage your ACLs using individual usernames&lt;br /&gt;
instead of a group.&lt;br /&gt;
&lt;br /&gt;
You can use the '''getfacl''' command to see which groups and users have access to a given directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
getfacl share_hpc&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ACLs give you great flexibility in controlling file access at the&lt;br /&gt;
group level.  Below is a more advanced example where I set up a directory to be shared with&lt;br /&gt;
my ksu-cis-hpc group, Dan's ksu-cis-dan group, and an individual user 'mozes' who I also want&lt;br /&gt;
to have write access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
getfacl share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc_dan_mozes&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  user:mozes:rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  group:ksu-cis-dan:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:user:mozes:rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:group:ksu-cis-dan:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Openly sharing files on the web===&lt;br /&gt;
&lt;br /&gt;
If you create a 'public_html' directory in your home directory, then any files put there will be shared&lt;br /&gt;
openly on the web.  There is no way to restrict who has access to those files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
cd&lt;br /&gt;
mkdir public_html&lt;br /&gt;
# Opt-in to letting the webserver access your home directory:&lt;br /&gt;
setfacl -m g:public_html:x ~/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then access the data from a web browser using the URL:&lt;br /&gt;
&lt;br /&gt;
http://people.beocat.ksu.edu/~your_user_name&lt;br /&gt;
&lt;br /&gt;
This will show a list of the files you have in your public_html subdirectory.&lt;br /&gt;
&lt;br /&gt;
===Globus===&lt;br /&gt;
&lt;br /&gt;
We have a page here dedicated to [[Globus]]&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of Slurm's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to sbatch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  --array=n[-m[:s]]&lt;br /&gt;
     Submits a so-called Array Job, i.e. an array of identical tasks differentiated only by an index number and treated by Slurm&lt;br /&gt;
     almost like a series of separate jobs. The option argument to --array specifies the range of index numbers that will be&lt;br /&gt;
     associated with the tasks. Each index number is exported to its task via the environment variable SLURM_ARRAY_TASK_ID. The option&lt;br /&gt;
     arguments n and m are available through the environment variables SLURM_ARRAY_TASK_MIN and SLURM_ARRAY_TASK_MAX.&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or a range with a step size.&lt;br /&gt;
     Hence, the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SLURM_ARRAY_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array jobs are commonly used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks follow a slightly different naming convention (which can be controlled in the same way as mentioned above).&lt;br /&gt;
 &lt;br /&gt;
     slurm-%A_%a.out&lt;br /&gt;
&lt;br /&gt;
     %A is the SLURM_ARRAY_JOB_ID, and %a is the SLURM_ARRAY_TASK_ID&lt;br /&gt;
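As a sketch, that default pattern can also be set explicitly (or customized) with the --output option in a submit script:&lt;br /&gt;

```shell
#SBATCH --array=1-10
# %A expands to SLURM_ARRAY_JOB_ID and %a to SLURM_ARRAY_TASK_ID,
# so each task writes its own file, slurm-<jobid>_<taskid>.out
#SBATCH --output=slurm-%A_%a.out
```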
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses; one of the easiest to understand is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1, that I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to edit the RUNSIZE variable and submit the script again. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=50-200:50&lt;br /&gt;
RUNSIZE=$SLURM_ARRAY_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job once, and Slurm understands that it needs to run it 4 times (RUNSIZE = 50, 100, 150, 200), once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to generate a separate submit script for each line of my dataset and submit each job individually. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     sbatch ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, since sbatch has to verify each job as it is submitted. The same thing can be done easily with an array job, as long as you know the number of lines in the dataset. That number can be obtained with '''wc -l dataset.txt'''; in this case, let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses command substitution (the backticks), having sed print only line number $SLURM_ARRAY_TASK_ID of the file dataset.txt.&lt;br /&gt;
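The line-picking trick can be tried outside of Slurm by setting the task variable manually; the file name below is made up:&lt;br /&gt;

```shell
# Build a small stand-in dataset, one set of app2 arguments per line
printf 'alpha\nbeta\ngamma\n' > mini_dataset.txt
# Pretend Slurm assigned this task index 2
SLURM_ARRAY_TASK_ID=2
# Print only line number $SLURM_ARRAY_TASK_ID, as the array job would
sed -n "${SLURM_ARRAY_TASK_ID}p" mini_dataset.txt
```

This prints 'beta', the second line of the file.&lt;br /&gt;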
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so sbatch has far less to verify.&lt;br /&gt;
&lt;br /&gt;
To give you an idea of the time saved: submitting 1 job takes 1-2 seconds, so submitting 5000 jobs takes 5,000-10,000 seconds, or roughly 1.5-3 hours.&lt;br /&gt;
&lt;br /&gt;
== Checkpoint/Restart using DMTCP ==&lt;br /&gt;
&lt;br /&gt;
DMTCP (Distributed MultiThreaded CheckPointing) is software that checkpoints your application without modification, and&lt;br /&gt;
can be set up to automatically restart your job from the last checkpoint if, for example, the node you are running on fails.&lt;br /&gt;
This has been tested successfully&lt;br /&gt;
on Beocat for some scalar and OpenMP codes, but has failed on all MPI tests so far.  We encourage users to&lt;br /&gt;
try DMTCP if their non-MPI jobs run longer than 24 hours.  If you want to try this, please contact us first since we are still&lt;br /&gt;
experimenting with DMTCP.&lt;br /&gt;
&lt;br /&gt;
The sample job submission script below shows how dmtcp_launch is used to start the application, then dmtcp_restart is used to start from a checkpoint if the job has failed and been rescheduled.&lt;br /&gt;
If you are putting this in an array job script, then add the Slurm array task ID to the end of the checkpoint directory name,&lt;br /&gt;
like &amp;lt;B&amp;gt;ckptdir=ckpt-$SLURM_ARRAY_TASK_ID&amp;lt;/B&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --mem=50G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS/2016.4-foss-2017beocatb-hybrid&lt;br /&gt;
  module load DMTCP&lt;br /&gt;
  module list&lt;br /&gt;
  &lt;br /&gt;
  ckptdir=ckpt&lt;br /&gt;
  mkdir -p $ckptdir&lt;br /&gt;
  export DMTCP_CHECKPOINT_DIR=$ckptdir&lt;br /&gt;
  &lt;br /&gt;
  if ! ls -1 $ckptdir | grep -q dmtcp_restart_script&lt;br /&gt;
  then&lt;br /&gt;
     echo &amp;quot;Using dmtcp_launch to start the app the first time&amp;quot;&lt;br /&gt;
     dmtcp_launch --no-coordinator mpirun -np 1 -x OMP_NUM_THREADS=4 gmx_mpi mdrun -nsteps 50000 -ntomp 4 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  else&lt;br /&gt;
     echo &amp;quot;Using dmtcp_restart from $ckptdir to continue from a checkpoint&amp;quot;&lt;br /&gt;
     dmtcp_restart $ckptdir/*.dmtcp&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
You will need to run several tests to verify that DMTCP is working properly with your application.&lt;br /&gt;
First, run a short test without DMTCP and another with DMTCP with the checkpoint interval set to 5 minutes&lt;br /&gt;
by adding the line &amp;lt;B&amp;gt;export DMTCP_CHECKPOINT_INTERVAL=300&amp;lt;/B&amp;gt; to your script.  Then use &amp;lt;B&amp;gt;kstat -d 1&amp;lt;/B&amp;gt; to&lt;br /&gt;
check that the memory in both runs is close to the same.  Also use this information to calculate the time &lt;br /&gt;
that each checkpoint takes.  In most cases I've seen times less than a minute for checkpointing that will normally&lt;br /&gt;
be done once each hour.  If your application is taking more time, let us know.  Sometimes this can be sped up&lt;br /&gt;
by simply turning off compression by adding the line &amp;lt;B&amp;gt;export DMTCP_GZIP=0&amp;lt;/B&amp;gt;.  Make sure to remove the&lt;br /&gt;
line where you set the checkpoint interval to 300 seconds so that the default time of once per hour will be used.&lt;br /&gt;
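For reference, the two settings mentioned above are plain environment variables; a sketch of the relevant lines in a submit script:&lt;br /&gt;

```shell
# Checkpoint every 5 minutes -- for testing only; remove this line
# afterwards so the hourly default is used
export DMTCP_CHECKPOINT_INTERVAL=300
# Turn off checkpoint compression if checkpointing is slow
export DMTCP_GZIP=0
```

Both lines must appear before the dmtcp_launch command in your script.&lt;br /&gt;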
&lt;br /&gt;
After verifying that your code completes using DMTCP and does not take significantly more time or memory, you&lt;br /&gt;
will need to start a run then &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; it after the first checkpoint, then resubmit the same script to make &lt;br /&gt;
sure that it restarts and runs to completion.  If you are working with an array job script, the last step is to try a few&lt;br /&gt;
array tasks at once to make sure there is no conflict between the jobs.&lt;br /&gt;
&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'srun'. srun uses the exact same command-line arguments as sbatch, but you need to add the following arguments at the end: &amp;lt;tt&amp;gt;--pty bash&amp;lt;/tt&amp;gt;. If no node is available with your resource requirements, srun will tell you something like the following:&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
 srun: Force Terminated job 217&lt;br /&gt;
 srun: error: CPU count per node can not be satisfied&lt;br /&gt;
 srun: error: Unable to allocate resources: Requested node configuration is not available&lt;br /&gt;
Note that, like sbatch, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
&lt;br /&gt;
== Connecting to an existing job ==&lt;br /&gt;
You can connect to an existing job using &amp;lt;B&amp;gt;srun&amp;lt;/B&amp;gt; in the same way that the &amp;lt;B&amp;gt;MonitorNode&amp;lt;/B&amp;gt; command&lt;br /&gt;
allowed us to in the old cluster.  This is essentially like using ssh to get into the node where your job is running, which&lt;br /&gt;
can be very useful for looking at files in /tmp/job# or running &amp;lt;B&amp;gt;htop&amp;lt;/B&amp;gt; to view the&lt;br /&gt;
activity level for your job.&lt;br /&gt;
&lt;br /&gt;
 srun --jobid=# --pty bash                        where '#' is the job ID number&lt;br /&gt;
&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not support modifying job parameters once the job has been submitted. It can be done, but there are numerous catches, and the variations can be problematic; it is normally easier to simply delete the job (using '''scancel ''jobid''''') and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
&lt;br /&gt;
As it is unsupported, this is an exercise left to the reader. A starting point is &amp;lt;tt&amp;gt;man scontrol&amp;lt;/tt&amp;gt;.&lt;br /&gt;
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;--gres=killable:1&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
&lt;br /&gt;
''Note: This is a submit-time only request, it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;scancel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
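As a sketch, a submit script header marking the job killable might look like this (the resource values are made up):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --time=6:00:00
#SBATCH --mem=4G
# Allow this job onto owned nodes, accepting that it may be
# killed and re-queued to make way for the owners' jobs
#SBATCH --gres=killable:1
```

The same flag can instead be given on the command line as an extra argument to sbatch or srun.&lt;br /&gt;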
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
Some users are members of projects that have contributed to Beocat. When a group has contributed nodes, it gets access to a &amp;quot;partition&amp;quot; giving its members priority on those nodes.&lt;br /&gt;
&lt;br /&gt;
In most situations, the scheduler will automatically add those priority partitions to the jobs as submitted. You should not need to include a partition list in your job submission.&lt;br /&gt;
&lt;br /&gt;
There are currently just a few exceptions that we will not automatically add:&lt;br /&gt;
* ksu-chem-mri.q&lt;br /&gt;
* ksu-gen-gpu.q&lt;br /&gt;
* ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
If you have access to any of the non-automatic partitions, and need the resources in that partition, you can alter your &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines to include the partition:&lt;br /&gt;
 #SBATCH --partition=ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
Otherwise, you shouldn't modify the partition line at all unless you really know what you're doing.&lt;br /&gt;
&lt;br /&gt;
== Graphical Applications ==&lt;br /&gt;
Some applications are graphical and need to have some graphical input/output. We currently accomplish this with X11 forwarding or [[OpenOnDemand]].&lt;br /&gt;
=== OpenOnDemand ===&lt;br /&gt;
[[OpenOnDemand]] is likely the easier and more performant way to run a graphical application on the cluster.&lt;br /&gt;
# Visit [https://ondemand.beocat.ksu.edu/ ondemand] and log in with your cluster credentials.&lt;br /&gt;
# Check the &amp;quot;Interactive Apps&amp;quot; dropdown. We may have a workflow ready for you; if not, choose the desktop.&lt;br /&gt;
# Select the resources you need.&lt;br /&gt;
# Select launch.&lt;br /&gt;
# A job is now submitted to the cluster, and once the job has started you'll see a Connect button.&lt;br /&gt;
# Use the app as needed. If using the desktop, start your graphical application.&lt;br /&gt;
&lt;br /&gt;
=== X11 Forwarding ===&lt;br /&gt;
==== Connecting with an X11 client ====&lt;br /&gt;
===== Windows =====&lt;br /&gt;
If you are running Windows, we recommend MobaXTerm as your file/ssh manager, since it is one relatively simple tool that does everything. MobaXTerm also automatically connects with X11 forwarding enabled.&lt;br /&gt;
===== Linux/OSX =====&lt;br /&gt;
Both Linux and OSX can connect in an X11 forwarding mode. Linux will have all of the tools you need installed already; OSX will need [https://www.xquartz.org/ XQuartz] installed.&lt;br /&gt;
&lt;br /&gt;
Then you will need to change your 'ssh' command slightly:&lt;br /&gt;
&lt;br /&gt;
 ssh -Y eid@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
The '''-Y''' argument tells ssh to set up X11 forwarding.&lt;br /&gt;
==== Starting a Graphical Job ====&lt;br /&gt;
All graphical jobs, by design, must be interactive, so we'll use the srun command. On a headnode, we run the following:&lt;br /&gt;
 # load an X11 enabled application&lt;br /&gt;
 module load Octave&lt;br /&gt;
 # start an X11 job; sbatch arguments are accepted for srun as well: 1 node, 1 hour, 1 GB of memory&lt;br /&gt;
 srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 octave --gui&lt;br /&gt;
&lt;br /&gt;
Because these jobs are interactive, they may not be able to run at all times, depending on how busy the scheduler is at any point in time. '''--pty --x11''' are required arguments setting up the job, and '''octave --gui''' is the command to run inside the job.&lt;br /&gt;
&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The sacct tool will read Slurm's accounting database and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== sacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
sacct -j 1122334455 -l&lt;br /&gt;
# if you don't know the job id, you can look at your jobs started since a given date:&lt;br /&gt;
sacct -S 2017-01-01&lt;br /&gt;
# to limit the output to the columns most useful for debugging:&lt;br /&gt;
sacct -j 1122334455 --format=JobID,JobName,Elapsed,State,ExitCode,MaxRSS,ReqMem&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|218||218||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||12||00:00:00||FAILED||2:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=12,mem=1G,node=1||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.batch||218.batch||batch||||137940K||dwarf37||0||137940K||1576K||dwarf37||0||1576K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||1.36G||0||0||0||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.0||218.0||qqqqstat||||204212K||dwarf37||0||204212K||1420K||dwarf37||0||1420K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||196.52M||Unknown||Unknown||Unknown||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the columns showing Elapsed and State, you can see that they show 00:00:00 and FAILED respectively. This means that the job started and then promptly ended. This points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|220||220||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:01:27||TIMEOUT||0:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=1,mem=1G,node=1||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.batch||220.batch||batch||||370716K||dwarf37||0||370716K||7060K||dwarf37||0||7060K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:28||CANCELLED||0:15||1.23G||0||0||0||1Gn||0||0.16M||dwarf37||0||0.16M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.0||220.0||sleep||||204212K||dwarf37||0||107916K||1000K||dwarf37||0||620K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:27||CANCELLED||0:15||1.54G||Unknown||Unknown||Unknown||1Gn||0||0.05M||dwarf37||0||0.05M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we can see some pointers to the issue. The job ran out of time (TIMEOUT) and then was killed (CANCELLED).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|221||221||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:00:00||CANCELLED by 0||0:0||||Unknown||Unknown||Unknown||1Mn||||||||||||||||||||||||cpu=1,mem=1M,node=1||cpu=1,mem=1M,node=1&lt;br /&gt;
|-&lt;br /&gt;
|221.batch||221.batch||batch||||137940K||dwarf37||0||137940K||1144K||dwarf37||0||1144K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:00:01||CANCELLED||0:15||2.62G||0||0||0||1Mn||0||0||dwarf37||65534||0||0||dwarf37||65534||0||||||||cpu=1,mem=1M,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we see it was &amp;quot;CANCELLED by 0&amp;quot;, then we look at the AllocTRES column to see our allocated resources, and see that 1MB of memory was granted. Combine that with the column &amp;quot;MaxRSS&amp;quot; and we see that the memory granted was less than the memory we tried to use, thus the job was &amp;quot;CANCELLED&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=931</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=931"/>
		<updated>2023-06-07T00:31:18Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Common issues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo-enabled, you will be asked to approve ''each'' connection through Duo's push system on your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device currently cannot be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You would need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value can be a currently valid passcode from Duo, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will push the prompt to your smart device. &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have Duo call your phone number to approve.&lt;br /&gt;
&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In PuTTY, to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under Environment variables, enter '''&amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;''' beside ''Variable'' and '''&amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;''' beside ''Value''. Click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; and save so PuTTY will remember this change.&lt;br /&gt;
&lt;br /&gt;
There doesn't seem to be a way to send an environment variable in MobaXTerm, so you won't be able to set DUO_PASSCODE to an actual valid temporary key. To get MobaXterm to push automatically, you can edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Common issues ====&lt;br /&gt;
; MobaXTerm has excessive prompts for managing files.&lt;br /&gt;
: MobaXTerm has a sidebar browser for managing your files. Unfortunately, that sidebar browser initiates another SSH connection for every file transfer, which triggers a Duo push that you need to approve. MobaXTerm's dedicated SFTP Session doesn't have this issue: it initiates a connection, keeps it open, and re-uses it as needed, so you will have far fewer Duo approvals to respond to. If you choose to use the dedicated SFTP Session, you might consider disabling the sidebar file browser: &amp;quot;Advanced SSH settings&amp;quot; -&amp;gt; &amp;quot;SSH-browser type&amp;quot; -&amp;gt; &amp;quot;None&amp;quot;&lt;br /&gt;
; Duo Pushes sometimes don't show up in a timely manner. &lt;br /&gt;
: If you open the Duo MFA application on your smart device when you're expecting an authentication challenge, the prompts seem to show up faster.&lt;br /&gt;
; WinSCP has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, WinSCP will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable [https://winscp.net/eng/docs/ui_pref_resume reconnections during idle periods] if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, FileZilla will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable timeouts and/or connection retries under the &amp;lt;tt&amp;gt;Edit -&amp;gt; Settings -&amp;gt; Connection&amp;lt;/tt&amp;gt; menu if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Do Beocat jobs have a maximum Time Limit ==&lt;br /&gt;
Yes. The scheduler will reject jobs longer than 28 days. The other side of that is that we reserve the right to a maintenance period every 14 days. Unless it is an emergency, we will give at least 2 weeks' notice before these maintenance periods actually occur. Jobs of 14 days or less that have started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
With that being said, there is no guarantee that any physical piece of hardware and the software that runs on it will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we always recommend that you write your jobs so that they can be resumed if they get interrupted.&lt;br /&gt;
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
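One simple way to make a job resumable, sketched below with made-up step and file names, is to skip any step whose output file already exists, so a restarted job only redoes unfinished work:

```shell
#!/bin/bash
# resumable pattern: each step writes an output file, and a restarted
# job skips steps whose output is already present
mkdir -p results
for step in 1 2 3; do
    out="results/step${step}.out"
    if [ -f "$out" ]; then
        echo "step ${step} already done, skipping"
        continue
    fi
    # stand-in for the real computation
    echo "computed step ${step}" > "$out"
done
```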
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes and /scratch || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk and /scratch || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /scratch || Shared || 3.1PB shared with /bulk and /homes || cephfs || Fast shared tmp space; files not used in 30 days are automatically culled&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || nfs on top of ZFS || Faster than /scratch, built with all NVME disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || XFS || Good for I/O intensive jobs. Unique per job, culled when the job finishes.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory&lt;br /&gt;
is your homedir and it will be fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines are sitting idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled onto these &amp;quot;owned&amp;quot; machines by users outside of the group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of said machine comes along and submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it uses a checkpointing algorithm it may complete even faster. The trade-off in marking a job &amp;quot;killable&amp;quot; is that some applications need a significant amount of runtime and cannot resume from partial output, meaning the job may get restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there are a non-trivial amount of additional nodes that the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking the job killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state to restart a previous session) itself, it could cause your job to take longer to complete.&lt;br /&gt;
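Because #SBATCH directives are just comments to the shell, the choice can be recorded in the job script itself; the script below is a hypothetical sketch:

```shell
# a job script that opts out of the automatic "killable" marking
cat > long_job.sh <<'EOF'
#!/bin/bash
#SBATCH --time=5-00:00:00    # 5 days: too long to restart safely
#SBATCH --gres=killable:0    # keep this job off owned nodes
echo "long job running"
EOF

# a checkpointing job could instead opt in for a faster start:
#   sbatch --gres=killable:1 long_job.sh
```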
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; when this happens the job fails and we have to manually clean it up, so we enforce that rule. The warning message is there to inform you that the job script should have a line in it, in most cases #!/bin/tcsh or #!/bin/bash, to indicate what program should be used to run the script. When the line is missing from a script, your default shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this means you need a #!/bin/bash or similar line in your job script. This error says that while the line exists, the #! line isn't pointing to an executable file, so the script will not be able to run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SlurmBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
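The same request can also live in the script header as an #SBATCH line, so it applies on every submission; the script below is illustrative:

```shell
# store the time request in the job script header itself
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH --time=0-10:00:00    # request 10 hours of walltime
echo "starting work"
EOF

# then submit it without extra flags: sbatch myjob.sh
```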
&lt;br /&gt;
== Help my error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to manually intervene. This process prevents other users from using the node(s) you are currently using, so are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would like to avoid this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Help! when I use sbatch I get an error about line breaks ==&lt;br /&gt;
Beocat is a Linux system. Operating systems use certain patterns of characters to indicate line breaks in their files. Linux and operating systems like it use '\n' as their line break character. Windows uses '\r\n' for its line breaks.&lt;br /&gt;
&lt;br /&gt;
If you're getting an error that looks like this:&lt;br /&gt;
 sbatch: error: Batch script contains DOS line breaks (\r\n)&lt;br /&gt;
 sbatch: error: instead of expected UNIX line breaks (\n).&lt;br /&gt;
&lt;br /&gt;
It means that your script is using Windows line endings. You can convert it with the &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; command&lt;br /&gt;
 dos2unix myscript.sh&lt;br /&gt;
&lt;br /&gt;
It would probably be beneficial for your editor to save the files with UNIX line breaks in the future.&lt;br /&gt;
* Visual Studio Code -- “Text Editor” &amp;gt; “Files” &amp;gt; “Eol”&lt;br /&gt;
* Notepad++ -- &amp;quot;Edit&amp;quot; &amp;gt; &amp;quot;EOL Conversion&amp;quot;&lt;br /&gt;
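If dos2unix happens to be unavailable, a plain sed substitution does the same job; the demonstration below fabricates a CRLF file first (the script name is made up):

```shell
# create a script with Windows (CRLF) line endings for demonstration
printf '#!/bin/bash\r\necho hello\r\n' > myscript.sh

# strip the trailing carriage return from every line, in place
sed -i 's/\r$//' myscript.sh
```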
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty and with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed using our [[Group Management]] application.&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the shared directory. If you also want them to be able to write or modify files in that directory, then change the ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'.&lt;br /&gt;
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to call: for example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=930</id>
		<title>LinuxBasics</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=930"/>
		<updated>2023-06-05T21:03:57Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Example 2 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Disclaimer:''' This is a ''very'' large topic, and much too broad to be covered on a single support page. There are many other sites (yes, entire sites) which cover the topic in more detail. We'll link to some of them below. This page is meant to be just the essentials.&lt;br /&gt;
&lt;br /&gt;
== Logging in for the first time ==&lt;br /&gt;
To login to Beocat, you first need an &amp;quot;SSH Client&amp;quot;. [[wikipedia:Secure_Shell|SSH]] (short for &amp;quot;secure shell&amp;quot;) is a protocol that allows secure communication between two computers. We recommend the following.&lt;br /&gt;
* Windows&lt;br /&gt;
** [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY] is by far the most common SSH client, both for Beocat and in the world.&lt;br /&gt;
** [http://mobaxterm.mobatek.net/ MobaXterm] is a fairly new client with some nice features, such as being able to SCP/SFTP (see below), and running X (which isn't terribly useful on Beocat, but might be if you connect to other Linux hosts).&lt;br /&gt;
** [http://www.cygwin.com/ Cygwin] is for those that would rather be running Linux but are stuck on Windows. It's purely a text interface.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** OS-X has a built-in SSH application called &amp;quot;Terminal&amp;quot;. It's not great, but it will work for most Beocat users.&lt;br /&gt;
** [http://www.iterm2.com/#/section/home iTerm2] is the terminal application we prefer.&lt;br /&gt;
* Others&lt;br /&gt;
** There are [[wikipedia:Comparison_of_SSH_clients|many SSH clients]] for many different platforms available. While we don't have experience with many of these, any should be sufficient for access to Beocat.&lt;br /&gt;
&lt;br /&gt;
You'll need to connect your client (via the SSH protocol, if your client allows multiple protocols) to headnode.beocat.ksu.edu.&lt;br /&gt;
&lt;br /&gt;
For command-line tools, the command to connect is&lt;br /&gt;
 ssh ''username''@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
Your username is your [http://eid.ksu.edu K-State eID] name and the password is your eID password.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' When you type your password, nothing shows up on the screen, not even asterisks.&lt;br /&gt;
&lt;br /&gt;
You'll know you are successfully logged in when you see a prompt that says&lt;br /&gt;
 [''username''@''machinename'' ~]$&lt;br /&gt;
where ''machinename'' is the name of the machine you've logged into (currently either 'eos' or 'selene') and ''username'' is your eID username.&lt;br /&gt;
&lt;br /&gt;
== Transferring files (SCP or SFTP) ==&lt;br /&gt;
Usually, one of the first things people want to do is to transfer files into or out of Beocat. To do so, you need to use [[wikipedia:Secure_copy|SCP]] (secure copy) or [[wikipedia:SSH_File_Transfer_Protocol|SFTP]] (SSH FTP or Secure FTP). Again, there are multiple programs that do this.&lt;br /&gt;
* Windows&lt;br /&gt;
** Putty (see above) has PSCP and PSFTP programs (both are included if you run the installer). It is a command-line interface (CLI) rather than a graphical user interface (GUI).&lt;br /&gt;
** MobaXterm (see above) has a built-in GUI SFTP client that automatically changes the directories as you change them in your SSH session.&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] (client) has an easy-to-use GUI. Be sure to use 'SFTP' mode rather than 'FTP' mode.&lt;br /&gt;
** [http://winscp.net/eng/index.php WinSCP] is another easy-to-use GUI.&lt;br /&gt;
** Cygwin (see above) has CLI scp and sftp programs.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] is also available for OS-X.&lt;br /&gt;
** Within terminal or iTerm, you can use the 'scp' or 'sftp' programs.&lt;br /&gt;
* Linux&lt;br /&gt;
** FileZilla also has a GUI linux version, in addition to the CLI tools.&lt;br /&gt;
&lt;br /&gt;
=== Using a Command-Line Interface (CLI) ===&lt;br /&gt;
You can safely ignore this section if you're using a graphical interface (GUI). We highly recommend using a GUI when first learning how to use Beocat.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;First test case&amp;lt;/u&amp;gt;: transfer a file called myfile.txt in your current folder to your home directory on Beocat. For these examples, I use bold text to show what you type and plain text to show Beocat's response&lt;br /&gt;
&lt;br /&gt;
Using SCP:&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Note the colon at the end of the 'scp' line.&lt;br /&gt;
&lt;br /&gt;
Using SFTP&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/kylehutson/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
SFTP is interactive, so this is a two-step process. First, you connect to Beocat, then you transfer the file. As long as the system gives the &amp;lt;code&amp;gt;sftp&amp;gt; &amp;lt;/code&amp;gt; prompt, you are in the sftp program, and you will remain there until you type 'exit'.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Second test case:&amp;lt;/u&amp;gt; transfer a file called myfile.txt in your current folder to a directory named 'mydirectory' under your home directory on Beocat.&lt;br /&gt;
&lt;br /&gt;
Here we run into one of the problems with scp: there is no easy way of creating 'mydirectory' if it doesn't already exist. In that case, you must log in via SSH (as seen above) and create the directory using the 'mkdir' command (see Basic Linux Commands below).&lt;br /&gt;
&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 &lt;br /&gt;
An alternative version: if the colon is immediately followed by a slash, the directory name is taken from the root rather than your home directory. So, given that your home directory on Beocat is /homes/''username'', we could instead type&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:/homes/''username''/mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
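A possible workaround (not part of the recipe above, so treat the host and directory names as placeholders) is to create the remote directory over SSH first; mkdir -p is safe to repeat because it succeeds even when the directory already exists:

```shell
# remote one-liners (placeholders, shown as comments):
#   ssh username@headnode.beocat.ksu.edu mkdir -p mydirectory
#   scp myfile.txt username@headnode.beocat.ksu.edu:mydirectory

# mkdir -p demonstrated locally: the second call is a no-op, not an error
mkdir -p mydirectory
mkdir -p mydirectory
```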
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 sftp ''username''@headnode.beocat.ksu.edu&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''mkdir mydirectory'''&lt;br /&gt;
 [Note, if this directory already exists, you will get the response &amp;quot;Couldn't create directory: Failure&amp;quot;]&lt;br /&gt;
 sftp&amp;gt; '''cd mydirectory'''&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/''username''/mydirectory/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''quit'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Third test case:&amp;lt;/u&amp;gt; copy myfile.txt from your home directory on Beocat to your current folder.&lt;br /&gt;
&lt;br /&gt;
Using scp:&lt;br /&gt;
 scp ''username''@headnode.beocat.ksu.edu:myfile.txt .&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''get myfile.txt'''&lt;br /&gt;
 Fetching /homes/''username''/myfile.txt to myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
== Basic Linux Commands ==&lt;br /&gt;
Again, this guide is very limited, mostly limited to directory navigation and basic file commands. [http://www.ee.surrey.ac.uk/Teaching/Unix/ Here] is a pretty decent tutorial if you want to dig deeper. If you want more, entire books have been written on the subject.&lt;br /&gt;
&lt;br /&gt;
=== The Lingo ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!''Term''&lt;br /&gt;
!''Definition''&lt;br /&gt;
|-&lt;br /&gt;
|Directory&lt;br /&gt;
|A &amp;quot;Folder&amp;quot; in Windows or OS-X terms. A location where files or other directories are stored. The current directory is sometimes represented as `.` and the parent directory can be referenced as `..`&lt;br /&gt;
|-&lt;br /&gt;
|Shell&lt;br /&gt;
|The interface or environment under which you can run commands. There is a section below on shells&lt;br /&gt;
|-&lt;br /&gt;
|SSH&lt;br /&gt;
|Secure Shell. A protocol that encrypts data and can give access to another system, usually by a username and password&lt;br /&gt;
|-&lt;br /&gt;
|SCP&lt;br /&gt;
|Secure Copy. Copying to or from a remote system using part of SSH&lt;br /&gt;
|-&lt;br /&gt;
|path&lt;br /&gt;
|The list of directories which are searched when you type the name of a program. There is a section below on this&lt;br /&gt;
|-&lt;br /&gt;
|ownership&lt;br /&gt;
|Every file and directory has a user and a group attached to it, called its owners. These affect permissions.&lt;br /&gt;
|-&lt;br /&gt;
|permissions&lt;br /&gt;
|The ability to read, write, and/or execute a file. Permissions are based on ownership&lt;br /&gt;
|-&lt;br /&gt;
|switches&lt;br /&gt;
|Modifiers to a command-line program, usually in the form of -(letter) or --(word). Several examples are given below, such as the '-a' on the 'ls' command&lt;br /&gt;
|-&lt;br /&gt;
|pipes and redirects&lt;br /&gt;
|Changes the input (often called 'stdin') and/or output (often called stdout) to a program or a file&lt;br /&gt;
|}&lt;br /&gt;
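The 'pipes and redirects' entry can be made concrete with a couple of one-liners (the file names are made up):

```shell
# redirect: capture a command's output (stdout) into a file with >
printf 'alpha\nbeta\ngamma\n' > words.txt

# pipe: feed one command's output into another command's input with |
grep 'ma' words.txt | sort > matches.txt

# append to a file instead of overwriting it with >>
echo 'delta' >> words.txt
```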
&lt;br /&gt;
=== Linux Command Line Cheat Sheet ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+File System Navigation&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|pwd&lt;br /&gt;
|&amp;quot;Print working directory&amp;quot;, Where am I now?&lt;br /&gt;
|&amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;/homes/mozes&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls&lt;br /&gt;
|Lists files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -lh&lt;br /&gt;
|Lists files and folders with perms size and ownership&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -lh ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;-rw-r--r--  1 mozes    mozes_users   1    Jul 13  2011 NewFile&lt;br /&gt;
drwxr-xr-x  9 mozes    mozes_users   9.0K Apr 12  2010 NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -a&lt;br /&gt;
|Lists all files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -a ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;. .. .bashrc .bash_profile .tcshrc NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cd&lt;br /&gt;
|Changes directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ..&lt;br /&gt;
|Changes to parent directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ..&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd -&lt;br /&gt;
|Changes to the previous directory you were in&lt;br /&gt;
|&amp;lt;code&amp;gt;cd -&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ~&lt;br /&gt;
|Changes to your home directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ~&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Working with files&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|file&lt;br /&gt;
|Identifies the type of object a file is&lt;br /&gt;
|&amp;lt;code&amp;gt;file NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile: a /usr/bin/python script, ASCII text executable&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cat&lt;br /&gt;
|Prints the contents of one or more files&lt;br /&gt;
|&amp;lt;code&amp;gt;cat NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;This is line one&lt;br /&gt;
This is line two&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp&lt;br /&gt;
|copy a file&lt;br /&gt;
|&amp;lt;code&amp;gt;cp OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp -i&lt;br /&gt;
|copy a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp -r&lt;br /&gt;
|copy a directory, including contents&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -r OldFolder NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv&lt;br /&gt;
|move, or rename, a file&lt;br /&gt;
|&amp;lt;code&amp;gt;mv OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv -i&lt;br /&gt;
|move, or rename, a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;mv -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm&lt;br /&gt;
|remove a file&lt;br /&gt;
|&amp;lt;code&amp;gt;rm NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rm -i&lt;br /&gt;
|remove a file, ask to be sure (useful with -r)&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -i NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;remove NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm -r&lt;br /&gt;
|remove a directory and its contents&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -r NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mkdir&lt;br /&gt;
|creates a directory&lt;br /&gt;
|&amp;lt;code&amp;gt;mkdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rmdir&lt;br /&gt;
|removes an empty directory&lt;br /&gt;
|&amp;lt;code&amp;gt;rmdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|touch&lt;br /&gt;
|creates an empty file&lt;br /&gt;
|&amp;lt;code&amp;gt;touch TempFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Finding files and directories with [http://linux.die.net/man/1/find find]&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt;&lt;br /&gt;
| finds all files and folders within &amp;lt;directory&amp;gt;&lt;br /&gt;
| find ~/&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '&amp;lt;filename&amp;gt;'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that match &amp;lt;filename&amp;gt;&lt;br /&gt;
| find ~/ -iname 'hello.qsub'&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '*&amp;lt;partialmatch&amp;gt;*'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that partially match &amp;lt;partialmatch&amp;gt;&lt;br /&gt;
| find ~/ -iname '*.qsub*'&lt;br /&gt;
|}&lt;br /&gt;
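The patterns above compose nicely with other switches, such as -type. A small sketch (the demo directory and filenames are invented for illustration):&lt;br /&gt;

```shell
# Build a tiny tree, then search it by name, case-insensitively.
mkdir -p demo/sub
touch demo/hello.qsub demo/sub/run.qsub demo/notes.txt

find demo -iname '*.qsub'   # finds both .qsub files, at any depth
find demo -type d           # -type d limits results to directories
find demo -type f           # -type f limits results to regular files

rm -r demo                  # clean up
```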
&lt;br /&gt;
Other useful commands include &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt; followed by one of the command names above will give you the manual page for that command, full of many other useful options. &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt; gives you an overview of the processes currently running on the host you are connected to. &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt; lets you page through files and see their contents using &amp;lt;PgUp&amp;gt; and &amp;lt;PgDn&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Editing Text Files ===&lt;br /&gt;
If you're new to Linux, the editor you will probably want to use is 'nano'. It works much the same as 'Notepad' in Windows or 'textedit' on OS-X. Note that you cannot use your mouse to change position within the document as you can with your local computer. You must use the arrow keys, instead.&lt;br /&gt;
&lt;br /&gt;
So, if I wanted to edit my .bashrc (as shown below), and I was already in my home directory (see above), I would type&lt;br /&gt;
 nano .bashrc&lt;br /&gt;
&lt;br /&gt;
While in nano, there is a list of actions you can take at the bottom of the screen. &amp;lt;Ctrl&amp;gt; is represented by a caret (`^`), so to exit (labeled as `^X` at the bottom of the screen), I would type &amp;lt;Ctrl&amp;gt;-x. This prompts you whether you want to save and exit (Y), discard changes and exit (N), or cancel and go back to editing (&amp;lt;Ctrl&amp;gt;-c).&lt;br /&gt;
&lt;br /&gt;
If you do a significant amount of text editing in Linux, you'll probably want to switch to a more powerful editor, such as vim. The usage of vim is beyond the scope of this document. It is not at all intuitive to the beginning user, but with a little practice it becomes a much faster way of editing text files. If you're interested in using vim, [http://www.openvim.com/tutorial.html there is a nice tutorial here].&lt;br /&gt;
&lt;br /&gt;
=== Shells ===&lt;br /&gt;
==== What is a Shell? ====&lt;br /&gt;
In this case, I don't believe I can do a better job explaining shells than [[wikipedia:Shell_(computing)|this]].&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
Elsewhere at Kansas State University, the default shell is set to tcsh. tcsh stands for &amp;quot;TENEX C Shell.&amp;quot; It is considered a replacement for csh and shares many of the same features. If you have experience with either csh or tcsh you'll probably feel right at home. This was the default shell until July of 2013. If you had an account before then, your shell is probably still tcsh.&lt;br /&gt;
&lt;br /&gt;
But what if you don't want or like tcsh? What can you do? Well, we have other shells available on Beocat as well.&lt;br /&gt;
==== bash ====&lt;br /&gt;
[http://www.gnu.org/software/bash/ Bash] seems to be the defacto standard shell in most Linux installs today. Bash is common and probably what most of you are used to. As of July 2013, bash is our new default shell. All new users will be set to bash initially. [https://software-carpentry.org/ Software Carpentry] teaches classes on several subjects specifically targeting researchers, including the bash shell. Their documentation is all freely available. [http://swcarpentry.github.io/shell-novice/ Here is a link to their excellent tutorial on using BASH.] Most of our documentation assumes you are using BASH.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;bash configuration files:&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This section gets into some minutiae with the way our job scheduler interacts with bash. If you're trying to solve a problem, read on, otherwise you can probably skip this section.&lt;br /&gt;
&lt;br /&gt;
Bash has 3 user-configurable configuration files: &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;~/.bash_logout&amp;lt;/code&amp;gt;. We'll look at the two more relevant ones, &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Bash has 3 ways of looking at things, '''login''', '''interactive''', or '''none'''.&lt;br /&gt;
&lt;br /&gt;
Normally what happens is that shells that are '''login''' read &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, and shells that are '''interactive''' read &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. '''none''' shells read neither.&lt;br /&gt;
&lt;br /&gt;
sbatch jobs are '''login''', srun jobs are '''login+interactive''', logging into Beocat in a way that you can enter commands is '''login+interactive'''. There are very few cases that you will get '''none'''. For any session that isn't '''interactive''', your sourced files cannot output anything to the screen, or else it can break scp or sftp file transfers.&lt;br /&gt;
&lt;br /&gt;
If they are ''quiet'' statements, and you want them in all shells, you can put them in your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. If they are not ''quiet'' or they output ''anything'' to the screen, you must put them in your &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
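As a sketch of how that split can look in practice (the settings are placeholders, and the files are written to a scratch directory here purely so the example is self-contained):&lt;br /&gt;

```shell
# What belongs in ~/.bashrc: quiet settings only, nothing that prints.
mkdir -p rc_demo
printf 'export EDITOR=nano\n' > rc_demo/bashrc

# What belongs in ~/.bash_profile: pull in the quiet settings, then
# anything that writes to the screen (greetings, reminders, etc.).
printf '. ./rc_demo/bashrc\necho "Welcome back"\n' > rc_demo/bash_profile

. ./rc_demo/bash_profile   # simulate a login shell reading the profile
rm -r rc_demo
```

Keeping the noisy lines out of the quiet file is what protects scp and sftp transfers from breaking.&lt;br /&gt;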
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
[http://zsh.sourceforge.net/ zsh] is an alternative to bash and tcsh. It tends to support more complex features than either of the other two while using a syntax remarkably similar to bash. Unless specifically noted, when we specify '''Change your shell to bash''', &amp;lt;tt&amp;gt;zsh&amp;lt;/tt&amp;gt; should work as well.&lt;br /&gt;
&lt;br /&gt;
==== Changing Shells ====&lt;br /&gt;
Previously, we gave you the option of using a &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; file to modify your shell. This is no longer supported; if you have issues with your shell/paths/environment variables, we will ask you to delete your &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; file and change your shell via the method below.&lt;br /&gt;
&lt;br /&gt;
You can change your shell via &amp;lt;code&amp;gt;chsh&amp;lt;/code&amp;gt; on either of the headnodes (eos/selene). You do not need to re-do this if you have already changed to your preferred shell in the past.&lt;br /&gt;
&lt;br /&gt;
Use the appropriate of the following three lines:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
/usr/local/bin/chsh -s bash &amp;amp;&amp;amp; bash -l&lt;br /&gt;
/usr/local/bin/chsh -s tcsh &amp;amp;&amp;amp; tcsh -l&lt;br /&gt;
/usr/local/bin/chsh -s zsh &amp;amp;&amp;amp; zsh -l&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Changing your PATH ===&lt;br /&gt;
Typically, you don't have to change your PATH, but it is useful to know what your PATH is and what it does. The PATH is the list of directories which are searched when you type the name of a program. Note that by default the current directory is NOT included in the PATH, so if you wanted to run a program called MyProgram in the current directory, you could NOT simply type 'MyProgram'; you would instead type &amp;lt;code&amp;gt;'./MyProgram'&amp;lt;/code&amp;gt; (where the '.' represents the current directory).&lt;br /&gt;
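A minimal sketch of the search behavior (MyProgram here is a throwaway script created just for the demonstration):&lt;br /&gt;

```shell
# PATH is a colon-separated list of directories, searched left to right.
echo "$PATH"

# A script in the current directory is not found by its bare name...
printf '#!/bin/sh\necho hello\n' > MyProgram
chmod u+x MyProgram
./MyProgram          # works: '.' names the current directory explicitly

# ...unless its directory is added to PATH for this session:
PATH="$PWD:$PATH"
MyProgram            # now found via the search path
rm MyProgram
```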
&lt;br /&gt;
To find your PATH, we need to identify which shell you are using. If you do not know, run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
ps | awk '/sh/ {print $4}'&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
You'll need to edit a file in your home directory called .tcshrc, replacing /usr/local/bin with the directory that you want added to your PATH using a text editor as shown above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
setenv PATH /usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== bash ====&lt;br /&gt;
You'll need to edit a file in your home directory called .bashrc, replacing /usr/local/bin with the directory that you want added to your PATH using a text editor as shown above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=/usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
You'll need to edit a file in your home directory called .zshrc, replacing /usr/local/bin with the directory that you want added to your PATH using a text editor as shown above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=&amp;quot;/usr/local/bin:$PATH&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Ownership and Permissions ===&lt;br /&gt;
Every file and directory has a user and group associated with it. You can view ownership information by using the '-l' switch on ls. By default on Beocat, files you create have a user ownership of your username (i.e., your eID) and a group ownership of your username_users. So, if I were logged in as 'myusername' and I had a single file in my home directory called MyProgram, the result of typing 'ls -l' would be something like this:&lt;br /&gt;
 total 0&lt;br /&gt;
 -rwxr-x--- 1 myusername myusername_users 79 May 31  2011 MyProgram&lt;br /&gt;
This tells us several things.&lt;br /&gt;
* The first column ('-rwxr-x---') is permissions (covered below)&lt;br /&gt;
* The second column ('1') is the number of links to this file. You can safely ignore this (unless you're both masochistic and interested in filesystem details)&lt;br /&gt;
* The third column ('myusername') shows the user ownership&lt;br /&gt;
* The fourth column ('myusername_users') shows the group ownership&lt;br /&gt;
* The fifth column ('79') gives the size of the file in bytes&lt;br /&gt;
* The next columns ('May 31  2011'), as you have probably guessed, gives the date the file was last changed&lt;br /&gt;
* The final column ('MyProgram') is the name of the file&lt;br /&gt;
&lt;br /&gt;
So why is this interesting to us? Because whenever things ''don't'' work, it's usually because of file ownership or permissions. Looking at these often gives us some useful diagnostic information.&lt;br /&gt;
&lt;br /&gt;
The permissions field shows us who has permissions to do what with this file. It is always 10 characters. The first character (-) is usually either a '-' for a regular file or a 'd' for a directory. The next 9 characters are broken into three groups of three, with each group showing read (r), write (w), and execute (x) permissions for the owner, group, and world, in that order.&lt;br /&gt;
* The first group (rwx) shows permissions for the owner (myusername). The owner here has read, write, and execute permissions&lt;br /&gt;
* The next group (r-x) shows permissions for the group (myusername_users). The group here has read and execute permissions, but cannot write.&lt;br /&gt;
* The last group (---) shows permissions for the rest of the world. The world has no permissions to read, write, or execute.&lt;br /&gt;
&lt;br /&gt;
When you create a shell script with a text editor, and sometimes when you copy programs to Beocat via SCP, the execute flag is not set. The permissions string may look more like (-rw-r--r--). To change this, you need to give yourself permission to execute this program. This is done with the 'chmod' (change mode) command. 'chmod' can have a long and confusing syntax, but since by far the most common problem is to give yourself execute permissions, here is the command to change that:&lt;br /&gt;
 chmod u+x MyProgram&lt;br /&gt;
This changes the permissions so that the user ('u', i.e., the owner) adds ('+') execute permission ('x').&lt;br /&gt;
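A quick sketch of the fix in action (the script name and contents are illustrative):&lt;br /&gt;

```shell
# A freshly written script usually lacks the execute bit.
printf '#!/bin/sh\necho it works\n' > MyProgram
ls -l MyProgram        # permissions look something like -rw-r--r--

chmod u+x MyProgram    # u+x: the user/owner (u) adds (+) execute (x)
ls -l MyProgram        # the owner's triplet now includes x

./MyProgram            # the script can now be run directly
rm MyProgram
```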
&lt;br /&gt;
For more complex ownership or permissions changes, please feel free to contact the Beocat staff.&lt;br /&gt;
&lt;br /&gt;
=== Access Control Lists ===&lt;br /&gt;
Access Control Lists build on our knowledge and use of basic Linux permissions, so we'll cover those again:&lt;br /&gt;
&lt;br /&gt;
Linux permissions are typically broken down to ('''r''')ead, ('''w''')rite, and e('''x''')ecute split across 3 classes of accessors.&lt;br /&gt;
&lt;br /&gt;
'''Files'''&lt;br /&gt;
; read&lt;br /&gt;
: Read the file; pretty straightforward&lt;br /&gt;
; write&lt;br /&gt;
: Write to the file, including overwrite, truncation, etc.&lt;br /&gt;
; execute&lt;br /&gt;
: Execute the file; this permission allows you to run it.&lt;br /&gt;
&lt;br /&gt;
'''Folders'''&lt;br /&gt;
; read&lt;br /&gt;
: List the directory, (ls)&lt;br /&gt;
; write&lt;br /&gt;
: Create new files and folders in the directory.&lt;br /&gt;
; execute&lt;br /&gt;
: Pass through the directory (cd into and through).&lt;br /&gt;
&lt;br /&gt;
Those accessors are ('''u''')ser, ('''g''')roup, and ('''o''')ther.&lt;br /&gt;
; user&lt;br /&gt;
: The user would typically be the user account that created the file or folder&lt;br /&gt;
; group&lt;br /&gt;
: The group would be that account's primary group by default, or it can be changed by the user to any group that they are a member of&lt;br /&gt;
; other&lt;br /&gt;
: Other is special: it matches anything that doesn't meet either of the other two criteria. We typically refer to these as world permissions, as they apply to ''everyone'' else.&lt;br /&gt;
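The folder semantics above can be seen directly. A small sketch (note that the root user bypasses these checks, so try it as a regular user):&lt;br /&gt;

```shell
# 'x' on a directory allows traversal even when 'r' (listing) is absent.
mkdir -p outer/inner
chmod 311 outer     # owner keeps write+execute but loses read
cd outer/inner      # still allowed: passing through needs only 'x'
cd ../..            # 'ls outer' would fail for the owner, though
chmod 700 outer     # restore read so we can clean up
rm -r outer
```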
&lt;br /&gt;
Unfortunately, it is that &amp;quot;Other&amp;quot; permission that is a frequent problem. You may want to share some data with a colleague, but, from a security standpoint, you also may need to make sure that only that colleague has access to the data. If you aren't in the same group as the colleague, then, under standard Linux permissions, you have no other option except making the file &amp;quot;world&amp;quot; accessible.&lt;br /&gt;
&lt;br /&gt;
This is where &amp;lt;abbr title=&amp;quot;Access Control Lists&amp;quot;&amp;gt;ACLs&amp;lt;/abbr&amp;gt; come into play. ACLs are like the standard Linux permissions, except you can apply many of them, and you can allow individual users and groups to access alongside your own.&lt;br /&gt;
&lt;br /&gt;
ACLs can also do things that standard Linux permissions can't, like setting up &amp;quot;default&amp;quot; permissions for newly created files/folders within a directory.&lt;br /&gt;
&lt;br /&gt;
One big thing to be aware of for any permissions scheme is that permissions are checked at every level in a directory hierarchy.&lt;br /&gt;
&lt;br /&gt;
# /&lt;br /&gt;
# /homes&lt;br /&gt;
# /homes/$USER&lt;br /&gt;
# /homes/$USER/$SHARE&lt;br /&gt;
&lt;br /&gt;
If at any point the accessing user is denied permission, the traversal and access attempt will stop.&lt;br /&gt;
&lt;br /&gt;
==== Example 1 ====&lt;br /&gt;
Let's say I have a file in a directory that I want the user billy to be able to read. This file is &amp;lt;tt&amp;gt;/homes/mozes/example/input.file&amp;lt;/tt&amp;gt;. We'll look at the current permissions of the directory tree like so:&lt;br /&gt;
&lt;br /&gt;
We'll assume everyone has requisite permissions for &amp;lt;tt&amp;gt;/&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;/homes&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
First we'll check my home directory&lt;br /&gt;
 $ getfacl -e /homes/mozes&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x                      #effective:r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 mask::r-x&lt;br /&gt;
 other::---&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
 default:other::---&lt;br /&gt;
&lt;br /&gt;
If we make it past that permissions check, we'd go one level deeper.&lt;br /&gt;
 $ getfacl -e /homes/mozes/example&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 other::r-x&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
&lt;br /&gt;
Finally we'd check if we had permission to access the file itself:&lt;br /&gt;
 $ getfacl -e /homes/mozes/example/input.file&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rw-&lt;br /&gt;
 group::r--&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 other::r--&lt;br /&gt;
&lt;br /&gt;
There is quite a lot of information contained in the above output, so let's look at and attempt to understand the contents.&lt;br /&gt;
&lt;br /&gt;
First, in each section, we see the POSIX owner and group as comments prefixed by '#' characters. These are what the respective user:: and group:: lines refer to when viewing the permissions.&lt;br /&gt;
&lt;br /&gt;
Second, we have lines related to the permissions of each accessor. These do what they say: they show the permissions that an accessor would be granted. Note there is a catch here: the most specific permission wins. This can come into play when granting a certain group access and then granting a specific member of that group a differing level of access.&lt;br /&gt;
&lt;br /&gt;
Third, many of the lines are prefixed with default: and then a permission set. Default permissions are interesting: they can only be set on directories, and they define the starting set of ACLs applied to new files or folders created within that directory.&lt;br /&gt;
&lt;br /&gt;
Finally, there is a mask. We won't cover it here, because there are very few cases where people need to use it.&lt;br /&gt;
&lt;br /&gt;
Back to the task at hand: we want billy to be able to read &amp;lt;tt&amp;gt;/homes/mozes/example/input.file&amp;lt;/tt&amp;gt;. Checking &amp;lt;tt&amp;gt;/homes/mozes&amp;lt;/tt&amp;gt;, we see that 'other' has no permissions, and billy has not been granted any special access.&lt;br /&gt;
&lt;br /&gt;
So we grant billy access &amp;quot;through&amp;quot; &amp;lt;tt&amp;gt;/homes/mozes&amp;lt;/tt&amp;gt;; the smallest set of permissions that does this would be:&lt;br /&gt;
 $ setfacl -m u:billy:x /homes/mozes&lt;br /&gt;
&lt;br /&gt;
Note: since I didn't give billy read access to my home directory, they wouldn't be able to &amp;lt;tt&amp;gt;ls /homes/mozes&amp;lt;/tt&amp;gt;, but they can still cd into it and through it.&lt;br /&gt;
&lt;br /&gt;
Then we check the rest of the permissions, &amp;lt;tt&amp;gt;/homes/mozes/example&amp;lt;/tt&amp;gt; has an 'other' permission granting (r)ead and e(x)ecute, so that shouldn't be an issue. &amp;lt;tt&amp;gt;/homes/mozes/example/input.file&amp;lt;/tt&amp;gt; allows 'other' to read it, so our job is done. Billy has access to read my file.&lt;br /&gt;
&lt;br /&gt;
If we decide later that billy needs to write to my file, we can grant them specific read/write permissions to just that file with:&lt;br /&gt;
 $ setfacl -m u:billy:rw /homes/mozes/example/input.file&lt;br /&gt;
&lt;br /&gt;
==== Example 2 ====&lt;br /&gt;
That's all well and good, but let's say we want all of my grad students to have read/write access to my example directory.&lt;br /&gt;
&lt;br /&gt;
Looking at the acls that have been set so far:&lt;br /&gt;
 $ getfacl -e /homes/mozes&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 user:billy:--x&lt;br /&gt;
 group::r-x                      #effective:r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 mask::r-x&lt;br /&gt;
 other::---&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
 default:other::---&lt;br /&gt;
&lt;br /&gt;
 $ getfacl -e /homes/mozes/example&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 other::r-x&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
&lt;br /&gt;
We now want to grant my group of grad students the correct permissions to &amp;lt;tt&amp;gt;/homes/mozes/example&amp;lt;/tt&amp;gt;&lt;br /&gt;
 $ setfacl -R -m g:my_grad_students:rw -m d:g:my_grad_students:rw -m d:u:mozes:rw /homes/mozes/example&lt;br /&gt;
&lt;br /&gt;
There are a few things to note there:&lt;br /&gt;
* We're setting multiple acls at once (note the multiple -m arguments)&lt;br /&gt;
* We're setting those permissions recursively (on all files/folders nested anywhere in that directory hierarchy). The &amp;lt;tt&amp;gt;-R&amp;lt;/tt&amp;gt; option does this.&lt;br /&gt;
* We're setting some default permissions. Default permissions are prefixed with d:. Here we're saying that the (g)roup my_grad_students should be granted read/write permissions; we also set a default permission for ourselves. d:u:mozes:rw grants me read/write access to those files as if I were the owner. This is nice in the event that you're not a member of the my_grad_students group: it makes sure that you still retain a reasonable baseline of access.&lt;br /&gt;
&lt;br /&gt;
That all looks good, right? Except my grad students are complaining that they can't access &amp;lt;tt&amp;gt;/homes/mozes/example&amp;lt;/tt&amp;gt;. What did we forget?&lt;br /&gt;
&lt;br /&gt;
Permissions are checked at every level of the directory hierarchy, and we forgot to grant my_grad_students access through my home directory.&lt;br /&gt;
 $ setfacl -m g:my_grad_students:x /homes/mozes&lt;br /&gt;
&lt;br /&gt;
=== Manual (man) pages ===&lt;br /&gt;
Most commands have a complex set of switches that will modify the amount or type of information they display. To find out what switches are available, or how a program expects data, you can use the manual pages by typing &amp;quot;man ''command''&amp;quot;. Using one of the most common Linux commands, take a look at the output of 'man ls'. It shows that it has over 50 switches available, ranging from which files to include, to how to display file sizes, to sort order and more. (I'm not pasting it here, because it's over 200 lines long!) To navigate a 'manpage', use the up-arrow and down-arrow keys. Press 'q' to quit.&lt;br /&gt;
&lt;br /&gt;
=== Pipes and Redirects ===&lt;br /&gt;
Typically a Linux program takes data from the keyboard and outputs data to the screen. In Unix and Linux terminology, the keyboard is the default 'stdin' (pronounced &amp;quot;standard in&amp;quot;) and the screen is the default 'stdout' (pronounced &amp;quot;standard out&amp;quot;). Many times, we want to take data from somewhere else (like a file, or the output of another program) and send it to yet another location. These redirectors are:&lt;br /&gt;
{|&lt;br /&gt;
|cmd &amp;gt; filename&lt;br /&gt;
|Redirect output from cmd to filename ||&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;gt;&amp;gt; filename&lt;br /&gt;
|Redirect output from cmd and append to filename&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;lt; filename&lt;br /&gt;
|Redirect input to cmd from filename&lt;br /&gt;
|-&lt;br /&gt;
| cmd1 &amp;amp;#124; cmd2&lt;br /&gt;
| Use the output from cmd1 as the input to cmd2&lt;br /&gt;
|}&lt;br /&gt;
Here is a quick example. Let's say I have thousands of files in a directory, and I want a list of those that end in '.sh'.&lt;br /&gt;
'ls' by itself scrolls so far I can't see even a fraction of them. So, I redirect the output to a file&lt;br /&gt;
 ls &amp;gt; ~/filelist.txt&lt;br /&gt;
That gives me all the files in the current folder and saves them in my home directory in 'filelist.txt'.&lt;br /&gt;
A quick look through the file in my favorite editor tells me this is still going to take too long, so I need another step. The 'grep' program is a commonly-used program to perform pattern matching. The syntax of 'grep' is beyond the scope of this document, but take my word for it that&lt;br /&gt;
 grep '\.sh$'&lt;br /&gt;
will return all lines that end in .sh.&lt;br /&gt;
&lt;br /&gt;
We can now redirect the input of grep to come from the file we just created:&lt;br /&gt;
 grep '\.sh$' &amp;lt; ~/filelist.txt&lt;br /&gt;
Great! We now have our list. However, we wanted to save this as filelist.txt, and instead we have another list that we have to copy-and-paste. Instead of redirecting to a file, we'll use the vertical bar '|' (which we often term a &amp;quot;pipe&amp;quot;) to send the output of one command to another.&lt;br /&gt;
 ls | grep '\.sh$' &amp;gt; ~/filelist.txt&lt;br /&gt;
This time the output of 'ls' is ''not'' redirected to a file, but is redirected to the next command (grep).  The output of grep (which is all our .sh files) instead of being sent to the screen is redirected to the file ~/filelist.txt.&lt;br /&gt;
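The same idea as a self-contained sketch you can paste into a scratch directory (the filenames are invented):&lt;br /&gt;

```shell
# Make a few files, then combine a pipe with a redirect.
mkdir pipedemo
cd pipedemo
touch one.sh two.sh notes.txt

ls | grep '\.sh$' > filelist.txt   # ls feeds grep; grep's output goes to the file
cat filelist.txt                   # contains one.sh and two.sh

ls | grep '\.sh$' | wc -l          # or chain another pipe to count the matches

cd ..
rm -r pipedemo                     # clean up
```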
&lt;br /&gt;
This example is a very simple demonstration of how pipes and redirects work. Many more examples with complex structures can be found at http://www.ibm.com/developerworks/linux/library/l-lpic1-v3-103-4/index.html&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=929</id>
		<title>LinuxBasics</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=929"/>
		<updated>2023-06-05T20:56:11Z</updated>

		<summary type="html">&lt;p&gt;Mozes: /* Access Control Lists */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Disclaimer:''' This is a ''very'' large topic, and much too broad to be covered on a single support page. There are many other sites (yes, entire sites) which cover the topic in more detail. We'll link to some of them below. This page is meant to be just the essentials.&lt;br /&gt;
&lt;br /&gt;
== Logging in for the first time ==&lt;br /&gt;
To login to Beocat, you first need an &amp;quot;SSH Client&amp;quot;. [[wikipedia:Secure_Shell|SSH]] (short for &amp;quot;secure shell&amp;quot;) is a protocol that allows secure communication between two computers. We recommend the following.&lt;br /&gt;
* Windows&lt;br /&gt;
** [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY] is by far the most common SSH client, both for Beocat and in the world.&lt;br /&gt;
** [http://mobaxterm.mobatek.net/ MobaXterm] is a fairly new client with some nice features, such as being able to SCP/SFTP (see below), and running X (which isn't terribly useful on Beocat, but might be if you connect to other Linux hosts).&lt;br /&gt;
** [http://www.cygwin.com/ Cygwin] is for those that would rather be running Linux but are stuck on Windows. It's purely a text interface.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** OS-X has a built-in SSH client in an application called &amp;quot;Terminal&amp;quot;. It's not great, but it will work for most Beocat users.&lt;br /&gt;
** [http://www.iterm2.com/#/section/home iTerm2] is the terminal application we prefer.&lt;br /&gt;
* Others&lt;br /&gt;
** There are [[wikipedia:Comparison_of_SSH_clients|many SSH clients]] for many different platforms available. While we don't have experience with many of these, any should be sufficient for access to Beocat.&lt;br /&gt;
&lt;br /&gt;
You'll need to connect your client (via the SSH protocol, if your client allows multiple protocols) to headnode.beocat.ksu.edu.&lt;br /&gt;
&lt;br /&gt;
For command-line tools, the command to connect is&lt;br /&gt;
 ssh ''username''@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
Your username is your [http://eid.ksu.edu K-State eID] name and the password is your eID password.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' When you type your password, nothing shows up on the screen, not even asterisks.&lt;br /&gt;
&lt;br /&gt;
You'll know you are successfully logged in when you see a prompt that says&lt;br /&gt;
 [''username''@''machinename'' ~]$&lt;br /&gt;
where ''machinename'' is the name of the machine you've logged into (currently either 'eos' or 'selene') and ''username'' is your eID username&lt;br /&gt;
&lt;br /&gt;
== Transferring files (SCP or SFTP) ==&lt;br /&gt;
Usually, one of the first things people want to do is to transfer files into or out of Beocat. To do so, you need to use [[wikipedia:Secure_copy|SCP]] (secure copy) or [[wikipedia:SSH_File_Transfer_Protocol|SFTP]] (SSH FTP or Secure FTP). Again, there are multiple programs that do this.&lt;br /&gt;
* Windows&lt;br /&gt;
** Putty (see above) has PSCP and PSFTP programs (both are included if you run the installer). It is a command-line interface (CLI) rather than a graphical user interface (GUI).&lt;br /&gt;
** MobaXterm (see above) has a built-in GUI SFTP client that automatically changes the directories as you change them in your SSH session.&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] (client) has an easy-to-use GUI. Be sure to use 'SFTP' mode rather than 'FTP' mode.&lt;br /&gt;
** [http://winscp.net/eng/index.php WinSCP] is another easy-to-use GUI.&lt;br /&gt;
** Cygwin (see above) has CLI scp and sftp programs.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] is also available for OS-X.&lt;br /&gt;
** Within terminal or iTerm, you can use the 'scp' or 'sftp' programs.&lt;br /&gt;
* Linux&lt;br /&gt;
** FileZilla also has a GUI linux version, in addition to the CLI tools.&lt;br /&gt;
&lt;br /&gt;
=== Using a Command-Line Interface (CLI) ===&lt;br /&gt;
You can safely ignore this section if you're using a graphical interface (GUI). We highly recommend using a GUI when first learning how to use Beocat.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;First test case&amp;lt;/u&amp;gt;: transfer a file called myfile.txt in your current folder to your home directory on Beocat. For these examples, I use bold text to show what you type and plain text to show Beocat's response.&lt;br /&gt;
&lt;br /&gt;
Using SCP:&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Note the colon at the end of the 'scp' line.&lt;br /&gt;
&lt;br /&gt;
Using SFTP&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/kylehutson/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
SFTP is interactive, so this is a two-step process. First, you connect to Beocat, then you transfer the file. As long as the system gives the &amp;lt;code&amp;gt;sftp&amp;gt; &amp;lt;/code&amp;gt; prompt, you are in the sftp program, and you will remain there until you type 'exit'.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Second test case:&amp;lt;/u&amp;gt; transfer a file called myfile.txt in your current folder to a directory named 'mydirectory' under your home directory on Beocat.&lt;br /&gt;
&lt;br /&gt;
Here we run into one of the problems with scp - there is no easy way of creating 'mydirectory' if it doesn't already exist. In that case, you must log in via ssh (as seen above) and create the directory using the 'mkdir' command (see Basic Linux Commands below).&lt;br /&gt;
&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 &lt;br /&gt;
An alternative version: if the colon is immediately followed by a slash, the directory name is taken from the root of the filesystem, rather than from your home directory. So, given that your home directory on Beocat is /homes/''username'', we could instead type&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:/homes/''username''/mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''mkdir mydirectory'''&lt;br /&gt;
 [Note, if this directory already exists, you will get the response &amp;quot;Couldn't create directory: Failure&amp;quot;]&lt;br /&gt;
 sftp&amp;gt; '''cd mydirectory'''&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/''username''/mydirectory/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''quit'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Third test case:&amp;lt;/u&amp;gt; copy myfile.txt from your home directory on Beocat to your current folder.&lt;br /&gt;
&lt;br /&gt;
Using scp:&lt;br /&gt;
 '''scp ''username''@headnode.beocat.ksu.edu:myfile.txt .'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''get myfile.txt'''&lt;br /&gt;
 Fetching /homes/''username''/myfile.txt to myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
== Basic Linux Commands ==&lt;br /&gt;
Again, this guide is very limited, covering mostly directory navigation and basic file commands. [http://www.ee.surrey.ac.uk/Teaching/Unix/ Here] is a pretty decent tutorial if you want to dig deeper. If you want more, entire books have been written on the subject.&lt;br /&gt;
&lt;br /&gt;
=== The Lingo ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!''Term''&lt;br /&gt;
!''Definition''&lt;br /&gt;
|-&lt;br /&gt;
|Directory&lt;br /&gt;
|A &amp;quot;Folder&amp;quot; in Windows or OS-X terms. A location where files or other directories are stored. The current directory is sometimes represented as `.` and the parent directory can be referenced as `..`&lt;br /&gt;
|-&lt;br /&gt;
|Shell&lt;br /&gt;
|The interface or environment under which you can run commands. There is a section below on shells&lt;br /&gt;
|-&lt;br /&gt;
|SSH&lt;br /&gt;
|Secure Shell. A protocol that encrypts data and can give access to another system, usually by a username and password&lt;br /&gt;
|-&lt;br /&gt;
|SCP&lt;br /&gt;
|Secure Copy. Copying to or from a remote system using part of SSH&lt;br /&gt;
|-&lt;br /&gt;
|path&lt;br /&gt;
|The list of directories which are searched when you type the name of a program. There is a section below on this&lt;br /&gt;
|-&lt;br /&gt;
|ownership&lt;br /&gt;
|Every file and directory has a user and a group attached to it, called its owners. These affect permissions.&lt;br /&gt;
|-&lt;br /&gt;
|permissions&lt;br /&gt;
|The ability to read, write, and/or execute a file. Permissions are based on ownership&lt;br /&gt;
|-&lt;br /&gt;
|switches&lt;br /&gt;
|Modifiers to a command-line program, usually in the form of -(letter) or --(word). Several examples are given below, such as the '-a' on the 'ls' command&lt;br /&gt;
|-&lt;br /&gt;
|pipes and redirects&lt;br /&gt;
|Changes where a program's input (often called 'stdin') comes from and/or where its output (often called 'stdout') goes, such as a file or another program&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Linux Command Line Cheat Sheet ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+File System Navigation&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|pwd&lt;br /&gt;
|&amp;quot;Print working directory&amp;quot;, Where am I now?&lt;br /&gt;
|&amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;/homes/mozes&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls&lt;br /&gt;
|Lists files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -lh&lt;br /&gt;
|Lists files and folders with permissions, size, and ownership&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -lh ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;-rw-r--r--  1 mozes    mozes_users   1    Jul 13  2011 NewFile&lt;br /&gt;
drwxr-xr-x  9 mozes    mozes_users   9.0K Apr 12  2010 NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -a&lt;br /&gt;
|Lists all files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -a ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;. .. .bashrc .bash_profile .tcshrc NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cd&lt;br /&gt;
|Changes directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ..&lt;br /&gt;
|Changes to parent directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ..&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd -&lt;br /&gt;
|Changes to the previous directory you were in&lt;br /&gt;
|&amp;lt;code&amp;gt;cd -&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ~&lt;br /&gt;
|Changes to your home directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ~&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Working with files&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|file&lt;br /&gt;
|Identifies the type of object a file is&lt;br /&gt;
|&amp;lt;code&amp;gt;file NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile: a /usr/bin/python script, ASCII text executable&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cat&lt;br /&gt;
|Prints the contents of one or more files&lt;br /&gt;
|&amp;lt;code&amp;gt;cat NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;This is line one&lt;br /&gt;
This is line two&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp&lt;br /&gt;
|copy a file&lt;br /&gt;
|&amp;lt;code&amp;gt;cp OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp -i&lt;br /&gt;
|copy a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp -r&lt;br /&gt;
|copy a directory, including contents&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -r OldFolder NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv&lt;br /&gt;
|move, or rename, a file&lt;br /&gt;
|&amp;lt;code&amp;gt;mv OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv -i&lt;br /&gt;
|move, or rename, a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;mv -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm&lt;br /&gt;
|remove a file&lt;br /&gt;
|&amp;lt;code&amp;gt;rm NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rm -i&lt;br /&gt;
|remove a file, ask to be sure (useful with -r)&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -i NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;remove NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm -r&lt;br /&gt;
|remove a directory and its contents&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -r NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mkdir&lt;br /&gt;
|creates a directory&lt;br /&gt;
|&amp;lt;code&amp;gt;mkdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rmdir&lt;br /&gt;
|removes an empty directory&lt;br /&gt;
|&amp;lt;code&amp;gt;rmdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|touch&lt;br /&gt;
|creates an empty file&lt;br /&gt;
|&amp;lt;code&amp;gt;touch TempFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
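The file commands above can be chained into a short session. This is a minimal sketch run in a scratch directory; the names TempFolder, OldFile, and RenamedFile are just illustrative.&lt;br /&gt;

```shell
# Work inside a scratch directory so nothing important is touched
mkdir TempFolder
cd TempFolder

touch OldFile              # create an empty file
echo "line one" > OldFile  # put a line of text into it
cp OldFile NewFile         # copy it
mv NewFile RenamedFile     # rename the copy
cat RenamedFile            # prints: line one

rm OldFile RenamedFile     # remove both files
cd ..
rmdir TempFolder           # rmdir only works because TempFolder is now empty
```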
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Finding files and directories with [http://linux.die.net/man/1/find find]&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt;&lt;br /&gt;
| finds all files and folders within &amp;lt;directory&amp;gt;&lt;br /&gt;
| find ~/&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '&amp;lt;filename&amp;gt;'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that match &amp;lt;filename&amp;gt;&lt;br /&gt;
| find ~/ -iname 'hello.qsub'&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '*&amp;lt;partialmatch&amp;gt;*'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that partially match &amp;lt;partialmatch&amp;gt;&lt;br /&gt;
| find ~/ -iname '*.qsub*'&lt;br /&gt;
|}&lt;br /&gt;
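A quick way to try these patterns yourself; the finddemo directory and file names here are made up for illustration:&lt;br /&gt;

```shell
# Set up a small directory tree to search
mkdir -p finddemo/sub
touch finddemo/hello.qsub finddemo/sub/notes.txt

# Exact (case-insensitive) name match
find finddemo -iname 'hello.qsub'     # prints: finddemo/hello.qsub

# Partial match: any name containing .qsub
find finddemo -iname '*.qsub*'
```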
&lt;br /&gt;
Other useful commands include &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt; followed by a command name above will give you the manual page for the specified command, full of many other useful options for that command. &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt; will give you an overview of the processes currently running on the host you are connected to. &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt; allows you to page through files and see their contents using &amp;lt;PgUp&amp;gt; and &amp;lt;PgDn&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Editing Text Files ===&lt;br /&gt;
If you're new to Linux, the editor you will probably want to use is 'nano'. It works much the same as 'Notepad' in Windows or 'textedit' on OS-X. Note that you cannot use your mouse to change position within the document as you can with your local computer. You must use the arrow keys, instead.&lt;br /&gt;
&lt;br /&gt;
So, if I wanted to edit my .bashrc (as shown below), and I was already in my home directory (see above), I would type&lt;br /&gt;
 nano .bashrc&lt;br /&gt;
&lt;br /&gt;
While in nano, there is a list of actions you can take at the bottom of the screen. &amp;lt;Ctrl&amp;gt; is represented by a caret (`^`), so to exit (labeled `^X` at the bottom of the screen), I would type &amp;lt;Ctrl&amp;gt;-x. This action prompts you to choose whether to save and exit (Y), discard changes and exit (N), or cancel and go back to editing (&amp;lt;Ctrl&amp;gt;-c).&lt;br /&gt;
&lt;br /&gt;
If you do a significant amount of text editing in Linux, you'll probably want to switch to a more powerful editor, such as vim. The usage of vim is beyond the scope of this document. It is not at all intuitive to the beginning user, but with a little practice it becomes a much faster way of editing text files. If you're interested in using vim, [http://www.openvim.com/tutorial.html there is a nice tutorial here].&lt;br /&gt;
&lt;br /&gt;
=== Shells ===&lt;br /&gt;
==== What is a Shell? ====&lt;br /&gt;
In this case, I don't believe I can do a better job explaining shells than [[wikipedia:Shell_(computing)|this]].&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
Elsewhere at Kansas State University, the default Shell is set to tcsh. tcsh stands for &amp;quot;TENEX C SHell.&amp;quot; It is considered a replacement for csh and uses many of the same features. If you have experience with either csh or tcsh you'll probably feel right at home. This was the default shell until July of 2013. If you had an account before then, it is probably still tcsh.&lt;br /&gt;
&lt;br /&gt;
But what if you don't want or like tcsh? We have other shells available on Beocat as well.&lt;br /&gt;
==== bash ====&lt;br /&gt;
[http://www.gnu.org/software/bash/ Bash] seems to be the de facto standard shell in most Linux installs today. Bash is common and probably what most of you are used to. As of July 2013, bash is our new default shell. All new users will be set to bash initially. [https://software-carpentry.org/ Software Carpentry] teaches classes on several subjects specifically targeting researchers, including the bash shell. Their documentation is all freely available. [http://swcarpentry.github.io/shell-novice/ Here is a link to their excellent tutorial on using BASH.] Most of our documentation assumes you are using BASH.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;bash configuration files:&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This section gets into some minutiae with the way our job scheduler interacts with bash. If you're trying to solve a problem, read on, otherwise you can probably skip this section.&lt;br /&gt;
&lt;br /&gt;
Bash has three user-configurable configuration files: &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;~/.bash_logout&amp;lt;/code&amp;gt;. We'll look at the two more relevant ones: &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Bash has three ways of being invoked: '''login''', '''interactive''', or '''none''' (neither).&lt;br /&gt;
&lt;br /&gt;
Normally, shells that are '''login''' read &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, and shells that are '''interactive''' (but not login) read &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. '''none''' shells read neither, with one notable exception: non-interactive shells started by sshd (as used by scp and sftp) do read &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
sbatch jobs are '''login''', srun jobs are '''login+interactive''', and logging into Beocat in a way that lets you enter commands is '''login+interactive'''. There are very few cases in which you will get '''none'''. For any session that isn't '''interactive''', your sourced files must not output anything to the screen, or they can break scp or sftp file transfers.&lt;br /&gt;
&lt;br /&gt;
If your customizations are ''quiet'' statements, and you want them in all shells, you can put them in your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. If they are not ''quiet'', i.e. they output ''anything'' to the screen, you must put them in your &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
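As a concrete sketch of that split, here is roughly how the two files might be laid out. The specific settings (EDITOR, the alias, the welcome message) are placeholders for illustration, not required configuration.&lt;br /&gt;

```shell
# --- ~/.bashrc ---
# Read by interactive non-login shells (and by sshd-spawned shells,
# so it must stay quiet): only silent settings belong here.
export EDITOR=nano
alias ll='ls -lh'

# --- ~/.bash_profile ---
# Read by login shells; safe for anything that prints to the screen.
if [ -f ~/.bashrc ]; then . ~/.bashrc; fi   # pick up the quiet settings too
echo "Welcome back, $USER"
```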
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
[http://zsh.sourceforge.net/ zsh] is an alternative to bash and tcsh. It tends to support more complex features than either of the other two while using a syntax remarkably similar to bash. Unless specifically noted, when we specify '''Change your shell to bash''', &amp;lt;tt&amp;gt;zsh&amp;lt;/tt&amp;gt; should work as well.&lt;br /&gt;
&lt;br /&gt;
==== Changing Shells ====&lt;br /&gt;
Previously, we gave you the option of using a &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; file to modify your shell. This is no longer supported; if you have issues with your shell/paths/environment variables, we will ask you to delete your &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; file and change your shell via the method below.&lt;br /&gt;
&lt;br /&gt;
You can change your shell via &amp;lt;code&amp;gt;chsh&amp;lt;/code&amp;gt; on either of the headnodes (eos/selene). This does not need to be re-done if you've already changed it to your preferred shell in the past.&lt;br /&gt;
&lt;br /&gt;
Use the appropriate one of the following three lines:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
/usr/local/bin/chsh -s bash &amp;amp;&amp;amp; bash -l&lt;br /&gt;
/usr/local/bin/chsh -s tcsh &amp;amp;&amp;amp; tcsh -l&lt;br /&gt;
/usr/local/bin/chsh -s zsh &amp;amp;&amp;amp; zsh -l&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Changing your PATH ===&lt;br /&gt;
Typically, you don't have to change your PATH, but it is useful to know what your PATH is and what it does. The PATH is the list of directories which are searched when you type the name of a program. Note that by default the current directory is NOT included in the path, so if you wanted to run a program called MyProgram in the current directory, you could NOT simply type 'MyProgram'; you would instead type &amp;lt;code&amp;gt;./MyProgram&amp;lt;/code&amp;gt; (where the '.' represents the current directory).&lt;br /&gt;
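You can see this behavior for yourself. This is a throwaway demonstration run in any scratch directory; MyProgram is a made-up script, not an actual Beocat program.&lt;br /&gt;

```shell
# Create a trivial executable script in the current directory
printf '#!/bin/sh\necho hello from MyProgram\n' > MyProgram
chmod u+x MyProgram

echo "$PATH"      # the colon-separated list of directories searched for commands

./MyProgram       # works: '.' names the current directory explicitly

# Prepending the current directory's absolute path to PATH
# makes the bare name work too
PATH="$(pwd):$PATH"
MyProgram
```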
&lt;br /&gt;
To find your PATH, we need to identify which shell you are using. If you do not know, run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
ps | awk '/sh/ {print $4}'&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
You'll need to edit a file in your home directory called .tcshrc, replacing /usr/local/bin with the directory that you want added to your PATH using a text editor as shown above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
setenv PATH /usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== bash ====&lt;br /&gt;
You'll need to edit a file in your home directory called .bashrc, replacing /usr/local/bin with the directory that you want added to your PATH using a text editor as shown above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=/usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
You'll need to edit a file in your home directory called .zshrc, replacing /usr/local/bin with the directory that you want added to your PATH using a text editor as shown above.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=&amp;quot;/usr/local/bin:$PATH&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Ownership and Permissions ===&lt;br /&gt;
Every file and directory has a user and group associated with it. You can view ownership information by using the '-l' switch on ls. By default on Beocat, files you create have a user ownership of your username (i.e., your eID) and a group ownership of your username_users. So, if I were logged in as 'myusername' and I had a single file in my home directory called MyProgram, the result of typing 'ls -l' would be something like this:&lt;br /&gt;
 total 0&lt;br /&gt;
 -rwxr-x--- 1 myusername myusername_users 79 May 31  2011 MyProgram&lt;br /&gt;
This tells us several things.&lt;br /&gt;
* The first column ('-rwxr-x---') is permissions (covered below)&lt;br /&gt;
* The second column ('1') is the number of links to this file. You can safely ignore this (unless you're both masochistic and interested in filesystem details)&lt;br /&gt;
* The third column ('myusername') shows the user ownership&lt;br /&gt;
* The fourth column ('myusername_users') shows the group ownership&lt;br /&gt;
* The fifth column ('79') gives the size of the file in bytes&lt;br /&gt;
* The next columns ('May 31  2011'), as you have probably guessed, give the date the file was last changed&lt;br /&gt;
* The final column ('MyProgram') is the name of the file&lt;br /&gt;
&lt;br /&gt;
So why is this interesting to us? Because whenever things ''don't'' work, it's usually because of file ownership or permissions. Looking at these often gives us some useful diagnostic information.&lt;br /&gt;
&lt;br /&gt;
The permissions field shows us who has permissions to do what with this file. It is always 10 characters. The first character (-) is usually either a '-' for a regular file or a 'd' for a directory. The next 9 characters are broken into three groups of three, with each group showing read (r), write (w), and execute (x) permissions for the owner, group, and world, in that order.&lt;br /&gt;
* The first group (rwx) shows permissions for the owner (myusername). The owner here has read, write, and execute permissions&lt;br /&gt;
* The next group (r-x) shows permissions for the group (myusername_users). The group here has read and execute permissions, but cannot write.&lt;br /&gt;
* The last group (---) shows permissions for the rest of the world. The world has no permissions to read, write, or execute.&lt;br /&gt;
&lt;br /&gt;
When you create a shell script with a text editor, and sometimes when you copy programs to Beocat via SCP, the execute flag is not set. The permissions string may look more like (-rw-r--r--). To change this, you need to give yourself permission to execute this program. This is done with the 'chmod' (change mode) command. 'chmod' can have a long and confusing syntax, but since by far the most common problem is to give yourself execute permissions, here is the command to change that:&lt;br /&gt;
 chmod u+x MyProgram&lt;br /&gt;
This changes the permissions so that the user ('u', i.e., the owner) adds ('+') execute permission ('x').&lt;br /&gt;
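You can watch the permission string change as you do this. MyProgram here is just an empty placeholder file created for the demonstration; the exact starting permissions depend on your umask.&lt;br /&gt;

```shell
touch MyProgram          # new files typically start without execute permission
ls -l MyProgram          # first column looks like -rw-r--r-- (umask-dependent)

chmod u+x MyProgram      # user (owner) adds execute permission
ls -l MyProgram          # first column now starts with -rwx

# The shell's -x test confirms the file is executable by us
[ -x MyProgram ] && echo "MyProgram is executable"
```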
&lt;br /&gt;
For more complex ownership or permissions changes, please feel free to contact the Beocat staff.&lt;br /&gt;
&lt;br /&gt;
=== Access Control Lists ===&lt;br /&gt;
Access Control Lists build on our knowledge and use of basic Linux permissions, so we'll cover those again:&lt;br /&gt;
&lt;br /&gt;
Linux permissions are typically broken down to ('''r''')ead, ('''w''')rite, and e('''x''')ecute split across 3 classes of accessors.&lt;br /&gt;
&lt;br /&gt;
'''Files'''&lt;br /&gt;
; read&lt;br /&gt;
: Read the file, pretty straightforward&lt;br /&gt;
; write&lt;br /&gt;
: Write to the file, including overwrite, truncation, etc.&lt;br /&gt;
; execute&lt;br /&gt;
: Execute the file; this permission allows you to run the file.&lt;br /&gt;
&lt;br /&gt;
'''Folders'''&lt;br /&gt;
; read&lt;br /&gt;
: List the directory, (ls)&lt;br /&gt;
; write&lt;br /&gt;
: Create new files and folders in the directory.&lt;br /&gt;
; execute&lt;br /&gt;
: Pass through the directory (cd into and through).&lt;br /&gt;
&lt;br /&gt;
Those accessors are ('''u''')ser, ('''g''')roup, and ('''o''')ther.&lt;br /&gt;
; user&lt;br /&gt;
: The user would typically be the user account that created the file or folder&lt;br /&gt;
; group&lt;br /&gt;
: The group would be that account's primary group by default, or can be changed by the user to any group that they are a member of&lt;br /&gt;
; other&lt;br /&gt;
: Other is special: it is anything that doesn't meet either of the other two criteria. We typically refer to these as world permissions, as they match ''everyone'' else.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, it is that &amp;quot;Other&amp;quot; permission that is a frequent problem. You may want to share some data with a colleague, but, from a security standpoint, you also may need to make sure that only that colleague has access to the data. If you aren't in the same group as the colleague, then, under standard Linux permissions, you have no other option except making the file &amp;quot;world&amp;quot; accessible.&lt;br /&gt;
&lt;br /&gt;
This is where &amp;lt;abbr title=&amp;quot;Access Control Lists&amp;quot;&amp;gt;ACLs&amp;lt;/abbr&amp;gt; come into play. ACLs are like the standard Linux permissions, except you can apply many of them, and you can allow individual users and groups to access alongside your own.&lt;br /&gt;
&lt;br /&gt;
ACLs can also do things that standard Linux permissions can't, like setting up &amp;quot;default&amp;quot; permissions for newly created files/folders within a directory.&lt;br /&gt;
&lt;br /&gt;
One big thing to be aware of for any permissions scheme is that permissions are checked at every level in a directory hierarchy.&lt;br /&gt;
&lt;br /&gt;
# /&lt;br /&gt;
# /homes&lt;br /&gt;
# /homes/$USER&lt;br /&gt;
# /homes/$USER/$SHARE&lt;br /&gt;
&lt;br /&gt;
If at any point the accessing user is denied permission, the traversal and access attempt will stop.&lt;br /&gt;
&lt;br /&gt;
==== Example 1 ====&lt;br /&gt;
Let's say I have a file in a directory that I want the user billy to be able to read. This file is &amp;lt;tt&amp;gt;/homes/mozes/example/input.file&amp;lt;/tt&amp;gt;. We'll look at the current permissions of the directory tree like so:&lt;br /&gt;
&lt;br /&gt;
We'll assume everyone has requisite permissions for &amp;lt;tt&amp;gt;/&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;/homes&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
First we'll check my home directory&lt;br /&gt;
 $ getfacl -e /homes/mozes&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x                      #effective:r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 mask::r-x&lt;br /&gt;
 other::---&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
 default:other::---&lt;br /&gt;
&lt;br /&gt;
If we make it past that permissions check, we'd go one level deeper.&lt;br /&gt;
 $ getfacl -e /homes/mozes/example&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 other::r-x&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
&lt;br /&gt;
Finally we'd check if we had permission to access the file itself:&lt;br /&gt;
 $ getfacl -e /homes/mozes/example/input.file&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rw-&lt;br /&gt;
 group::r--&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 other::r--&lt;br /&gt;
&lt;br /&gt;
There is quite a lot of information contained in the above output, so let's look at and attempt to understand the contents.&lt;br /&gt;
&lt;br /&gt;
First, in each section, we see the POSIX owner and group as comments prefixed by '#' characters. These are what the respective user:: and group:: lines refer to when viewing the permissions.&lt;br /&gt;
&lt;br /&gt;
Second, we have lines related to the permissions of each accessor. These do what they say: show the permissions that an accessor would be granted. Please note there is a catch here: the most specific permission wins. This can come into play when granting a certain group access and then a specific member of that group a differing level of access.&lt;br /&gt;
&lt;br /&gt;
Third, many entries have lines prefixed with default: and then a permissions set. Default permissions are interesting: they are only set on directories, and they define the starting set of ACLs applied when new files or folders are created within that directory.&lt;br /&gt;
&lt;br /&gt;
Finally, there is a mask; I'm not covering it because there are very few cases in which people would need to use it.&lt;br /&gt;
&lt;br /&gt;
Back to the task at hand: we want billy to be able to read &amp;lt;tt&amp;gt;/homes/mozes/example/input.file&amp;lt;/tt&amp;gt;. Checking &amp;lt;tt&amp;gt;/homes/mozes&amp;lt;/tt&amp;gt;, we see that 'other' has no permissions, and billy has not been granted any special access.&lt;br /&gt;
&lt;br /&gt;
So we grant billy access &amp;quot;through&amp;quot; &amp;lt;tt&amp;gt;/homes/mozes&amp;lt;/tt&amp;gt;; granting the smallest set of permissions would be this:&lt;br /&gt;
 $ setfacl -m u:billy:x /homes/mozes&lt;br /&gt;
&lt;br /&gt;
Note, since I didn't give billy read access to my home directory, they wouldn't be able to &amp;lt;tt&amp;gt;ls /homes/mozes&amp;lt;/tt&amp;gt;; they can simply cd into it and through it.&lt;br /&gt;
&lt;br /&gt;
Then we check the rest of the permissions, &amp;lt;tt&amp;gt;/homes/mozes/example&amp;lt;/tt&amp;gt; has an 'other' permission granting (r)ead and e(x)ecute, so that shouldn't be an issue. &amp;lt;tt&amp;gt;/homes/mozes/example/input.file&amp;lt;/tt&amp;gt; allows 'other' to read it, so our job is done. Billy has access to read my file.&lt;br /&gt;
&lt;br /&gt;
If we decide later that billy needs to write to my file, we can grant them specific read/write permissions to just that file with:&lt;br /&gt;
 $ setfacl -m u:billy:rw /homes/mozes/example/input.file&lt;br /&gt;
&lt;br /&gt;
==== Example 2 ====&lt;br /&gt;
That's all well and good, but let's say we want all of my grad students to have read/write access to my example directory.&lt;br /&gt;
&lt;br /&gt;
Looking at the acls that have been set so far:&lt;br /&gt;
 $ getfacl -e /homes/mozes&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 user:billy:--x&lt;br /&gt;
 group::r-x                      #effective:r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 mask::r-x&lt;br /&gt;
 other::---&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
 default:other::---&lt;br /&gt;
&lt;br /&gt;
 $ getfacl -e /homes/mozes/example&lt;br /&gt;
 # owner: mozes&lt;br /&gt;
 # group: mozes_users&lt;br /&gt;
 user::rwx&lt;br /&gt;
 group::r-x&lt;br /&gt;
 group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 other::r-x&lt;br /&gt;
 default:user::rwx&lt;br /&gt;
 default:group::r-x              #effective:r-x&lt;br /&gt;
 default:group:beocat_support:r-x        #effective:r-x&lt;br /&gt;
 default:mask::r-x&lt;br /&gt;
&lt;br /&gt;
We now want to grant my group of grad students the correct permissions to &amp;lt;tt&amp;gt;/homes/mozes/example&amp;lt;/tt&amp;gt;&lt;br /&gt;
 $ setfacl -R -m g:my_grad_students:rw -m d:g:my_grad_students:rw -m d:u:mozes:rw /homes/mozes/example&lt;br /&gt;
&lt;br /&gt;
There are a few things to note there:&lt;br /&gt;
* We're setting multiple acls at once (note the multiple -m arguments)&lt;br /&gt;
* We're setting those permissions recursively (on all files/folders nested anywhere in that directory hierarchy).&lt;br /&gt;
* We're setting some default permissions. Default permissions are prefixed with d:. Here we're saying that the (g)roup my_grad_students should be granted read/write permissions. We also set a default permission for ourselves: d:u:mozes:rw grants me read/write access to those files as if I were the owner. This is nice in the event that you're not a member of the my_grad_students group; it makes sure that you still retain a reasonable baseline of access.&lt;br /&gt;
&lt;br /&gt;
That all looks good, right? Except my grad students are complaining that they can't access &amp;lt;tt&amp;gt;/homes/mozes/example&amp;lt;/tt&amp;gt;. What did we forget?&lt;br /&gt;
&lt;br /&gt;
Permissions are checked at every level of the directory hierarchy, and we forgot to grant my_grad_students access through my home directory. Execute (x) permission on a directory is what allows users to traverse into it (even without being able to list its contents), so that is the one permission we need to add:&lt;br /&gt;
 $ setfacl -m g:my_grad_students:x /homes/mozes&lt;br /&gt;
&lt;br /&gt;
=== Manual (man) pages ===&lt;br /&gt;
Most commands have a complex set of switches that modify the amount or type of information they display. To find out what switches are available, or how a program expects data, you can use the manual pages by typing &amp;quot;`man` ''command''&amp;quot;. Using one of the most common Linux commands, take a look at the output of 'man ls'. It shows that 'ls' has over 50 switches available, ranging from which files to include, to how to display file sizes, to sort order and more. (I'm not pasting it here, because it's over 200 lines long!) To navigate a 'manpage', use the up-arrow and down-arrow keys. Press 'q' to quit.&lt;br /&gt;
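&lt;br /&gt;
As a quick taste of what those switches do, here is a sketch combining three of them: '-S' sorts by file size (largest first), '-l' gives the long listing, and '-h' prints human-readable sizes. The directory and file names below are made up for illustration:&lt;br /&gt;

```shell
# Create a scratch directory with two files of different sizes
# (the directory and file names are invented for this example).
mkdir -p /tmp/lsdemo
head -c 4096 /dev/zero > /tmp/lsdemo/big.dat   # 4 KiB file
echo "hi" > /tmp/lsdemo/small.txt              # 3-byte file

# -S sorts largest-first; -l and -h make the sizes readable
ls -lhS /tmp/lsdemo
```

Because of '-S', big.dat is listed before small.txt.&lt;br /&gt;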
&lt;br /&gt;
=== Pipes and Redirects ===&lt;br /&gt;
Typically a Linux program takes data from the keyboard and outputs data to the screen. In Unix and Linux terminology, the keyboard is the default 'stdin' (pronounced &amp;quot;standard in&amp;quot;) and the screen is the default 'stdout' (pronounced &amp;quot;standard out&amp;quot;). Many times, we want to take data from somewhere else (like a file, or the output of another program) and send it to yet another location. These redirectors are:&lt;br /&gt;
{|&lt;br /&gt;
|cmd &amp;gt; filename&lt;br /&gt;
|Redirect output from cmd to filename&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;gt;&amp;gt; filename&lt;br /&gt;
|Redirect output from cmd and append to filename&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;lt; filename&lt;br /&gt;
|Redirect input to cmd from filename&lt;br /&gt;
|-&lt;br /&gt;
| cmd1 &amp;amp;#124; cmd2&lt;br /&gt;
| Use the output from cmd1 as the input to cmd2&lt;br /&gt;
|}&lt;br /&gt;
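One subtlety worth demonstrating: '&amp;gt;' silently truncates an existing file, while '&amp;gt;&amp;gt;' appends to it. A minimal sketch (the scratch file name is made up):&lt;br /&gt;

```shell
# '>' truncates (or creates) the file; '>>' appends to it.
# /tmp/redir_demo.txt is an invented scratch file.
echo "first"  >  /tmp/redir_demo.txt
echo "second" >> /tmp/redir_demo.txt
cat /tmp/redir_demo.txt   # prints "first" then "second"
echo "third"  >  /tmp/redir_demo.txt
cat /tmp/redir_demo.txt   # prints only "third": '>' wiped the file
```

&lt;br /&gt;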
Here is a quick example. Let's say I have thousands of files in a directory, and I want a list of those that end in '.sh'.&lt;br /&gt;
'ls' by itself scrolls past so much output that I can't see even a fraction of them, so I redirect the output to a file:&lt;br /&gt;
 ls &amp;gt; ~/filelist.txt&lt;br /&gt;
That gives me all the files in the current folder and saves them in my home directory in 'filelist.txt'.&lt;br /&gt;
A quick look through the file in my favorite editor tells me this is still going to take too long, so I need another step. The 'grep' program is a commonly used tool for pattern matching. The syntax of 'grep' is beyond the scope of this document, but take my word for it that&lt;br /&gt;
 grep '\.sh$'&lt;br /&gt;
will return all lines that end in .sh.&lt;br /&gt;
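&lt;br /&gt;
To see that pattern in action before wiring it into a pipeline, here is a sketch feeding grep three invented file names on its standard input:&lt;br /&gt;

```shell
# grep '\.sh$' keeps only the lines ending in ".sh".
# The three file names are made up for illustration.
printf 'run.sh\nnotes.txt\nbuild.sh\n' | grep '\.sh$'
# prints:
# run.sh
# build.sh
```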
&lt;br /&gt;
We can now redirect input to grep from the file we just created:&lt;br /&gt;
 grep '\.sh$' &amp;lt; ~/filelist.txt&lt;br /&gt;
Great! We now have our list. However, we wanted it saved in filelist.txt, and instead it was just printed to the screen, leaving us to copy-and-paste. Instead of redirecting to a file as the first step, we'll use the vertical bar '|' (which we often term a &amp;quot;pipe&amp;quot;) to send the output of one command directly to another.&lt;br /&gt;
 ls | grep '\.sh$' &amp;gt; ~/filelist.txt&lt;br /&gt;
This time the output of 'ls' is ''not'' redirected to a file, but is piped to the next command (grep). The output of grep (which is just our .sh files), instead of being sent to the screen, is redirected to the file ~/filelist.txt.&lt;br /&gt;
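&lt;br /&gt;
Pipes can chain as many commands as you like. If all we wanted was a ''count'' of the .sh files, we could pipe the output of grep into 'wc -l' (which counts lines) instead of saving a list at all. A sketch using an invented scratch directory:&lt;br /&gt;

```shell
# Set up a scratch directory (the names are made up for illustration)
mkdir -p /tmp/pipedemo
touch /tmp/pipedemo/run.sh /tmp/pipedemo/build.sh /tmp/pipedemo/notes.txt

# ls feeds grep, grep keeps the .sh lines, wc -l counts them
ls /tmp/pipedemo | grep '\.sh$' | wc -l
# prints: 2
```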
&lt;br /&gt;
This example is a very simple demonstration of how pipes and redirects work. Many more examples with complex structures can be found at http://www.ibm.com/developerworks/linux/library/l-lpic1-v3-103-4/index.html&lt;/div&gt;</summary>
		<author><name>Mozes</name></author>
	</entry>
</feed>