<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://support.beocat.ksu.edu/BeocatDocs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Nathanrwells</id>
	<title>Beocat - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://support.beocat.ksu.edu/BeocatDocs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Nathanrwells"/>
	<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/Docs/Special:Contributions/Nathanrwells"/>
	<updated>2026-04-04T11:35:41Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.8</generator>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Nautilus&amp;diff=1158</id>
		<title>Nautilus</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Nautilus&amp;diff=1158"/>
		<updated>2025-09-23T16:32:28Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= NOTICE =&lt;br /&gt;
&lt;br /&gt;
The Nautilus folks have made changes to how we access the Nautilus portal. While we work to make our system compatible, access to Nautilus through Fiona will be offline.&lt;br /&gt;
&lt;br /&gt;
== Nautilus ==&lt;br /&gt;
To access the Nautilus namespace, login using K-State SSO at https://portal.nrp-nautilus.io/ . Once you have done so, contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] and request to be added to the Beocat Nautilus namespace (ksu-nrp-cluster). Once you have received notification that you have been added to the namespace, you can continue with the following steps to get set up to use the cluster resources. &lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;SSH into headnode.beocat.ksu.edu&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;SSH into fiona (fiona hosts the kubectl tool we will use for this)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Once on fiona, run the command ‘cd ~’ to make sure you are at the top level of your home directory.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;From there you will need to create a .kube directory inside of your home directory. Use&lt;br /&gt;
the command ‘mkdir ~/.kube’&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login to https://portal.nrp-nautilus.io/ using the same login previously used to create your&lt;br /&gt;
account (this will be your K-State EID login)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;From here it is MANDATORY to read the cluster policy documentation provided by the&lt;br /&gt;
National Research Platform for the Nautilus program. You can find it here:&lt;br /&gt;
https://docs.nationalresearchplatform.org/userdocs/start/policies/ &amp;lt;/li&amp;gt;&lt;br /&gt;
a. This is to ensure we do not break any of the rules put in place by the NRP.&lt;br /&gt;
&amp;lt;br&amp;gt;b. Return to https://portal.nrp-nautilus.io/ and accept the Acceptable Use Policy (AUP)&lt;br /&gt;
&amp;lt;li&amp;gt;Next, return to the website specified in step 5 and press the “Get Config” option in&lt;br /&gt;
the top right corner of the page.&amp;lt;/li&amp;gt;&lt;br /&gt;
a. This will download a file called ‘config’&lt;br /&gt;
&amp;lt;li&amp;gt;You will need to move the file to your ~/.kube directory created in step 4.&amp;lt;/li&amp;gt;&lt;br /&gt;
a. To do this you can copy and paste the contents through the command line&lt;br /&gt;
&amp;lt;br&amp;gt;b. You can also utilize the OpenOnDemand tool to upload the file through the web&lt;br /&gt;
interface. Information for this tool can be found here:&lt;br /&gt;
https://support.beocat.ksu.edu/Docs/OpenOnDemand&lt;br /&gt;
&amp;lt;br&amp;gt;c. You can also use other means of moving the contents to the Beocat&lt;br /&gt;
headnodes/your home directory, but these are just a few examples.&lt;br /&gt;
&amp;lt;br&amp;gt;d. NOTE: Because the directory name starts with a period, it is a hidden directory&lt;br /&gt;
and will not appear in the output of a plain ‘ls’; to see it, you will&lt;br /&gt;
need to run “ls -a” or “ls -la”.&lt;br /&gt;
&amp;lt;li&amp;gt;Once you have read the required documentation, created the .kube directory in your&lt;br /&gt;
home directory, and placed the config file in the '~/.kube' directory, you are now ready to continue!&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Below is an example pod definition. It requests very few resources, so you will likely need to adjust it. Be sure to change the “name:” field&lt;br /&gt;
underneath “metadata:”: replace the text “test-pod” with “{eid}-pod”, where ‘{eid}’ is your&lt;br /&gt;
K-State eID. It will look something like “dan-pod”.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
  name: test-pod&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: mypod&lt;br /&gt;
    image: ubuntu&lt;br /&gt;
    resources:&lt;br /&gt;
      limits:&lt;br /&gt;
        memory: 400Mi&lt;br /&gt;
        cpu: 100m&lt;br /&gt;
      requests:&lt;br /&gt;
        memory: 100Mi&lt;br /&gt;
        cpu: 100m&lt;br /&gt;
    command: [&amp;quot;sh&amp;quot;, &amp;quot;-c&amp;quot;, &amp;quot;echo 'Im a new pod'&amp;quot;]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Place your .yaml file in the same directory created earlier (~/.kube).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;If you are not already in the .kube directory enter the command “cd ~/.kube” to change&lt;br /&gt;
your current directory.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Now we are going to create our ‘pod’. This will request an Ubuntu container using the&lt;br /&gt;
specifications from above.&amp;lt;/li&amp;gt;&lt;br /&gt;
a. To do this, enter the command “kubectl create -f pod1.yaml”. NOTE: You must be&lt;br /&gt;
in the same directory as the pod1.yaml file (in this example, the pod definition above was saved to a file named pod1.yaml).&lt;br /&gt;
&amp;lt;br&amp;gt;b. If the command is successful you will see an output of “pod/{eid}-pod created”.&lt;br /&gt;
&amp;lt;li&amp;gt;You will need to wait until the container for the pod is finished creating. You can check&lt;br /&gt;
this by running “kubectl get pods”&amp;lt;/li&amp;gt;&lt;br /&gt;
a. Once you run this command, it will output all the pods currently running or being&lt;br /&gt;
created in the namespace. Look for yours in the list; the name will&lt;br /&gt;
be the one you specified in step 10.&lt;br /&gt;
&amp;lt;br&amp;gt;b. Once you locate your pod, check its STATUS. If the pod says Running, then you&lt;br /&gt;
are good to proceed. If it says ContainerCreating, then you will need to wait just a&lt;br /&gt;
bit. It should not take long.&lt;br /&gt;
&amp;lt;li&amp;gt;You can now enter the pod by running “kubectl exec -it {eid}-pod --&lt;br /&gt;
/bin/bash”, where ‘{eid}-pod’ is the name of the pod created in step 13 (the name specified in step 10).&amp;lt;/li&amp;gt;&lt;br /&gt;
a. Executing this command will open the pod you created and run a bash console&lt;br /&gt;
on the pod.&lt;br /&gt;
&amp;lt;br&amp;gt;b. NOTE: If you have trouble logging into the pod and are met with a “You must be&lt;br /&gt;
logged in to the server” error, you can run “kubectl proxy”, and after a moment&lt;br /&gt;
cancel the command with “ctrl+c”. This should remedy the error.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
&lt;br /&gt;
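The directory setup in steps 3, 4, and 8 can be sketched as a few shell commands. This is only a sketch: the location of the downloaded ‘config’ file (the current directory here) is an assumption, so adjust the path to wherever you actually uploaded the file.&lt;br /&gt;

```shell
# Sketch of steps 3, 4, and 8: create the hidden ~/.kube directory
# and move the downloaded kubeconfig into place.
cd ~                  # step 3: go to the top of your home directory
mkdir -p ~/.kube      # step 4: -p makes this safe to re-run

# Step 8: the source path './config' is an assumption -- use the
# path you actually uploaded the file to.
if [ -f ./config ]; then
    mv ./config ~/.kube/config
fi

ls -la ~/.kube        # the leading dot hides the directory from a plain 'ls'
```
&lt;br /&gt;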
Additional documentation for Kubernetes can be found on the Kubernetes website https://kubernetes.io/docs/home&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1147</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1147"/>
		<updated>2025-08-20T15:46:03Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Online Documentations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and his or her collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat comprises several different cluster-computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of RHEL Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
* We provide a short description of Beocat for the uses of a proposal or teaching here: [[ProposalDescription|Beocat Info]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different than running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering the fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
=== Online Documentation ===&lt;br /&gt;
&lt;br /&gt;
* Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/]&lt;br /&gt;
* Read about  [[Installed software]] and languages&lt;br /&gt;
* Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] and download the [[Media:Slurm-quick-reference.pdf|Slurm Quick Reference PDF]]&lt;br /&gt;
* Run interactive jobs with [[OpenOnDemand]]&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]]&lt;br /&gt;
* Big Data course on Beocat! [[BigDataOnBeocat]]&lt;br /&gt;
* Interested in web-based computational biology research? Check out [[GalaxyDocs|Galaxy!]]&lt;br /&gt;
* Looking to utilize the NRP (Nautilus cluster) namespace? Check out [[Nautilus|Nautilus on Beocat]]&lt;br /&gt;
* Need Bulk data storage on Beocat? We provide a no-backup large data storage solution, more information can be found here: [https://support.beocat.ksu.edu/Docs/AdvancedSlurm#Bulk_directory Beocat Bulk Directories]&lt;br /&gt;
&lt;br /&gt;
=== Training Videos and Slides ===&lt;br /&gt;
&lt;br /&gt;
* [https://www.youtube.com/watch?v=7NOB_HGQE0U Beocat Introduction] and [[Media:Beocat-Beoshock-Intro.pdf|slides]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=b_yawpwFRdk Linux and Bash Introduction] and [[Media:Linux-Intro-cheatsheet.pdf|Linux Quick Reference PDF]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=vcC-DURbH6c Advanced HPC Usage] and [[Media:HPC-Advanced-Usage.pdf|slides]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=inJbYdZacjs HPC Parallel Computing] and [[Media:HPC-Parallel-Computing.pdf|slides]]&lt;br /&gt;
&lt;br /&gt;
== Transferring data to Beocat ==&lt;br /&gt;
Transferring data to Beocat can be done in a variety of ways; we offer documentation on a few of them:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;b&amp;gt;With the recent changes to how K-State handles DUO authentication, we recommend you use Globus to transfer files in and out of Beocat.&amp;lt;/b&amp;gt;&lt;br /&gt;
* [[Globus]] - Instructions on transferring files using [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''.&lt;br /&gt;
* [[LinuxBasics]] - Under the 'Transferring files (SCP or SFTP)' section, we have information regarding SCP and SFTP implementation.&lt;br /&gt;
* [[OpenOnDemand]] - We offer GUI-based file management through OpenOnDemand&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] - We also offer the ability to transfer data to and from OneDrive&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which uses the information in that script to allocate the resources your job needs and then starts your code.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
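As a sketch of what such a job script looks like (the resource values here are illustrative assumptions, not recommendations; see [[SlurmBasics]] for details):&lt;br /&gt;

```shell
# Write a minimal job script; on Beocat you would submit it with
# 'sbatch myjob.sh'. Resource values below are illustrative only.
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello        # name shown in the queue
#SBATCH --time=0:05:00          # wall-clock limit (H:MM:SS)
#SBATCH --mem=1G                # memory for the whole job
#SBATCH --cpus-per-task=1       # a single core
echo "Hello from $(hostname)"
EOF

# '#SBATCH' lines are ordinary shell comments, so the script also
# runs as plain bash for a quick local check:
bash myjob.sh
```
&lt;br /&gt;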
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] or you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address or through TDX and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any details pertinent to your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session as mentioned in our calendar down below. Alternatively, we can often schedule a time to meet with you individually. You just need to send us an e-mail and provide us with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] or email us at beocat@cs.ksu.edu, please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributions to Beocat will prioritize your access to Beocat. In general, users contribute nodes to Beocat (aka the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority for the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will automatically halt and reschedule those jobs immediately to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources allow users to run jobs when they cannot get enough&lt;br /&gt;
access on Beocat, but they are especially useful when we don't have the needed&lt;br /&gt;
resources on Beocat, such as the 4 TB nodes on Bridges2, more GPUs suited&lt;br /&gt;
to 64-bit (double-precision) codes, or Matlab licenses.  Click [[ACCESS|here]] to see what resources &lt;br /&gt;
we have access to and for directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems that runs outside OSG jobs when our users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=1146</id>
		<title>AdvancedSlurm</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=1146"/>
		<updated>2025-08-20T15:43:29Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Bulk directory */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SlurmBasics]] page, we have a couple of other requestable resources:&lt;br /&gt;
 Valid gres options are:&lt;br /&gt;
 gpu[[:type]:count]&lt;br /&gt;
 fabric[[:type]:count]&lt;br /&gt;
Generally, if you don't know whether you need a particular resource, use the default. A list of valid gres options can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;srun --gres=help&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Fabric ===&lt;br /&gt;
We currently offer 3 &amp;quot;fabrics&amp;quot; as request-able resources in Slurm. The &amp;quot;count&amp;quot; specified is the line-rate (in Gigabits-per-second) of the connection on the node.&lt;br /&gt;
==== Infiniband ====&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. InfiniBand does absolutely no good if running on a single machine. InfiniBand is a high-speed host-to-host communication fabric. It is (most-often) used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could run just fine, except that the submitter requested InfiniBand, and all the nodes with InfiniBand were currently busy. In fact, some of our fastest nodes do not have InfiniBand, so by requesting it when you don't need it, you are actually slowing down your job. To request Infiniband, add &amp;lt;tt&amp;gt;--gres=fabric:ib:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
==== ROCE ====&lt;br /&gt;
ROCE, like InfiniBand is a high-speed host-to-host communication layer. Again, used most often with MPI. Most of our nodes are ROCE enabled, but this will let you guarantee the nodes allocated to your job will be able to communicate with ROCE. To request ROCE, add &amp;lt;tt&amp;gt;--gres=fabric:roce:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
&lt;br /&gt;
==== Ethernet ====&lt;br /&gt;
Ethernet is another communication fabric. All of our nodes are connected by ethernet; this option simply allows you to specify the interconnect speed. Speeds are selected in units of Gbps, with all nodes supporting 1 Gbps or above. The currently available speeds for ethernet are: &amp;lt;tt&amp;gt;1, 10, 40, and 100&amp;lt;/tt&amp;gt;. To select nodes with 40 Gbps and above, you could specify &amp;lt;tt&amp;gt;--gres=fabric:eth:40&amp;lt;/tt&amp;gt; on your sbatch command-line.  Since ethernet is used to connect to the file server, this can be used to select nodes that have fast access for applications doing heavy IO.  The Dwarves and Heroes have 40 Gbps ethernet and we measure single-stream performance as high as 20 Gbps, but if your application&lt;br /&gt;
does heavy IO you'd want to avoid the Moles, which are connected to the file server with only 1 Gbps ethernet.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. 'kstat -g' will show you the GPU nodes and the jobs running on them.  To request a GPU, add, for example, &amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt; to request 1 GPU for your job; if your job uses multiple nodes, the number of GPUs requested is per-node.  You can also request a given type of GPU (run &amp;lt;tt&amp;gt;kstat -g -l&amp;lt;/tt&amp;gt; to show the types): use &amp;lt;tt&amp;gt;--gres=gpu:geforce_gtx_1080_ti:1&amp;lt;/tt&amp;gt; for a 1080Ti GPU on the Wizards or Dwarves, or &amp;lt;tt&amp;gt;--gres=gpu:quadro_gp100:1&amp;lt;/tt&amp;gt; for the P100 GPUs on Wizard20-21, which are best for 64-bit codes like Vasp.  Most of these GPU nodes are owned by various groups.  If you want access to GPU nodes and your group does not own any, we can add you to the &amp;lt;tt&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/tt&amp;gt; group that has priority on Dwarf36-39.  For more information on compiling CUDA code click on this [[CUDA]] link.&lt;br /&gt;
&lt;br /&gt;
A listing of the current types of gpus can be gathered with this command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
scontrol show nodes | grep CfgTRES | tr ',' '\n' | awk -F '[:=]' '/gres\/gpu:/ { print $2 }' | sort -u&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
At the time of this writing, that command produces this list:&lt;br /&gt;
* geforce_gtx_1080_ti&lt;br /&gt;
* geforce_rtx_2080_ti&lt;br /&gt;
* geforce_rtx_3090&lt;br /&gt;
* l40s&lt;br /&gt;
* quadro_gp100&lt;br /&gt;
* rtx_a4000&lt;br /&gt;
* rtx_a6000&lt;br /&gt;
&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
''Intra''node jobs run on many cores in the same node. These jobs can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or any programming language that has the concept of ''threads''. Often, your program will need to know how many cores you want it to use, and many will use all available cores if not told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the sbatch directives '&amp;lt;tt&amp;gt;--nodes=1 --cpus-per-task=n&amp;lt;/tt&amp;gt;' or '&amp;lt;tt&amp;gt;--nodes=1 --ntasks-per-node=n&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $SLURM_CPUS_ON_NODE to tell how many cores you've been allocated.&lt;br /&gt;
&lt;br /&gt;
=== Internode (MPI) jobs ===&lt;br /&gt;
''Inter''node jobs can utilize many cores on one or more nodes. Communicating between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI, but MPI also allows an application to run on multiple cores within a node. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;--cpus-per-task=''n''&amp;lt;/tt&amp;gt;', you would use '&amp;lt;tt&amp;gt;--nodes=''n'' --tasks-per-node=''m''&amp;lt;/tt&amp;gt;' ''or'' '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks=''o''&amp;lt;/tt&amp;gt;' for your sbatch request, where ''n'' is the number of nodes you want, ''m'' is the number of cores per node you need, and ''o'' is the total number of cores you need.&lt;br /&gt;
&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=6 --ntasks-per-node=4&amp;lt;/tt&amp;gt; will give you 4 cores on each of 6 nodes for a total of 24 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=40&amp;lt;/tt&amp;gt; will give you 40 cores spread across any number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=10 --ntasks=100&amp;lt;/tt&amp;gt; will give you a total of 100 cores across 10 nodes.&lt;br /&gt;
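Putting the first example above into a job script might look like the following sketch (the module name and program name are placeholders, not actual Beocat module names):&lt;br /&gt;

```shell
# Sketch of an MPI job script matching the first example above:
# 4 tasks on each of 6 nodes, 24 cores total. 'OpenMPI' and
# './my_program' are placeholders -- substitute your own.
cat > mpi_job.sh <<'EOF'
#!/bin/bash
#SBATCH --nodes=6
#SBATCH --ntasks-per-node=4
#SBATCH --time=1:00:00
#SBATCH --mem-per-cpu=2G
module load OpenMPI       # load the MPI stack before running
mpirun ./my_program       # Slurm passes the task layout to mpirun
EOF
```
&lt;br /&gt;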
&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
Memory requests are easiest when they are specified '''per core'''. For instance, if you specified the following: '&amp;lt;tt&amp;gt;--ntasks=20 --mem-per-cpu=20G&amp;lt;/tt&amp;gt;', your job would have access to 400GB of memory in total.&lt;br /&gt;
== Other Handy Slurm Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs, aside from resource requests, is to have Slurm email you when a job changes its status. This may need two directives to sbatch:  &amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt;.&lt;br /&gt;
==== --mail-type ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; is used to tell Slurm to notify you about certain conditions. Options are comma-separated and include the following:&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Option!!Explanation&lt;br /&gt;
|-&lt;br /&gt;
| NONE || This disables event-based mail&lt;br /&gt;
|-&lt;br /&gt;
| BEGIN || Sends a notification when the job begins&lt;br /&gt;
|-&lt;br /&gt;
| END || Sends a notification when the job ends&lt;br /&gt;
|-&lt;br /&gt;
| FAIL || Sends a notification when the job fails.&lt;br /&gt;
|-&lt;br /&gt;
| REQUEUE || Sends a notification if the job is put back into the queue from a running state&lt;br /&gt;
|-&lt;br /&gt;
| STAGE_OUT || Burst buffer stage out and teardown completed&lt;br /&gt;
|-&lt;br /&gt;
| ALL || Equivalent to BEGIN,END,FAIL,REQUEUE,STAGE_OUT&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT || Notifies if the job ran out of time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_90 || Notifies when the job has used 90% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_80 || Notifies when the job has used 80% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_50 || Notifies when the job has used 50% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| ARRAY_TASKS || Modifies the BEGIN, END, and FAIL options to apply to each array task (instead of notifying for the entire job)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== --mail-user ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; is optional. It is only needed if you intend to send these job status updates to a different e-mail address than the one you provided on the [https://account.beocat.ksu.edu/user Account Request Page]. It is specified with the following argument to sbatch: &amp;lt;tt&amp;gt;--mail-user=someone@somecompany.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
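For example, a job script that emails an alternate address when the job ends or fails might carry these directives (the address and program name are placeholders):&lt;br /&gt;

```shell
# Illustrative sketch: notify on END and FAIL, sent to an alternate
# address (the address below is a placeholder).
cat > mail_job.sh <<'EOF'
#!/bin/bash
#SBATCH --mail-type=END,FAIL
#SBATCH --mail-user=someone@somecompany.com
./my_program                 # placeholder for your actual workload
EOF
```
&lt;br /&gt;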
&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-J ''JobName''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
&lt;br /&gt;
=== Separating Output Streams ===&lt;br /&gt;
Normally, Slurm will create one output file, containing both STDERR and STDOUT. If you want both of these to be separated into two files, you can use the sbatch directives '&amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt;' and '&amp;lt;tt&amp;gt;--error&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! option !! default !! example&lt;br /&gt;
|-&lt;br /&gt;
| --output || slurm-%j.out || slurm-206.out&lt;br /&gt;
|-&lt;br /&gt;
| --error || slurm-%j.out || slurm-206.out&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;tt&amp;gt;%j&amp;lt;/tt&amp;gt; above indicates that it should be replaced with the job id.&lt;br /&gt;
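A sketch of a job script that splits the two streams (the output file names and program name are illustrative):&lt;br /&gt;

```shell
# Illustrative sketch: STDOUT and STDERR go to separate files;
# %j is expanded to the job id when the job runs.
cat > split_job.sh <<'EOF'
#!/bin/bash
#SBATCH --output=myjob-%j.out    # STDOUT
#SBATCH --error=myjob-%j.err     # STDERR
./my_program                     # placeholder for your actual workload
EOF
```
&lt;br /&gt;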
&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, jobs run from the directory you submitted them from. If you need your job to run in a different &amp;quot;current working directory&amp;quot;, you can set one with the '&amp;lt;tt&amp;gt;--chdir=''directory''&amp;lt;/tt&amp;gt;' directive.&lt;br /&gt;
=== Running in a specific class of machine ===&lt;br /&gt;
If you want to run on a specific class of machines, e.g., the Dwarves, you can add the flag &amp;quot;--constraint=dwarves&amp;quot; to select any of those machines.&lt;br /&gt;
&lt;br /&gt;
=== Processor Constraints ===&lt;br /&gt;
Because Beocat is a heterogeneous cluster (it contains machines purchased over many years), not all of our processors support every new processor feature. Some applications require newer processor features, so we provide a mechanism to request them.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--constraint&amp;lt;/tt&amp;gt; tells the cluster to apply constraints to the types of nodes that the job can run on. For instance, we know of several applications that must be run on chips that have &amp;quot;AVX&amp;quot; processor extensions. To do that, you would specify &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; on your ''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt;'' '''or''' ''&amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;'' command line.&lt;br /&gt;
Using &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; will prohibit your job from running on the Mages, while &amp;lt;tt&amp;gt;--constraint=avx2&amp;lt;/tt&amp;gt; will eliminate the Elves as well as the Mages.&lt;br /&gt;
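For instance, a job that needs AVX2 could carry the constraint in its submit script (a minimal sketch):&lt;br /&gt;

```shell
#!/bin/bash
## Restrict scheduling to nodes whose processors support AVX2
#SBATCH --constraint=avx2
```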
&lt;br /&gt;
=== Slurm Environment Variables ===&lt;br /&gt;
Within an actual job, you sometimes need to know specific things about the running environment to set up your scripts correctly. Here is a listing of environment variables that Slurm makes available to you. Of course, the values of these variables will differ based on many factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
CUDA_VISIBLE_DEVICES=NoDevFiles&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
GPU_DEVICE_ORDINAL=NoDevFiles&lt;br /&gt;
HOSTNAME=dwarf37&lt;br /&gt;
SLURM_CHECKPOINT_IMAGE_DIR=/var/slurm/checkpoint&lt;br /&gt;
SLURM_CLUSTER_NAME=beocat&lt;br /&gt;
SLURM_CPUS_ON_NODE=1&lt;br /&gt;
SLURM_DISTRIBUTION=cyclic&lt;br /&gt;
SLURMD_NODENAME=dwarf37&lt;br /&gt;
SLURM_GTIDS=0&lt;br /&gt;
SLURM_JOB_CPUS_PER_NODE=1&lt;br /&gt;
SLURM_JOB_GID=163587&lt;br /&gt;
SLURM_JOB_ID=202&lt;br /&gt;
SLURM_JOBID=202&lt;br /&gt;
SLURM_JOB_NAME=slurm_simple.sh&lt;br /&gt;
SLURM_JOB_NODELIST=dwarf37&lt;br /&gt;
SLURM_JOB_NUM_NODES=1&lt;br /&gt;
SLURM_JOB_PARTITION=batch.q,killable.q&lt;br /&gt;
SLURM_JOB_QOS=normal&lt;br /&gt;
SLURM_JOB_UID=163587&lt;br /&gt;
SLURM_JOB_USER=mozes&lt;br /&gt;
SLURM_LAUNCH_NODE_IPADDR=10.5.16.37&lt;br /&gt;
SLURM_LOCALID=0&lt;br /&gt;
SLURM_MEM_PER_NODE=1024&lt;br /&gt;
SLURM_NNODES=1&lt;br /&gt;
SLURM_NODEID=0&lt;br /&gt;
SLURM_NODELIST=dwarf37&lt;br /&gt;
SLURM_NPROCS=1&lt;br /&gt;
SLURM_NTASKS=1&lt;br /&gt;
SLURM_PRIO_PROCESS=0&lt;br /&gt;
SLURM_PROCID=0&lt;br /&gt;
SLURM_SRUN_COMM_HOST=10.5.16.37&lt;br /&gt;
SLURM_SRUN_COMM_PORT=37975&lt;br /&gt;
SLURM_STEP_ID=0&lt;br /&gt;
SLURM_STEPID=0&lt;br /&gt;
SLURM_STEP_LAUNCHER_PORT=37975&lt;br /&gt;
SLURM_STEP_NODELIST=dwarf37&lt;br /&gt;
SLURM_STEP_NUM_NODES=1&lt;br /&gt;
SLURM_STEP_NUM_TASKS=1&lt;br /&gt;
SLURM_STEP_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_SUBMIT_DIR=/homes/mozes&lt;br /&gt;
SLURM_SUBMIT_HOST=dwarf37&lt;br /&gt;
SLURM_TASK_PID=23408&lt;br /&gt;
SLURM_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_TOPOLOGY_ADDR=due1121-prod-core-40g-a1,due1121-prod-core-40g-c1.due1121-prod-sw-100g-a9.dwarf37&lt;br /&gt;
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node&lt;br /&gt;
SLURM_UMASK=0022&lt;br /&gt;
SRUN_DEBUG=3&lt;br /&gt;
TERM=screen-256color&lt;br /&gt;
TMPDIR=/tmp&lt;br /&gt;
USER=mozes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is useful to know which hosts your job has access to; check the SLURM_JOB_NODELIST variable for that. There are many other useful environment variables here; we will leave it to you to identify the ones you need.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly-used variables we see used are $SLURM_CPUS_ON_NODE, $HOSTNAME, and $SLURM_JOB_ID.&lt;br /&gt;
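As a minimal sketch (which also runs outside Slurm), a script can size its thread count from $SLURM_CPUS_ON_NODE, falling back to 1 when the variable is unset:&lt;br /&gt;

```shell
# Use the core count Slurm allocated to size a threaded run;
# default to 1 when running outside a Slurm job
NTHREADS="${SLURM_CPUS_ON_NODE:-1}"
export OMP_NUM_THREADS="$NTHREADS"
echo "running with $NTHREADS threads"
```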
&lt;br /&gt;
== Running from a sbatch Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you will get tired of typing something like 'sbatch --mem-per-cpu=2G --time=10:00 --cpus-per-task=8 -J MyJobTitle MyScript.sh'. How are you supposed to remember all of these options every time? The answer is to create a 'submit script', which records all of them for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample sbatch script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of a line means the line is ignored.&lt;br /&gt;
## However, in the case of sbatch, lines beginning with #SBATCH are commands&lt;br /&gt;
## for sbatch itself, so I have taken the convention here of starting *every*&lt;br /&gt;
## line with a '#'. Just delete the first '#' if you want to use a line, and&lt;br /&gt;
## then modify it for your own purposes. The only exception is the first line,&lt;br /&gt;
## which *must* be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## There is one strict rule for guaranteeing Slurm reads all of your options:&lt;br /&gt;
## Do not put *any* lines above your resource requests that aren't either:&lt;br /&gt;
##    1) blank. (no other characters)&lt;br /&gt;
##    2) comments (lines must begin with '#')&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##SBATCH --mem-per-cpu=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime in DD-HH:MM:SS form. Default is 1 hour (1:00:00)&lt;br /&gt;
##SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it.&lt;br /&gt;
##SBATCH --gres=fabric:ib:1&lt;br /&gt;
&lt;br /&gt;
## GPU directive. If You don't know what this is, you probably don't need it&lt;br /&gt;
##SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
## number of cores/nodes:&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from contacting Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket]&lt;br /&gt;
## to see how we can assist in getting your &lt;br /&gt;
## job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
##SBATCH --cpus-per-task=1&lt;br /&gt;
##SBATCH --cpus-per-task=12&lt;br /&gt;
##SBATCH --nodes=2 --tasks-per-node=1&lt;br /&gt;
##SBATCH --tasks=20&lt;br /&gt;
&lt;br /&gt;
## Constraints for this job. Maybe you need to run on the elves&lt;br /&gt;
##SBATCH --constraint=elves&lt;br /&gt;
## or perhaps you just need avx processor extensions&lt;br /&gt;
##SBATCH --constraint=avx&lt;br /&gt;
&lt;br /&gt;
## Output file name. Default is slurm-%j.out where %j is the job id.&lt;br /&gt;
##SBATCH --output=MyJobTitle.o%j&lt;br /&gt;
&lt;br /&gt;
## Split the errors into a separate file. Default is the same as output&lt;br /&gt;
##SBATCH --error=MyJobTitle.e%j&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##SBATCH -J MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## Send email when certain criteria are met.&lt;br /&gt;
## Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to&lt;br /&gt;
## BEGIN, END, FAIL, REQUEUE,  and  STAGE_OUT),  STAGE_OUT  (burst buffer stage&lt;br /&gt;
## out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent&lt;br /&gt;
## of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),&lt;br /&gt;
## TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send&lt;br /&gt;
## emails for each array task). Multiple type values may be specified in a&lt;br /&gt;
## comma separated list. Unless the  ARRAY_TASKS  option  is specified, mail&lt;br /&gt;
## notifications on job BEGIN, END and FAIL apply to a job array as a whole&lt;br /&gt;
## rather than generating individual email messages for each task in the job&lt;br /&gt;
## array.&lt;br /&gt;
##SBATCH --mail-type=ALL&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
## Default is to send the mail to the e-mail address entered on the account&lt;br /&gt;
## request form.&lt;br /&gt;
##SBATCH --mail-user myemail@ksu.edu&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== File Access ==&lt;br /&gt;
Beocat has a variety of options for storing and accessing your files.&lt;br /&gt;
Every user has a home directory for general use; it is limited in size but has decent file-access performance. Those needing more storage may purchase /bulk subdirectories, which have the same decent performance&lt;br /&gt;
but are not backed up. The /fastscratch file system is a ZFS host with many NVMe drives that provides much faster&lt;br /&gt;
temporary file access. When fast IO is critical to application performance, /fastscratch, the local disk on each node, or a&lt;br /&gt;
RAM disk are the best options.&lt;br /&gt;
&lt;br /&gt;
===Home directory===&lt;br /&gt;
&lt;br /&gt;
Every user has a &amp;lt;tt&amp;gt;/homes/''username''&amp;lt;/tt&amp;gt; directory that they drop into when they log into Beocat.  &lt;br /&gt;
The home directory is for general use and provides decent performance for most file IO.  &lt;br /&gt;
Disk space in each home directory is limited to 1 TB, so larger data sets should be kept in a purchased /bulk&lt;br /&gt;
directory, and there is a limit of 100,000 files in each subdirectory of your account.&lt;br /&gt;
This file system is fully redundant, so three specific hard disks would need to fail before any data was lost.&lt;br /&gt;
All files will soon be backed up nightly to a separate file server in Nichols Hall, so if you do accidentally &lt;br /&gt;
delete something it can be recovered.&lt;br /&gt;
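To check a directory against the per-directory file limit, counting its entries is enough (a simple sketch):&lt;br /&gt;

```shell
# Count regular files under a directory tree
# (compare the result with the 100,000-file limit)
count_files() {
    find "$1" -type f | wc -l
}
count_files .
```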
&lt;br /&gt;
===Bulk directory===&lt;br /&gt;
&lt;br /&gt;
Bulk data storage may be purchased at a cost of $45/TB/year, billed monthly. A bulk directory is provided once you have contacted the Beocat system administrators in a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] and provided us with the appropriate billing information through your accounting department. We do not allow payments from personal bank accounts or personal checks; bulk directories require a monthly billable account.&lt;br /&gt;
&lt;br /&gt;
===Fast Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /fastscratch file system is faster than /bulk or /homes.&lt;br /&gt;
In order to use fastscratch, you first need to make a directory for yourself.  &lt;br /&gt;
Fast Scratch is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the fastscratch file system may get purged after 30 days.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /fastscratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Local disk===&lt;br /&gt;
&lt;br /&gt;
If you are running on a single node, it may also be faster to access your files on the local disk&lt;br /&gt;
of that node. Each job gets a subdirectory /tmp/job# (where '#' is the job ID number) on the&lt;br /&gt;
local disk of each node the job uses; within the job this is presented simply as /tmp, so you can&lt;br /&gt;
read and write /tmp directly rather than needing the /tmp/job# path.&lt;br /&gt;
&lt;br /&gt;
You may need to copy files to the&lt;br /&gt;
local disk at the start of your script, or point your application's output directory&lt;br /&gt;
at the local disk. Be sure to copy any files you want to keep off the local disk before&lt;br /&gt;
the job finishes, since Slurm removes all files in your job's /tmp directory when the job completes&lt;br /&gt;
or aborts. Use 'kstat -l -h' to see how much /tmp space is available on each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files to the tmp directory if needed&lt;br /&gt;
cp $input_files /tmp&lt;br /&gt;
&lt;br /&gt;
# Make an 'out' directory to pass to the app if needed&lt;br /&gt;
mkdir /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Example of running an app and passing the tmp directory in/out&lt;br /&gt;
app -input_directory /tmp -output_directory /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Copy the 'out' directory back to the current working directory after the run&lt;br /&gt;
cp -rp /tmp/out .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAM disk===&lt;br /&gt;
&lt;br /&gt;
If you need ultrafast access to files, you can use a RAM disk which is a file system set up in the &lt;br /&gt;
memory of the compute node you are running on.  The RAM disk is limited to the requested memory on that node, so you should account for this usage when you request &lt;br /&gt;
memory for your job. Below is an example of how to use the RAM disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files over if necessary&lt;br /&gt;
cp $any_input_files /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Run the application, possibly giving it the path to the RAM disk to use for output files&lt;br /&gt;
app -output_directory /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Copy files from the RAM disk to the current working directory and clean up&lt;br /&gt;
cp /dev/shm/* .&lt;br /&gt;
rm -f /dev/shm/*&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When you leave KSU===&lt;br /&gt;
&lt;br /&gt;
If you are done with your account and leaving KSU, please clean up your directory, move any files&lt;br /&gt;
that need to be kept to your supervisor's account, and notify us so that we can disable your&lt;br /&gt;
account.  The easiest way to move your files to your supervisor's account is for them to set up&lt;br /&gt;
a subdirectory for you with the appropriate write permissions.  The example below shows moving &lt;br /&gt;
just a user's 'data' subdirectory to their supervisor.  The 'nohup' command is used so that the move will &lt;br /&gt;
continue even if your terminal session gets disconnected partway through.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Supervisor:&lt;br /&gt;
mkdir /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&lt;br /&gt;
# Student:&lt;br /&gt;
nohup mv /homes/$USER/data /bulk/$SUPERVISOR_USERNAME/$USER &amp;amp;&lt;br /&gt;
&lt;br /&gt;
# Once the move is complete, the Supervisor should limit the permissions for the directory again by removing the student's access:&lt;br /&gt;
chown $USER: -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==File Sharing==&lt;br /&gt;
&lt;br /&gt;
This section will cover methods of sharing files with other users within Beocat and on remote systems.&lt;br /&gt;
In the past, Beocat users were allowed to keep their&lt;br /&gt;
/homes and /bulk directories open so that any other user could&lt;br /&gt;
access their files.  In order to bring Beocat into alignment with&lt;br /&gt;
State of Kansas regulations and industry norms, all users must now have their /homes, /bulk, /scratch, and /fastscratch directories&lt;br /&gt;
locked down from other users. You can still share files and directories within your group or with individual users&lt;br /&gt;
using group and individual ACLs (Access Control Lists), which are explained below.&lt;br /&gt;
Beocat staff are exempted from this&lt;br /&gt;
policy, as we need to work freely with all users, and will manage our&lt;br /&gt;
subdirectories to minimize access.&lt;br /&gt;
&lt;br /&gt;
===Securing your home directory with the setacls script===&lt;br /&gt;
&lt;br /&gt;
If you do not wish to share files or directories with other users, you do not need to do anything,&lt;br /&gt;
as rwx access for other users has already been removed.&lt;br /&gt;
If you want to share files or directories you can either use the '''setacls''' script or configure&lt;br /&gt;
the ACLs (Access Control Lists) manually.&lt;br /&gt;
&lt;br /&gt;
Running '''setacls -h''' shows how to use the script.&lt;br /&gt;
  &lt;br /&gt;
  Eos: setacls -h&lt;br /&gt;
  setacls [-r] [-w] [-g group] [-u user] -d /full/path/to/directory&lt;br /&gt;
  Execute pemission will always be applied, you may also choose r or w&lt;br /&gt;
  Must specify at least one group or user&lt;br /&gt;
  Must specify at least one directory, and it must be the full path&lt;br /&gt;
  Example: setacls -r -g ksu-cis-hpc -u mozes -d /homes/daveturner/shared_dir&lt;br /&gt;
&lt;br /&gt;
You can specify the permissions as -r for read, -w for write, or both.&lt;br /&gt;
You can provide a priority group to share with, which is the same group used in a --partition=&lt;br /&gt;
statement in a job submission script.  You can also specify individual users.&lt;br /&gt;
You can specify a file or a directory to share.  If a directory is specified, then all files in that&lt;br /&gt;
directory will also be shared, and all files created in the directory later will also be shared.&lt;br /&gt;
&lt;br /&gt;
The script will set everything up for you, telling you the commands it is executing along the way,&lt;br /&gt;
then show the resulting ACLs at the end with the '''getfacl''' command.  Below is an example of &lt;br /&gt;
sharing the directory '''test_directory''' in my /bulk/daveturner directory with Nathan.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
Beocat&amp;gt;  cd /bulk/daveturner&lt;br /&gt;
Beocat&amp;gt;  mkdir test_directory&lt;br /&gt;
Beocat&amp;gt;  setacls -r -w -u nathanrwells -d /bulk/daveturner/test_directory&lt;br /&gt;
&lt;br /&gt;
Opening up base directory /bulk/daveturner with X execute permission only&lt;br /&gt;
  setfacl -m u:nathanrwells:X /bulk/daveturner&lt;br /&gt;
&lt;br /&gt;
Setting Xrw for directory/file /bulk/daveturner/test_directory&lt;br /&gt;
  setfacl -m u:nathanrwells:Xrw -R /bulk/daveturner/test_directory&lt;br /&gt;
  setfacl -d -m u:nathanrwells:Xrw -R /bulk/daveturner/test_directory&lt;br /&gt;
&lt;br /&gt;
The ACLs on directory /bulk/daveturner/test_directory are set to:&lt;br /&gt;
&lt;br /&gt;
getfacl: Removing leading '/' from absolute path names&lt;br /&gt;
# file: bulk/daveturner/test_directory&lt;br /&gt;
USER   daveturner        rwx  rwx&lt;br /&gt;
user   nathanrwells      rwx  rwx&lt;br /&gt;
GROUP  daveturner_users  r-x  r-x&lt;br /&gt;
group  beocat_support    r-x  r-x&lt;br /&gt;
mask                     rwx  rwx&lt;br /&gt;
other                    ---  ---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The '''getfacl''' run by the script now shows that user '''nathanrwells''' has &lt;br /&gt;
read and write permissions to that directory and execute access to all directories&lt;br /&gt;
leading up to it.&lt;br /&gt;
&lt;br /&gt;
====Manually configuring your ACLs====&lt;br /&gt;
&lt;br /&gt;
If you want to manually configure the ACLs you can use the directions below to do what the '''setacls''' &lt;br /&gt;
script would do for you.&lt;br /&gt;
You first need to provide the minimum execute access to your /homes&lt;br /&gt;
or /bulk directory before sharing individual subdirectories.  Setting the ACL to execute only allows those&lt;br /&gt;
in your group to reach the subdirectories you share, while withholding read access means they cannot&lt;br /&gt;
list the other files or subdirectories in your main directory.  Keep in mind that files whose names they already know may still be reachable&lt;br /&gt;
if those files' own permissions allow it, so you may want to lock those down individually.  Below is an example of how I would change my&lt;br /&gt;
/homes/daveturner directory to allow the ksu-cis-hpc group execute access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:X /homes/daveturner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your research group owns any nodes on Beocat, then you have a group name that can be used to securely share&lt;br /&gt;
files with others within your group.  Below is an example of creating a directory called 'share_hpc', &lt;br /&gt;
then providing access to my ksu-cis-hpc group&lt;br /&gt;
(my group is ksu-cis-hpc, so I submit jobs with --partition=ksu-cis-hpc.q).&lt;br /&gt;
Using -R applies these changes recursively to all files and directories in that subdirectory, while changing the defaults with the 'setfacl -d' command ensures that files and directories created&lt;br /&gt;
later will be created with these same ACLs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the 'share_hpc' directory.  If you also want&lt;br /&gt;
them to be able to write or modify files in that directory then change the ':rX' to ':rwX' instead. e.g. 'setfacl -d -m g:ksu-cis-hpc:rwX -R share_hpc'&lt;br /&gt;
&lt;br /&gt;
If you want to know what groups you belong to, use the command below.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
groups&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If your group does not own any nodes, you can still request a group name and manage the participants yourself.&lt;br /&gt;
To do so, contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] or email us at beocat@cs.ksu.edu.&lt;br /&gt;
If you want to share a directory with only a few people you can manage your ACLs using individual usernames&lt;br /&gt;
instead of with a group.&lt;br /&gt;
&lt;br /&gt;
You can use the '''getfacl''' command to see which users and groups have access to a given directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
getfacl share_hpc&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ACLs give you great flexibility in controlling file access at the&lt;br /&gt;
group level.  Below is a more advanced example where I set up a directory to be shared with&lt;br /&gt;
my ksu-cis-hpc group, Dan's ksu-cis-dan group, and an individual user 'mozes' who I also want&lt;br /&gt;
to have write access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc_dan_mozes&lt;br /&gt;
# acls are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
getfacl share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc_dan_mozes&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  user:mozes:rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  group:ksu-cis-dan:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:user:mozes:rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:group:ksu-cis-dan:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::--x&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Openly sharing files on the web===&lt;br /&gt;
&lt;br /&gt;
If you create a 'public_html' directory in your home directory, then any files put there will be shared &lt;br /&gt;
openly on the web.  There is no way to restrict who has access to those files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
cd&lt;br /&gt;
mkdir public_html&lt;br /&gt;
# Opt-in to letting the webserver access your home directory:&lt;br /&gt;
setfacl -m g:public_html:x ~/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then access the data from a web browser using the URL:&lt;br /&gt;
&lt;br /&gt;
http://people.beocat.ksu.edu/~your_user_name&lt;br /&gt;
&lt;br /&gt;
This will show a list of the files you have in your public_html subdirectory.&lt;br /&gt;
&lt;br /&gt;
===Globus===&lt;br /&gt;
&lt;br /&gt;
We have a page here dedicated to [[Globus]]&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of Slurm's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to sbatch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  --array=n[-m[:s]]&lt;br /&gt;
     Submits a so-called Array Job, i.e. an array of identical tasks being differentiated only by an index number and being treated by Slurm&lt;br /&gt;
     almost like a series of jobs. The option argument to --array specifies the number of array job tasks and the index number which will be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SLURM_ARRAY_TASK_ID. The option&lt;br /&gt;
     arguments n, and m will be available through the environment variables SLURM_ARRAY_TASK_MIN and SLURM_ARRAY_TASK_MAX.&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or a range with a step size.&lt;br /&gt;
     Hence, the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SLURM_ARRAY_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array jobs are commonly used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks follow a slightly different naming convention (which can be controlled in the same way as mentioned above).&lt;br /&gt;
 &lt;br /&gt;
     slurm-%A_%a.out&lt;br /&gt;
&lt;br /&gt;
     %A is the SLURM_ARRAY_JOB_ID, and %a is the SLURM_ARRAY_TASK_ID&lt;br /&gt;
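The 2-10:2 range above can be expanded the same way with the standard seq tool (start, step, end), which makes it easy to sanity-check an --array specification before submitting:&lt;br /&gt;

```shell
# Expand an n-m:s array range the way Slurm would: seq START STEP END
# prints the task ids 2 4 6 8 10, one per line
seq 2 2 10
```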
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses; one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1, that I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable and submit the script again. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=50-200:50&lt;br /&gt;
RUNSIZE=$SLURM_ARRAY_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and Slurm understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to take each line of my dataset and generate a new submit script and submit the job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     sbatch ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as sbatch has to verify each job as it is submitted. This can be done easily with array jobs, as long as you know the number of lines in the dataset. That number can be obtained with 'wc -l dataset.txt'; in this case let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses a subshell via the backtick (`) operator, and has the sed command print only line number $SLURM_ARRAY_TASK_ID of the file dataset.txt.&lt;br /&gt;
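You can try the same line-picking mechanism outside Slurm by setting SLURM_ARRAY_TASK_ID by hand (the dataset contents here are illustrative):&lt;br /&gt;

```shell
# Simulate one array task locally: pick the Nth line of the dataset
printf 'alpha\nbeta\ngamma\n' > dataset.txt
SLURM_ARRAY_TASK_ID=2
sed -n "${SLURM_ARRAY_TASK_ID}p" dataset.txt    # prints: beta
rm dataset.txt
```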
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so sbatch doesn't have to verify as many.&lt;br /&gt;
&lt;br /&gt;
To give you an idea of the time saved: submitting 1 job takes 1-2 seconds. By extension, if you are submitting 5000, that is 5,000-10,000 seconds, or 1.5-3 hours.&lt;br /&gt;
&lt;br /&gt;
== Checkpoint/Restart using DMTCP ==&lt;br /&gt;
&lt;br /&gt;
DMTCP (Distributed MultiThreaded CheckPointing) is software that checkpoints your application without requiring any modification, and&lt;br /&gt;
can be set up to automatically restart your job from the last checkpoint if, for example, the node you are running on fails.  &lt;br /&gt;
This has been tested successfully&lt;br /&gt;
on Beocat for some scalar and OpenMP codes, but has failed on all MPI tests so far.  We encourage users to&lt;br /&gt;
try DMTCP if their non-MPI jobs run longer than 24 hours.  If you want to try this, please contact us first since we are still&lt;br /&gt;
experimenting with DMTCP.&lt;br /&gt;
&lt;br /&gt;
The sample job submission script below shows how dmtcp_launch is used to start the application, then dmtcp_restart is used to start from a checkpoint if the job has failed and been rescheduled.&lt;br /&gt;
If you are putting this in an array script, then add the Slurm array task ID to the end of the checkpoint directory name&lt;br /&gt;
like &amp;lt;B&amp;gt;ckptdir=ckpt-$SLURM_ARRAY_TASK_ID&amp;lt;/B&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --mem=50G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS/2016.4-foss-2017beocatb-hybrid&lt;br /&gt;
  module load DMTCP&lt;br /&gt;
  module list&lt;br /&gt;
  &lt;br /&gt;
  ckptdir=ckpt&lt;br /&gt;
  mkdir -p $ckptdir&lt;br /&gt;
  export DMTCP_CHECKPOINT_DIR=$ckptdir&lt;br /&gt;
  &lt;br /&gt;
  if ! ls -1 $ckptdir | grep -c dmtcp_restart_script &amp;gt; /dev/null&lt;br /&gt;
  then&lt;br /&gt;
     echo &amp;quot;Using dmtcp_launch to start the app the first time&amp;quot;&lt;br /&gt;
     dmtcp_launch --no-coordinator mpirun -np 1 -x OMP_NUM_THREADS=4 gmx_mpi mdrun -nsteps 50000 -ntomp 4 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  else&lt;br /&gt;
     echo &amp;quot;Using dmtcp_restart from $ckptdir to continue from a checkpoint&amp;quot;&lt;br /&gt;
     dmtcp_restart $ckptdir/*.dmtcp&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
You will need to run several tests to verify that DMTCP is working properly with your application.&lt;br /&gt;
First, run a short test without DMTCP and another with DMTCP with the checkpoint interval set to 5 minutes&lt;br /&gt;
by adding the line &amp;lt;B&amp;gt;export DMTCP_CHECKPOINT_INTERVAL=300&amp;lt;/B&amp;gt; to your script.  Then use &amp;lt;B&amp;gt;kstat -d 1&amp;lt;/B&amp;gt; to&lt;br /&gt;
check that the memory in both runs is close to the same.  Also use this information to calculate the time &lt;br /&gt;
that each checkpoint takes.  In most cases I've seen times less than a minute for checkpointing that will normally&lt;br /&gt;
be done once each hour.  If your application is taking more time, let us know.  Sometimes this can be sped up&lt;br /&gt;
by simply turning off compression by adding the line &amp;lt;B&amp;gt;export DMTCP_GZIP=0&amp;lt;/B&amp;gt;.  Make sure to remove the&lt;br /&gt;
line where you set the checkpoint interval to 300 seconds so that the default time of once per hour will be used.&lt;br /&gt;
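The two DMTCP environment variables described above can be set near the top of the job script; a minimal sketch, using the values from the text:&lt;br /&gt;

```shell
# checkpoint every 5 minutes while testing (remove this line to get the hourly default)
export DMTCP_CHECKPOINT_INTERVAL=300

# if checkpointing takes too long, disabling gzip compression may speed it up
export DMTCP_GZIP=0

echo "interval=${DMTCP_CHECKPOINT_INTERVAL}s gzip=$DMTCP_GZIP"
```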
&lt;br /&gt;
After verifying that your code completes using DMTCP and does not take significantly more time or memory, you&lt;br /&gt;
will need to start a run then &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; it after the first checkpoint, then resubmit the same script to make &lt;br /&gt;
sure that it restarts and runs to completion.  If you are working with an array job script, the last step is to try a few&lt;br /&gt;
array tasks at once to make sure there is no conflict between the jobs.&lt;br /&gt;
&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'srun'. srun uses the exact same command-line arguments as sbatch, but you need to add the following arguments at the end: &amp;lt;tt&amp;gt;--pty bash&amp;lt;/tt&amp;gt;. If no node is available with your resource requirements, srun will tell you something like the following:&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
 srun: Force Terminated job 217&lt;br /&gt;
 srun: error: CPU count per node can not be satisfied&lt;br /&gt;
 srun: error: Unable to allocate resources: Requested node configuration is not available&lt;br /&gt;
Note that, like sbatch, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
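A complete interactive request might look like the following. The resource values here are placeholders to adjust for your own job; the command is echoed rather than executed so the sketch is self-contained.&lt;br /&gt;

```shell
# an interactive request: 1 node, 4 cores, 8 GB of memory, 2 hours, ending in --pty bash
cmd="srun --nodes=1 --cpus-per-task=4 --mem=8G --time=2:00:00 --pty bash"

# on a Beocat headnode you would run this directly instead of echoing it
echo "$cmd"
```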
&lt;br /&gt;
== Connecting to an existing job ==&lt;br /&gt;
You can connect to an existing job using &amp;lt;B&amp;gt;srun&amp;lt;/B&amp;gt; in the same way that the &amp;lt;B&amp;gt;MonitorNode&amp;lt;/B&amp;gt; command&lt;br /&gt;
allowed us to in the old cluster.  This is essentially like using ssh to get into the node where your job is running which&lt;br /&gt;
can be very useful in allowing you to look at files in /tmp/job# or in running &amp;lt;B&amp;gt;htop&amp;lt;/B&amp;gt; to view the &lt;br /&gt;
activity level for your job.&lt;br /&gt;
&lt;br /&gt;
 srun --jobid=# --pty bash                        where '#' is the job ID number&lt;br /&gt;
&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not support modifying job parameters once the job has been submitted. It can be done, but there are numerous catches, and all of the variations can be a bit problematic; it is normally easier to simply delete the job (using '''scancel ''jobid''''') and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
&lt;br /&gt;
As it is unsupported, this is an exercise left to the reader. A starting point is &amp;lt;tt&amp;gt;man scontrol&amp;lt;/tt&amp;gt;.&lt;br /&gt;
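For readers who do want to experiment, &amp;lt;tt&amp;gt;scontrol update&amp;lt;/tt&amp;gt; can change some job parameters. The job ID and values below are hypothetical, and this remains unsupported; prefer scancel plus resubmission.&lt;br /&gt;

```shell
# hypothetical examples (job ID 123456 is a placeholder):
#   scontrol update JobId=123456 TimeLimit=2-00:00:00   # change the time limit
#   scontrol show job 123456                            # inspect the job to confirm
echo "see: man scontrol"
```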
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;--gres=killable:1&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
&lt;br /&gt;
''Note: This is a submit-time only request, it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;scancel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
Some users are members of projects that have contributed to Beocat. When those users have contributed nodes, the group gets access to a &amp;quot;partition&amp;quot; giving them priority on those nodes.&lt;br /&gt;
&lt;br /&gt;
In most situations, the scheduler will automatically add those priority partitions to the jobs as submitted. You should not need to include a partition list in your job submission.&lt;br /&gt;
&lt;br /&gt;
There are currently just a few exceptions that we will not automatically add:&lt;br /&gt;
* ksu-chem-mri.q&lt;br /&gt;
* ksu-gen-gpu.q&lt;br /&gt;
* ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
If you have access to any of the non-automatic partitions, and need the resources in that partition, you can alter your &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines to include it:&lt;br /&gt;
 #SBATCH --partition=ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
Otherwise, you shouldn't modify the partition line at all unless you really know what you're doing.&lt;br /&gt;
&lt;br /&gt;
== Graphical Applications ==&lt;br /&gt;
Some applications are graphical and need to have some graphical input/output. We currently accomplish this with X11 forwarding or [[OpenOnDemand]].&lt;br /&gt;
=== OpenOnDemand ===&lt;br /&gt;
[[OpenOnDemand]] is likely the easier and more performant way to run a graphical application on the cluster.&lt;br /&gt;
# Visit [https://ondemand.beocat.ksu.edu/ OnDemand] and log in with your cluster credentials.&lt;br /&gt;
# Check the &amp;quot;Interactive Apps&amp;quot; dropdown. We may have a workflow ready for you; if not, choose the Desktop.&lt;br /&gt;
# Select the resources you need.&lt;br /&gt;
# Select Launch.&lt;br /&gt;
# A job is now submitted to the cluster; once it starts, you'll see a Connect button.&lt;br /&gt;
# Use the app as needed. If using the Desktop, start your graphical application.&lt;br /&gt;
&lt;br /&gt;
=== X11 Forwarding ===&lt;br /&gt;
==== Connecting with an X11 client ====&lt;br /&gt;
===== Windows =====&lt;br /&gt;
If you are running Windows, we recommend MobaXTerm as your file/ssh manager because it is one relatively simple tool that does everything. MobaXTerm also automatically connects with X11 forwarding enabled.&lt;br /&gt;
===== Linux/OSX =====&lt;br /&gt;
Both Linux and OSX can connect in an X11 forwarding mode. Linux will have all of the tools you need installed already, OSX will need [https://www.xquartz.org/ XQuartz] installed.&lt;br /&gt;
&lt;br /&gt;
Then you will need to change your 'ssh' command slightly:&lt;br /&gt;
&lt;br /&gt;
 ssh -Y eid@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
The '''-Y''' argument tells ssh to setup X11 forwarding.&lt;br /&gt;
==== Starting a graphical job ====&lt;br /&gt;
All graphical jobs, by design, must be interactive, so we'll use the srun command. On a headnode, we run the following:&lt;br /&gt;
 # load an X11 enabled application&lt;br /&gt;
 module load Octave&lt;br /&gt;
 # start an X11 job, sbatch arguments are accepted for srun as well, 1 node, 1 hour, 1 gb of memory&lt;br /&gt;
 srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 octave --gui&lt;br /&gt;
&lt;br /&gt;
Because these jobs are interactive, they may not be able to run at all times, depending on how busy the scheduler is at any point in time. '''--pty --x11''' are required arguments setting up the job, and '''octave --gui''' is the command to run inside the job.&lt;br /&gt;
&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The sacct tool will read Slurm's accounting database and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== sacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
sacct -j 1122334455 -l&lt;br /&gt;
# if you don't know the job id, you can look at your jobs started since some day:&lt;br /&gt;
sacct -S 2017-01-01&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
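The full &amp;lt;tt&amp;gt;-l&amp;lt;/tt&amp;gt; output is very wide; sacct's &amp;lt;tt&amp;gt;--format&amp;lt;/tt&amp;gt; option trims it to just the columns you care about. The job ID below is the same hypothetical one used above, and the command is echoed rather than executed so the sketch is self-contained.&lt;br /&gt;

```shell
# trim sacct output to the columns that usually matter when debugging a job
cmd="sacct -j 1122334455 --format=JobID,JobName,Elapsed,State,ExitCode,ReqMem,MaxRSS"

# on Beocat you would run this directly instead of echoing it
echo "$cmd"
```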
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|218||218||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||12||00:00:00||FAILED||2:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=12,mem=1G,node=1||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.batch||218.batch||batch||||137940K||dwarf37||0||137940K||1576K||dwarf37||0||1576K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||1.36G||0||0||0||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.0||218.0||qqqqstat||||204212K||dwarf37||0||204212K||1420K||dwarf37||0||1420K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||196.52M||Unknown||Unknown||Unknown||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the columns showing Elapsed and State, you can see that they show 00:00:00 and FAILED respectively. This means that the job started and then promptly ended. This points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|220||220||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:01:27||TIMEOUT||0:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=1,mem=1G,node=1||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.batch||220.batch||batch||||370716K||dwarf37||0||370716K||7060K||dwarf37||0||7060K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:28||CANCELLED||0:15||1.23G||0||0||0||1Gn||0||0.16M||dwarf37||0||0.16M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.0||220.0||sleep||||204212K||dwarf37||0||107916K||1000K||dwarf37||0||620K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:27||CANCELLED||0:15||1.54G||Unknown||Unknown||Unknown||1Gn||0||0.05M||dwarf37||0||0.05M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we can see some pointers to the issue. The job ran out of time (TIMEOUT) and then was killed (CANCELLED).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|221||221||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:00:00||CANCELLED by 0||0:0||||Unknown||Unknown||Unknown||1Mn||||||||||||||||||||||||cpu=1,mem=1M,node=1||cpu=1,mem=1M,node=1&lt;br /&gt;
|-&lt;br /&gt;
|221.batch||221.batch||batch||||137940K||dwarf37||0||137940K||1144K||dwarf37||0||1144K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:00:01||CANCELLED||0:15||2.62G||0||0||0||1Mn||0||0||dwarf37||65534||0||0||dwarf37||65534||0||||||||cpu=1,mem=1M,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we see it was &amp;quot;CANCELLED by 0&amp;quot;, then we look at the AllocTRES column to see our allocated resources, and see that 1MB of memory was granted. Combine that with the column &amp;quot;MaxRSS&amp;quot; and we see that the memory granted was less than the memory we tried to use, thus the job was &amp;quot;CANCELLED&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=1141</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=1141"/>
		<updated>2025-06-26T13:48:52Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Common Storage For Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo Enabled, you will be asked to approve ''each'' connection through Duo's push system to your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device is not currently able to be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You would need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value can be a currently valid passcode from Duo, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will push the prompt to your smart device; &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have Duo call your phone number to approve.&lt;br /&gt;
&lt;br /&gt;
===== OpenSSH =====&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you would like to put this in your ~/.ssh/config file, it will send the environment variable whenever it is set to Beocat upon connection:&lt;br /&gt;
 Host headnode.beocat.ksu.edu&lt;br /&gt;
     HostName headnode.beocat.ksu.edu&lt;br /&gt;
     User YOUR_EID_GOES_HERE&lt;br /&gt;
     SendEnv DUO_PASSCODE&lt;br /&gt;
&lt;br /&gt;
From there you would simply do the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
export DUO_PASSCODE=push&lt;br /&gt;
ssh headnode.beocat.ksu.edu&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== PuTTY =====&lt;br /&gt;
In PuTTY to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under Environment variables, enter '''&amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;''' beside ''Variable'' and '''&amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;''' beside ''Value''. Click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; to save this change for PuTTY to remember this change.&lt;br /&gt;
&lt;br /&gt;
===== MobaXTerm =====&lt;br /&gt;
There doesn't seem to be a way to send an environment variable in MobaXTerm, so you won't be able to set DUO_PASSCODE to an actual valid temporary key. To get MobaXterm to push automatically, you can edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Common issues ====&lt;br /&gt;
; Duo Pushes sometimes don't show up in a timely manner. &lt;br /&gt;
: If you open the Duo MFA application on your smart device when you're expecting an authentication challenge, the prompts seem to show up faster.&lt;br /&gt;
; MobaXTerm has excessive prompts for managing files.&lt;br /&gt;
: MobaXTerm has a sidebar browser for managing your files. Unfortunately, that sidebar initiates another SSH connection for every file transfer, which triggers a Duo push that you need to approve. MobaXTerm's dedicated SFTP Session doesn't have this issue: it initiates a connection, keeps it open, and re-uses it as needed, so you will have far fewer Duo approvals to respond to. If you choose to use the dedicated SFTP Session, you might consider disabling the sidebar file browser: &amp;quot;Advanced SSH settings&amp;quot; -&amp;gt; &amp;quot;SSH-browser type&amp;quot; -&amp;gt; &amp;quot;None&amp;quot;&lt;br /&gt;
; WinSCP has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, WinSCP will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable [https://winscp.net/eng/docs/ui_pref_resume reconnections during idle periods] if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, FileZilla will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable timeouts and/or connection retries under the &amp;lt;tt&amp;gt;Edit -&amp;gt; Settings -&amp;gt; Connection&amp;lt;/tt&amp;gt; menu if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has excessive prompts for managing files.&lt;br /&gt;
: Filezilla opens one connection for browsing the system. Transferring files opens 1-4 additional connections when the transfers start. Once they finish, those connections disconnect. If you start additional transfers, new connections will be opened. Every one of those connections must be approved through Duo MFA on your smart device. You can adjust the number of connections that FileZilla opens for transfers if you like. &amp;lt;tt&amp;gt;File -&amp;gt; Site Manager -&amp;gt; (choose the site you're changing) -&amp;gt; Transfer Settings -&amp;gt; Limit number of simultaneous connections&amp;lt;/tt&amp;gt;.&lt;br /&gt;
: Another option is to disable processing the transfer queue, add the things to it you want to transfer and then re-enable the transfer queue. Then at least it will re-use the connections until the queue is empty.&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Do Beocat jobs have a maximum Time Limit ==&lt;br /&gt;
Yes, there is a time limit: the scheduler will reject jobs longer than 28 days. The other side of that is that we reserve the right to a maintenance period every 14 days. Unless it is an emergency, we will give at least 2 weeks' notice before these maintenance periods actually occur. Jobs of 14 days or less that have started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
With that being said, there is no guarantee that any physical piece of hardware, or the software that runs on it, will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we always recommend writing your jobs so that they can be resumed if they get interrupted.&lt;br /&gt;
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || nfs on top of ZFS || Faster than /homes or /bulk, built with all NVME disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || XFS || Good for I/O intensive jobs. Unique per job, culled when the job finishes.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry, your default working directory&lt;br /&gt;
is your homedir and it will be fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
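One way to make the copy-back harder to forget is an EXIT trap, which runs even when the program fails partway through. This is a minimal sketch under stated assumptions: the file names are placeholders, and the fallback directory exists only so the sketch runs outside a job.&lt;br /&gt;

```shell
# use $TMPDIR when running under Slurm; fall back to a fresh temp dir for local testing
scratch="${TMPDIR:-$(mktemp -d)}"

# copy results out of scratch when the script exits, successful or not
trap 'cp "$scratch/output.data" "./results.$$" 2>/dev/null' EXIT

# stand-in for the real computation writing into the scratch directory
echo "result" > "$scratch/output.data"
```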
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines are sitting idle because the owners have no need for it at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled to these &amp;quot;owned&amp;quot; machines by users outside of the true group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of said machine comes along and submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it uses a checkpointing algorithm it may even complete faster. The trade-off of marking a job &amp;quot;killable&amp;quot; is that some applications need a significant amount of runtime and cannot resume from partial output, meaning the job may get restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there are a non-trivial amount of additional nodes that the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking the job killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state to restart a previous session) itself, it could cause your job to take longer to complete.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts should start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines, which fail and require manual cleanup, so we enforce this rule. The warning message simply informs you that the script should include such a line, in most cases #!/bin/bash or #!/bin/tcsh, to indicate which program should be used to run the script. When the line is missing from a script, your default login shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
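A minimal valid script looks like the following sketch (the filename is arbitrary):&lt;br /&gt;

```shell
# Create a job script whose first line is a valid interpreter line.
cat > myjob_example.sh <<'EOF'
#!/bin/bash
hostname
EOF
chmod u+x myjob_example.sh
head -1 myjob_example.sh   # the #! line that sbatch checks for
```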
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error means your job script needs a #!/bin/bash or similar line. In this case the line exists, but it does not point to an executable file, so the script cannot run. Most likely you wanted #!/bin/bash instead of whatever is there.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SlurmBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
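The same request can also live inside the script itself as an #SBATCH directive, so you don't have to retype it on every submission. A sketch, with an arbitrary script name and a hypothetical memory request:&lt;br /&gt;

```shell
# Embed the 10-hour time limit (and a memory request) in the script header.
cat > ten_hour_job.sh <<'EOF'
#!/bin/bash
#SBATCH --time=0-10:00:00
#SBATCH --mem-per-cpu=2G
echo "job body goes here"
EOF
# On Beocat:  sbatch ten_hour_job.sh
grep SBATCH ten_hour_job.sh
```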
&lt;br /&gt;
== Help my error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on their execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances, and depending on the job requirements, we '''may''' be able to intervene manually. Doing so prevents other users from using the node(s) you are currently using, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. Whether this matters depends on how message-heavy your job is. If you would like to avoid this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
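For example, an MPI job script can request the InfiniBand fabric in its header. This is a sketch; the script name, node counts, and program are hypothetical:&lt;br /&gt;

```shell
# Hypothetical MPI job script requesting InfiniBand on every allocated node.
cat > mpi_job.sh <<'EOF'
#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --gres=fabric:ib:1
mpirun ./my_mpi_program
EOF
grep fabric mpi_job.sh
```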
&lt;br /&gt;
== Help! when I use sbatch I get an error about line breaks ==&lt;br /&gt;
Beocat is a Linux system. Operating systems use certain patterns of characters to indicate line breaks in their files. Linux and similar operating systems use '\n' as their line-break character; Windows uses '\r\n'.&lt;br /&gt;
&lt;br /&gt;
If you're getting an error that looks like this:&lt;br /&gt;
 sbatch: error: Batch script contains DOS line breaks (\r\n)&lt;br /&gt;
 sbatch: error: instead of expected UNIX line breaks (\n).&lt;br /&gt;
&lt;br /&gt;
It means that your script uses Windows line endings. You can convert it with the &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; command:&lt;br /&gt;
 dos2unix myscript.sh&lt;br /&gt;
&lt;br /&gt;
It would probably be beneficial to configure your editor to save files with UNIX line breaks in the future.&lt;br /&gt;
* Visual Studio Code -- &amp;quot;Text Editor&amp;quot; &amp;gt; &amp;quot;Files&amp;quot; &amp;gt; &amp;quot;Eol&amp;quot;&lt;br /&gt;
* Notepad++ -- &amp;quot;Edit&amp;quot; &amp;gt; &amp;quot;EOL Conversion&amp;quot;&lt;br /&gt;
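You can also detect and fix the problem from the command line. dos2unix is the tool we recommend on Beocat; the sketch below uses printf, grep, and GNU sed so it can be followed anywhere:&lt;br /&gt;

```shell
# Create a file with Windows (CRLF) line endings to demonstrate detection.
printf '#!/bin/bash\r\nhostname\r\n' > crlf_script.sh
cr=$(printf '\r')
grep -q "$cr" crlf_script.sh && echo "has Windows line endings"
# On Beocat:  dos2unix crlf_script.sh
# The same conversion with plain GNU sed:
sed -i 's/\r$//' crlf_script.sh
grep -q "$cr" crlf_script.sh || echo "now has UNIX line endings"
```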
&lt;br /&gt;
== Help! when logging into OnDemand I get a '400 Bad request' message ==&lt;br /&gt;
Unfortunately, there are some known issues with OnDemand and how it handles some of the complexities behind the scenes. This involves browser cookies that (occasionally) get too large and make it so you get these messages upon login.&lt;br /&gt;
&lt;br /&gt;
The only workaround is to clear your browser cookies (although you can limit it to clearing just the ksu.edu ones).&lt;br /&gt;
&lt;br /&gt;
Details for specific browsers are below:&lt;br /&gt;
&lt;br /&gt;
* [https://support.mozilla.org/en-US/kb/clear-cookies-and-site-data-firefox Firefox]&lt;br /&gt;
* [https://support.microsoft.com/en-us/microsoft-edge/delete-cookies-in-microsoft-edge-63947406-40ac-c3b8-57b9-2a946a29ae09 Edge]&lt;br /&gt;
* [https://support.google.com/chrome/answer/95647?sjid=1537101898131489753-NA#zippy=%2Cdelete-cookies-from-a-site Chrome]&lt;br /&gt;
* [https://support.apple.com/guide/safari/manage-cookies-sfri11471/mac Safari]&lt;br /&gt;
* If you are using some other browser, we recommend searching Google for &amp;lt;tt&amp;gt;$browsername clear site cookies&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request to Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket]. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed using our [[Group Management]] application.&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the directory. If you also want them to be able to write or modify files in that directory, change the ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'. As with other permissions, the individuals will need access through every level of the directory hierarchy. [[LinuxBasics#Access_Control_Lists|It may be best to review our more in-depth topic on Access Control Lists.]]&lt;br /&gt;
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to use: for example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;' to bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have &amp;quot;info&amp;quot; pages. For example, if you needed information on finding a file, you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=SlurmBasics&amp;diff=1140</id>
		<title>SlurmBasics</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=SlurmBasics&amp;diff=1140"/>
		<updated>2025-06-26T13:48:17Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* The Rocky/Slurm nodes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== The Rocky/Slurm nodes ==&lt;br /&gt;
&lt;br /&gt;
We converted Beocat from CentOS Linux to Rocky Linux on April 1, 2024.  Any applications or libraries compiled on the old system must be recompiled.&lt;br /&gt;
&lt;br /&gt;
=== Using Modules ===&lt;br /&gt;
&lt;br /&gt;
If you're using a common code that others may also be using, we may already have it compiled in a module.  You can list the available modules and load an application as in the example below for GROMACS.&lt;br /&gt;
&lt;br /&gt;
 eos&amp;gt;  &amp;lt;B&amp;gt;module avail&amp;lt;/B&amp;gt;&lt;br /&gt;
 eos&amp;gt;  &amp;lt;B&amp;gt;module load GROMACS&amp;lt;/B&amp;gt;&lt;br /&gt;
 eos&amp;gt;  &amp;lt;B&amp;gt;module list&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When a module is loaded, all the necessary libraries are also loaded and the paths to the libraries and executables are set up automatically.  Loading GROMACS, for example, also loads the OpenMPI library needed to run it and adds the paths to the MPI commands and GROMACS executables.   To see how the path is set up, try executing &amp;lt;B&amp;gt;&amp;lt;I&amp;gt;which gmx&amp;lt;/I&amp;gt;&amp;lt;/B&amp;gt;.  The module system also allows you to easily switch between different versions of applications, libraries, or languages.&lt;br /&gt;
&lt;br /&gt;
If you are using a custom code or one that is not installed as a module, you'll need to recompile it yourself.  Some of the work simply involves loading the necessary set of modules.  The first step is to decide whether to use the Intel compiler toolchain or the GNU toolchain, each of which includes the compilers and accompanying math libraries.  The module commands for each are below; you can load them automatically when you log in by adding one of these module load statements to your .bashrc file.  See &amp;lt;B&amp;gt;/homes/daveturner/.bashrc&amp;lt;/B&amp;gt; as an example of where to put the module load statements.&lt;br /&gt;
&lt;br /&gt;
To load the Intel compiler toolchain, including the Intel Math Kernel Library (and OpenMPI):&lt;br /&gt;
 icr-helios&amp;gt;  &amp;lt;B&amp;gt;module load iomkl&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load the GNU compiler toolchain, including OpenMPI, OpenBLAS, FFTW, and ScaLAPACK, load foss (free open source software):&lt;br /&gt;
 icr-helios&amp;gt;  &amp;lt;B&amp;gt;module load foss&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modules provide an easy way to set up the compilers and libraries you may need to compile your code.  Beyond that there are many different ways to compile codes so you'll just need to follow the directions.  If you need help you can always contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket].&lt;br /&gt;
&lt;br /&gt;
=== Submitting jobs to Slurm ===&lt;br /&gt;
&lt;br /&gt;
You can submit your job script using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
 icr-helios&amp;gt; &amp;lt;B&amp;gt;sbatch sbatch_script.sh&amp;lt;/B&amp;gt;&lt;br /&gt;
 icr-helios&amp;gt; &amp;lt;B&amp;gt;kstat  --me&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will submit the script and show you a list of your jobs that are running and the jobs you have in the queue.  By default the output for each job will go into a &amp;lt;B&amp;gt;slurm-###.out&amp;lt;/B&amp;gt; file where ### is the job ID number.  If you need to kill a job, you can use the &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; command with the job ID number.&lt;br /&gt;
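When you need the job ID in a script, you can capture it from sbatch's output. The sketch below parses sbatch's standard message from a fixed string so it can be followed off the cluster; on Beocat you would capture the real output (sbatch's --parsable flag also prints the bare ID):&lt;br /&gt;

```shell
# sbatch normally prints: Submitted batch job <id>
# Here we parse that message from a stand-in string.
msg="Submitted batch job 1483446"
jobid=$(echo "$msg" | awk '{print $4}')
echo "$jobid"            # 1483446
# On Beocat:  scancel "$jobid"   # kill the job
```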
&lt;br /&gt;
== Submitting your first job ==&lt;br /&gt;
To submit a job to run under Slurm, we use the &amp;lt;B&amp;gt;&amp;lt;I&amp;gt;sbatch&amp;lt;/I&amp;gt;&amp;lt;/B&amp;gt; (submit batch) command.  The scheduler finds the optimum place for your job to run. With over 300 nodes and 7500 cores to schedule, as well as differing priorities, hardware, and individual resources, the scheduler's job is not trivial and it can take some time for a job to start even when there are empty nodes available.&lt;br /&gt;
&lt;br /&gt;
There are a few things you'll need to know before running sbatch.&lt;br /&gt;
* How many cores you need. Note that unless your program is written to use multiple cores (called &amp;quot;threading&amp;quot;), asking for more cores will not speed up your job. This is a common misconception. '''Beocat will not magically make your program use multiple cores!''' For this reason the default is 1 core.&lt;br /&gt;
* How much time you need. Many users beginning with Beocat neglect to specify a time requirement, and then ask why their job died after one hour (the default). We usually point them to the [[FAQ]].&lt;br /&gt;
* How much memory you need. The default is 1 GB. If your job uses significantly more than you ask for, it will be killed.&lt;br /&gt;
* Any advanced options. See the [[AdvancedSlurm]] page for these requests. For our basic examples here, we will ignore these.&lt;br /&gt;
&lt;br /&gt;
So let's now create a small script to test our ability to submit jobs. Create the following file (either by copying it to Beocat or by editing a text file there), and we'll name it &amp;lt;code&amp;gt;myhost.sh&amp;lt;/code&amp;gt;. Both of these methods are documented on our [[LinuxBasics]] page.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/sh&lt;br /&gt;
hostname&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Be sure to make it executable&lt;br /&gt;
 chmod u+x myhost.sh&lt;br /&gt;
&lt;br /&gt;
So, now let's submit it as a job and see what happens. Here I'm going to use five options:&lt;br /&gt;
* &amp;lt;code&amp;gt;--mem-per-cpu=&amp;lt;/code&amp;gt; tells how much memory I need. In my example, I'm using our system minimum of 512 MB, which is more than enough. Note that your memory request is '''per core''', which doesn't make much difference for this example, but will as you submit more complex jobs.&lt;br /&gt;
* &amp;lt;code&amp;gt;--time=&amp;lt;/code&amp;gt; tells how much runtime I need. This can be in the form of &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. This is a very short job, so 1 minute should be plenty. This can't be changed after the job has started, so please make sure you have requested a sufficient amount of time.&lt;br /&gt;
* &amp;lt;code&amp;gt;--nodes=1&amp;lt;/code&amp;gt; tells Slurm that this must be run on one machine. The [[AdvancedSlurm]] page has much more on the &amp;quot;nodes&amp;quot; switch.&lt;br /&gt;
* &amp;lt;code&amp;gt;--ntasks=1&amp;lt;/code&amp;gt; requests a single task (process) in total.&lt;br /&gt;
* &amp;lt;code&amp;gt;--cpus-per-task=1&amp;lt;/code&amp;gt; requests one core for that task.&lt;br /&gt;
&lt;br /&gt;
 % '''ls'''&lt;br /&gt;
 myhost.sh&lt;br /&gt;
 % '''sbatch --time=1 --mem-per-cpu=512M --cpus-per-task=1 --ntasks=1 --nodes=1 ./myhost.sh'''&lt;br /&gt;
 Submitted batch job 1483446&lt;br /&gt;
&lt;br /&gt;
Since this is such a small job, it is likely to be scheduled almost immediately, so a minute or so later, I now see&lt;br /&gt;
 % '''ls'''&lt;br /&gt;
 myhost.sh&lt;br /&gt;
 slurm-1483446.out&lt;br /&gt;
&lt;br /&gt;
 % '''cat slurm-1483446.out'''&lt;br /&gt;
 mage03&lt;br /&gt;
&lt;br /&gt;
== Monitoring Your Job ==&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;B&amp;gt;kstat&amp;lt;/B&amp;gt; perl script has been developed at K-State to provide you with all the available information about your jobs on Beocat.  &amp;lt;B&amp;gt;kstat --help&amp;lt;/B&amp;gt; will give you a full description of how to use it.&lt;br /&gt;
&lt;br /&gt;
 Eos&amp;gt;  kstat --help&lt;br /&gt;
  &lt;br /&gt;
  USAGE: kstat [-q] [-c] [-g] [-l] [-u user] [-p NaMD] [-j 1234567] [--part partition]&lt;br /&gt;
         kstat alone dumps all info except for the core summaries&lt;br /&gt;
         choose -q -c for only specific info on queued or core summaries.&lt;br /&gt;
         then specify any searchables for the user, program name, or job id&lt;br /&gt;
  &lt;br /&gt;
  kstat                 info on running and queued jobs&lt;br /&gt;
  kstat -h              list host info only, no jobs&lt;br /&gt;
  kstat -q              info on the queued jobs only&lt;br /&gt;
  kstat -c              core usage for each user&lt;br /&gt;
  kstat -d #            show jobs run in the last # days&lt;br /&gt;
                        Memory per node - used/allocated/requested&lt;br /&gt;
                        Red is close to or over requested amount&lt;br /&gt;
                        Yellow is under utilized for large jobs&lt;br /&gt;
  kstat -g              Only show GPU nodes&lt;br /&gt;
  kstat -o Turner       Only show info for a given owner&lt;br /&gt;
  kstat -o CS_HPC          Same but sub _ for spaces&lt;br /&gt;
  kstat -l              long list - node features and performance&lt;br /&gt;
                        Node hardware and node CPU usage&lt;br /&gt;
                        job nodelist and switchlist&lt;br /&gt;
                        job current and max memory&lt;br /&gt;
                        job CPU utilizations&lt;br /&gt;
  kstat -u daveturner   job info for one user only&lt;br /&gt;
  kstat --me            job info for my jobs only&lt;br /&gt;
  kstat -j 1234567      info on a given job id&lt;br /&gt;
  kstat --osg           show OSG background jobs also&lt;br /&gt;
  kstat --nocolor       do not use any color&lt;br /&gt;
  kstat --name          display full names instead of eIDs&lt;br /&gt;
  &lt;br /&gt;
  ---------------- Graphs and Tables ---------------------------------------&lt;br /&gt;
  Specify graph/table,  CPU or GPU or host, usage or memory, and optional time&lt;br /&gt;
  kstat --graph-cpu-memory #      gnuplot CPU memory for job #&lt;br /&gt;
  kstat --table-gpu-usage-5min #  GPU usage table every 5 min for job #&lt;br /&gt;
  kstat --table-cpu-60min #       CPU usage, memory, swap table every 60 min for job #&lt;br /&gt;
  kstat --table-node [nodename]   cores, load, CPU usage, memory table for a node&lt;br /&gt;
  &lt;br /&gt;
  --------------------------------------------------------------------------&lt;br /&gt;
    Multi-node jobs are highlighted in Magenta&lt;br /&gt;
       kstat -l also provides a node list and switch list&lt;br /&gt;
       highlighted in Yellow when nodes are spread across multiple switches&lt;br /&gt;
    Run time is colorized yellow then red for jobs nearing their time limit&lt;br /&gt;
    Queue time is colorized yellow then red for jobs waiting longer times&lt;br /&gt;
  --------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
kstat can be used to give you a summary of your jobs that are running and in the queue:&lt;br /&gt;
 &amp;lt;B&amp;gt;Eos&amp;gt;  kstat --me&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;&lt;br /&gt;
&amp;lt;font color=Brown&amp;gt;Hero43 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=Blue&amp;gt;24 of 24 cores &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt;Load 23.4 / 24 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=Red&amp;gt;495.3 / 512 GB used&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;font color=lightgreen&amp;gt;daveturner &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt;unafold &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1234567 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=cyan&amp;gt;1 core &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=green&amp;gt;running &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 4gb req &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 0 d  5 h 35 m &amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;font color=green&amp;gt;daveturner &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt;octopus &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1234568 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=cyan&amp;gt;16 core &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=green&amp;gt;running &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt; 128gb req &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 8 d 15 h 42 m &amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=green&amp;gt; ##################################   BeoCat Queue    ################################### &amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;font color=green&amp;gt;daveturner &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt;NetPIPE &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1234569 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=cyan&amp;gt;2 core &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt; PD &amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 2h &amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 4gb req &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 0 d 1 h 2 m &amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;kstat&amp;lt;/b&amp;gt; produces a separate line for each host.  Use &amp;lt;b&amp;gt;kstat -h&amp;lt;/b&amp;gt; to see information on all hosts without the jobs.&lt;br /&gt;
For the example above we are listing our jobs and the hosts they are on.&lt;br /&gt;
&lt;br /&gt;
Core usage - yellow for empty, red for empty on owned nodes, cyan for partially used, blue for all cores used.&amp;lt;BR&amp;gt;&lt;br /&gt;
Load level - yellow or yellow background indicates the node is being inefficiently used.  Red just means more threads than cores.&amp;lt;br&amp;gt;&lt;br /&gt;
Memory usage - yellow or red means most memory is used.&amp;lt;BR&amp;gt;&lt;br /&gt;
If the node is owned the group name will be in orange on the right.  Killable jobs can still be run on those nodes.&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each job line will contain the username, program name, job ID, number of cores, the status which may be colored red for killable jobs, &lt;br /&gt;
the maximum memory used or memory requested, and the amount of time the job has run.  &lt;br /&gt;
Jobs in the queue may contain information on the requested memory and run time, priority access, constraints, and&lt;br /&gt;
how long the job has been in the queue.&lt;br /&gt;
In this case, I have 2 jobs running on Hero43.  &amp;lt;i&amp;gt;unafold&amp;lt;/i&amp;gt; is using 1 core while &amp;lt;i&amp;gt;octopus&amp;lt;/i&amp;gt; is using 16 cores.  Slurm did not provide&lt;br /&gt;
any information on the actual memory use, so the memory request is reported instead.&lt;br /&gt;
&lt;br /&gt;
=== Detailed information about a single job ===&lt;br /&gt;
&lt;br /&gt;
kstat can provide a great deal of information on a particular job, including a very rough estimate of when it will start.  This estimate is a worst-case scenario, as it will&lt;br /&gt;
be revised as other jobs finish early.  This is a good way to check for job-submission problems before contacting us.  kstat colorizes the more important&lt;br /&gt;
information to make it easier to identify.&lt;br /&gt;
&lt;br /&gt;
 Eos&amp;gt;  kstat -j 157054&lt;br /&gt;
 &lt;br /&gt;
 ##################################   Beocat Queue    ###################################&lt;br /&gt;
  daveturner  netpipe     157054   64 cores  PD       dwarves fabric  CS HPC     8gb req   0 d  0 h  0 m&lt;br /&gt;
 &lt;br /&gt;
 JobId 157054  Job Name  netpipe&lt;br /&gt;
   UserId=daveturner GroupId=daveturner_users(2117) MCS_label=N/A&lt;br /&gt;
   Priority=11112 Nice=0 Account=ksu-cis-hpc QOS=normal&lt;br /&gt;
   Status=PENDING Reason=Resources Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0&lt;br /&gt;
   RunTime=00:00:00 TimeLimit=00:40:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2018-02-02T18:18:31 EligibleTime=2018-02-02T18:18:31&lt;br /&gt;
   Estimated Start Time is 2018-02-03T06:17:49 EndTime=2018-02-03T06:57:49 Deadline=N/A&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partitions killable.q,ksu-cis-hpc.q AllocNode:Sid=eos:1761&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=(null) SchedNodeList=dwarf[01-02]&lt;br /&gt;
   NumNodes=2-2 NumCPUs=64 NumTasks=64 CPUs/Task=1 ReqB:S:C:T=0:0:*:*&lt;br /&gt;
   TRES 2 nodes 64 cores 8192  mem gres/fabric 2&lt;br /&gt;
   Socks/Node=* NtasksPerN:B:S:C=32:0:*:* CoreSpec=*&lt;br /&gt;
   MinCPUsNode=32 MinMemoryNode=4G MinTmpDiskNode=0&lt;br /&gt;
   Constraint=dwarves DelayBoot=00:00:00&lt;br /&gt;
   Gres=fabric Reservation=(null)&lt;br /&gt;
   OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Slurm script  /homes/daveturner/perf/NetPIPE-5.x/sb.np&lt;br /&gt;
   WorkDir=/homes/daveturner/perf/NetPIPE-5.x&lt;br /&gt;
   StdErr=/homes/daveturner/perf/NetPIPE-5.x/0.o157054&lt;br /&gt;
   StdIn=/dev/null&lt;br /&gt;
   StdOut=/homes/daveturner/perf/NetPIPE-5.x/0.o157054&lt;br /&gt;
   Switches=1@00:05:00&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=netpipe&lt;br /&gt;
#SBATCH -o 0.o%j&lt;br /&gt;
#SBATCH --time=0:40:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --switches=1&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --constraint=dwarves&lt;br /&gt;
#SBATCH --ntasks-per-node=32&lt;br /&gt;
#SBATCH --gres=fabric:roce:1&lt;br /&gt;
&lt;br /&gt;
host=`echo $SLURM_JOB_NODELIST | sed s/[^a-z0-9]/\ /g | cut -f 1 -d ' '`&lt;br /&gt;
nprocs=$SLURM_NTASKS&lt;br /&gt;
openmpi_hostfile.pl $SLURM_JOB_NODELIST 1 hf.$host&lt;br /&gt;
opts=&amp;quot;--printhostnames --quick --pert 3&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;*******************************************************************&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on $SLURM_NNODES nodes $nprocs cores on nodes $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;*******************************************************************&amp;quot;&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 --hostfile hf.$host NPmpi $opts -o np.${host}.mpi&lt;br /&gt;
mpirun -np 2 --hostfile hf.$host NPmpi $opts -o np.${host}.mpi.bi --async --bidir&lt;br /&gt;
mpirun -np $nprocs NPmpi $opts -o np.${host}.mpi$nprocs --async --bidir&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Completed jobs and memory usage ===&lt;br /&gt;
&lt;br /&gt;
 kstat -d #&lt;br /&gt;
&lt;br /&gt;
This will provide information on the jobs you currently have running and those that have completed&lt;br /&gt;
in the last '#' days.  This is currently the only reliable way to get the memory used per node for your job.&lt;br /&gt;
It also reports whether the job completed normally, was canceled with &amp;lt;I&amp;gt;scancel&amp;lt;/I&amp;gt;,&lt;br /&gt;
timed out, or was killed because it exceeded its memory request.&lt;br /&gt;
&lt;br /&gt;
 Eos&amp;gt;  kstat -d 10&lt;br /&gt;
&lt;br /&gt;
 ###########################  sacct -u daveturner  for 10 days  ###########################&lt;br /&gt;
                                      max gb used on a node /   gb requested per node&lt;br /&gt;
  193037   ADF         dwarf43           1 n  32 c   30.46gb/100gb    05:15:34  COMPLETED&lt;br /&gt;
  193289   ADF         dwarf33           1 n  32 c   26.42gb/100gb    00:50:43  CANCELLED&lt;br /&gt;
  195171   ADF         dwarf44           1 n  32 c   56.81gb/120gb    14:43:35  COMPLETED&lt;br /&gt;
  209518   matlab      dwarf36           1 n   1 c    0.00gb/  4gb    00:00:02  FAILED&lt;br /&gt;
&lt;br /&gt;
=== Summary of core usage ===&lt;br /&gt;
&lt;br /&gt;
kstat can also provide a listing of the core usage and cores requested for each user.&lt;br /&gt;
 Eos&amp;gt;  kstat -c&lt;br /&gt;
 &lt;br /&gt;
 ##############################   Core usage    ###############################&lt;br /&gt;
   antariksh       1512 cores   %25.1 used     41528 cores queued&lt;br /&gt;
   bahadori         432 cores   % 7.2 used        80 cores queued&lt;br /&gt;
   eegoetz            0 cores   % 0.0 used         2 cores queued&lt;br /&gt;
   fahrialkan        24 cores   % 0.4 used        32 cores queued&lt;br /&gt;
   gowri             66 cores   % 1.1 used        32 cores queued&lt;br /&gt;
   jeffcomer        160 cores   % 2.7 used         0 cores queued&lt;br /&gt;
   ldcoates12        80 cores   % 1.3 used       112 cores queued&lt;br /&gt;
   lukesteg         464 cores   % 7.7 used         0 cores queued&lt;br /&gt;
   mike5454        1060 cores   %17.6 used       852 cores queued&lt;br /&gt;
   nilusha          344 cores   % 5.7 used         0 cores queued&lt;br /&gt;
   nnshan2014       136 cores   % 2.3 used         0 cores queued&lt;br /&gt;
   ploetz           264 cores   % 4.4 used        60 cores queued&lt;br /&gt;
   sadish           812 cores   %13.5 used         0 cores queued&lt;br /&gt;
   sandung           72 cores   % 1.2 used        56 cores queued&lt;br /&gt;
   zhiguang          80 cores   % 1.3 used       688 cores queued&lt;br /&gt;
&lt;br /&gt;
=== Producing memory and CPU utilization tables and graphs ===&lt;br /&gt;
&lt;br /&gt;
kstat can now produce tables or graphs for the memory or CPU utilization&lt;br /&gt;
for a job.  In order to view graphs you must set up X11 forwarding on your&lt;br /&gt;
ssh connection by using the -X parameter.&lt;br /&gt;
&lt;br /&gt;
If you want to read more, continue on to our [[AdvancedSlurm]] page.&lt;br /&gt;
&lt;br /&gt;
=== kstat is now available to download and install on other clusters ===&lt;br /&gt;
&lt;br /&gt;
https://gitlab.beocat.ksu.edu/Admin-Public/kstat&lt;br /&gt;
&lt;br /&gt;
This software has been installed and used on several clusters for many years.&lt;br /&gt;
It should be considered Beta software and may take some slight modifications&lt;br /&gt;
to install on some clusters.  Please contact the author if you want to give&lt;br /&gt;
it a try (daveturner@ksu.edu).&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=1139</id>
		<title>AdvancedSlurm</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=1139"/>
		<updated>2025-06-26T13:47:39Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Manually configuring your ACLs */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SlurmBasics]] page, we have a couple other requestable resources:&lt;br /&gt;
 Valid gres options are:&lt;br /&gt;
 gpu[[:type]:count]&lt;br /&gt;
 fabric[[:type]:count]&lt;br /&gt;
Generally, if you don't know if you need a particular resource, you should use the default. These can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;srun --gres=help&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Fabric ===&lt;br /&gt;
We currently offer 3 &amp;quot;fabrics&amp;quot; as requestable resources in Slurm. The &amp;quot;count&amp;quot; specified is the line-rate (in Gigabits-per-second) of the connection on the node.&lt;br /&gt;
==== Infiniband ====&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. InfiniBand does absolutely no good when running on a single machine. InfiniBand is a high-speed host-to-host communication fabric, most often used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could have run just fine, except that the submitter requested InfiniBand and all the nodes with InfiniBand were busy. In fact, some of our fastest nodes do not have InfiniBand, so by requesting it when you don't need it, you are actually slowing down your job. To request InfiniBand, add &amp;lt;tt&amp;gt;--gres=fabric:ib:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
==== ROCE ====&lt;br /&gt;
ROCE, like InfiniBand, is a high-speed host-to-host communication layer, again most often used with MPI. Most of our nodes are ROCE-enabled, but requesting it explicitly guarantees that the nodes allocated to your job will be able to communicate with ROCE. To request ROCE, add &amp;lt;tt&amp;gt;--gres=fabric:roce:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
&lt;br /&gt;
==== Ethernet ====&lt;br /&gt;
Ethernet is another communication fabric. All of our nodes are connected by ethernet; this option simply lets you specify the interconnect speed. Speeds are selected in units of Gbps, with all nodes supporting 1Gbps or above. The currently available speeds for ethernet are: &amp;lt;tt&amp;gt;1, 10, 40, and 100&amp;lt;/tt&amp;gt;. To select nodes with 40Gbps and above, you could specify &amp;lt;tt&amp;gt;--gres=fabric:eth:40&amp;lt;/tt&amp;gt; on your sbatch command-line.  Since ethernet is also used to connect to the file server, this option can select nodes with fast file access for applications doing heavy IO.  The Dwarves and Heroes have 40 Gbps ethernet, and we measure single-stream performance as high as 20 Gbps; if your application&lt;br /&gt;
requires heavy IO, you will want to avoid the Moles, which are connected to the file server with only 1 Gbps ethernet.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. 'kstat -g' will show you the GPU nodes and the jobs running on them.  To request a GPU, add &amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt;; if your job uses multiple nodes, the number of GPUs requested is per-node.  You can also request a given type of GPU (use 'kstat -g -l' to show the types): &amp;lt;tt&amp;gt;--gres=gpu:geforce_gtx_1080_ti:1&amp;lt;/tt&amp;gt; requests a 1080Ti GPU on the Wizards or Dwarves, while &amp;lt;tt&amp;gt;--gres=gpu:quadro_gp100:1&amp;lt;/tt&amp;gt; requests one of the P100 GPUs on Wizard20-21, which are best for 64-bit codes like Vasp.  Most of these GPU nodes are owned by various groups.  If you want access to GPU nodes and your group does not own any, we can add you to the &amp;lt;tt&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/tt&amp;gt; group that has priority on Dwarf36-39.  For more information on compiling CUDA code click on this [[CUDA]] link.&lt;br /&gt;
&lt;br /&gt;
A listing of the current types of gpus can be gathered with this command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
scontrol show nodes | grep CfgTRES | tr ',' '\n' | awk -F '[:=]' '/gres\/gpu:/ { print $2 }' | sort -u&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
At the time of this writing, that command produces this list:&lt;br /&gt;
* geforce_gtx_1080_ti&lt;br /&gt;
* geforce_rtx_2080_ti&lt;br /&gt;
* geforce_rtx_3090&lt;br /&gt;
* l40s&lt;br /&gt;
* quadro_gp100&lt;br /&gt;
* rtx_a4000&lt;br /&gt;
* rtx_a6000&lt;br /&gt;
&lt;br /&gt;
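Putting the directives together, a minimal job script requesting one GPU might look like the sketch below. The GPU type, core count, and run time here are only examples; adjust them to your job.&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --gres=gpu:geforce_rtx_3090:1   # one RTX 3090 GPU per node
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=4G
#SBATCH --time=2:00:00

# Slurm sets CUDA_VISIBLE_DEVICES to the GPU(s) allocated to this job
echo "Allocated GPUs: ${CUDA_VISIBLE_DEVICES:-none}"
```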
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misperception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
''Intra''node jobs run on many cores in the same node. These jobs can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or any programming language that has the concept of ''threads''. Often, your program will need to know how many cores you want it to use, and many will use all available cores if not told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the sbatch directives '&amp;lt;tt&amp;gt;--nodes=1 --cpus-per-task=n&amp;lt;/tt&amp;gt;' or '&amp;lt;tt&amp;gt;--nodes=1 --ntasks-per-node=n&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $SLURM_CPUS_ON_NODE to tell how many cores you've been allocated.&lt;br /&gt;
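For example, with OpenMP programs you can pass the allocated core count to the runtime through an environment variable in your job script (a sketch; the program name is a placeholder):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --nodes=1 --cpus-per-task=8

# Tell OpenMP to use exactly the cores Slurm allocated (default to 1 outside a job)
export OMP_NUM_THREADS=${SLURM_CPUS_ON_NODE:-1}
echo "Running with $OMP_NUM_THREADS thread(s)"
# ./my_threaded_program   (placeholder for your application)
```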
&lt;br /&gt;
=== Internode (MPI) jobs ===&lt;br /&gt;
''Inter''node jobs can utilize many cores on one or more nodes. Communicating between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI, but MPI also allows an application to run on multiple cores within a node. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;--cpus-per-task=''n''&amp;lt;/tt&amp;gt;', you would use '&amp;lt;tt&amp;gt;--nodes=''n'' --tasks-per-node=''m''&amp;lt;/tt&amp;gt;' ''or'' '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks=''o''&amp;lt;/tt&amp;gt;' for your sbatch request, where ''n'' is the number of nodes you want, ''m'' is the number of cores per node you need, and ''o'' is the total number of cores you need.&lt;br /&gt;
&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=6 --ntasks-per-node=4&amp;lt;/tt&amp;gt; will give you 4 cores on each of 6 nodes for a total of 24 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=40&amp;lt;/tt&amp;gt; will give you 40 cores spread across any number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=10 --ntasks=100&amp;lt;/tt&amp;gt; will give you a total of 100 cores across 10 nodes.&lt;br /&gt;
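As a sketch, a complete MPI submit script combining these requests might look like the following (the program path is a placeholder):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --nodes=2 --ntasks-per-node=4   # 8 MPI ranks total
#SBATCH --mem-per-cpu=1G
#SBATCH --time=1:00:00

# OpenMPI's mpirun reads the allocation from Slurm, so no -np or hostfile is needed
mpirun $HOME/path/MyMpiProgram
```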
&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
Memory requests are easiest when they are specified '''per core'''. For instance, if you specified the following: '&amp;lt;tt&amp;gt;--tasks=20 --mem-per-cpu=20G&amp;lt;/tt&amp;gt;', your job would have access to 400GB of memory total.&lt;br /&gt;
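The arithmetic is simply the number of tasks times the memory per core; a quick shell check of the request above:&lt;br /&gt;

```shell
#!/bin/bash
# Total job memory = number of tasks x memory per CPU
tasks=20
mem_per_cpu_gb=20
total_gb=$(( tasks * mem_per_cpu_gb ))
echo "Total job memory: ${total_gb}GB"
```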
== Other Handy Slurm Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs, aside from resource requests, is to have Slurm email you when a job changes its status. This may require two directives to sbatch:  &amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt;.&lt;br /&gt;
==== --mail-type ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; is used to tell Slurm to notify you about certain conditions. Options are comma separated and include the following&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Option!!Explanation&lt;br /&gt;
|-&lt;br /&gt;
| NONE || This disables event-based mail&lt;br /&gt;
|-&lt;br /&gt;
| BEGIN || Sends a notification when the job begins&lt;br /&gt;
|-&lt;br /&gt;
| END || Sends a notification when the job ends&lt;br /&gt;
|-&lt;br /&gt;
| FAIL || Sends a notification when the job fails.&lt;br /&gt;
|-&lt;br /&gt;
| REQUEUE || Sends a notification if the job is put back into the queue from a running state&lt;br /&gt;
|-&lt;br /&gt;
| STAGE_OUT || Burst buffer stage out and teardown completed&lt;br /&gt;
|-&lt;br /&gt;
| ALL || Equivalent to BEGIN,END,FAIL,REQUEUE,STAGE_OUT&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT || Notifies if the job ran out of time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_90 || Notifies when the job has used 90% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_80 || Notifies when the job has used 80% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_50 || Notifies when the job has used 50% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| ARRAY_TASKS || Modifies the BEGIN, END, and FAIL options to apply to each array task (instead of notifying for the entire job)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== --mail-user ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; is optional. It is only needed if you intend to send these job status updates to a different e-mail address than what you provided in the [https://acount.beocat.ksu.edu/user Account Request Page]. It is specified with the following arguments to sbatch: &amp;lt;tt&amp;gt;--mail-user=someone@somecompany.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-J ''JobName''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
&lt;br /&gt;
=== Separating Output Streams ===&lt;br /&gt;
Normally, Slurm will create one output file, containing both STDERR and STDOUT. If you want both of these to be separated into two files, you can use the sbatch directives '&amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt;' and '&amp;lt;tt&amp;gt;--error&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! option !! default !! example&lt;br /&gt;
|-&lt;br /&gt;
| --output || slurm-%j.out || slurm-206.out&lt;br /&gt;
|-&lt;br /&gt;
| --error || slurm-%j.out || slurm-206.out&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;tt&amp;gt;%j&amp;lt;/tt&amp;gt; above indicates that it should be replaced with the job id.&lt;br /&gt;
&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, Slurm runs your job from the &amp;quot;current working directory&amp;quot; you were in when submitting it, which is what most programs expect. If you need the job to start somewhere else, use the '&amp;lt;tt&amp;gt;--chdir=''directory''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
=== Running in a specific class of machine ===&lt;br /&gt;
If you want to run on a specific class of machines, e.g., the Dwarves, you can add the flag &amp;quot;--constraint=dwarves&amp;quot; to select any of those machines.&lt;br /&gt;
&lt;br /&gt;
=== Processor Constraints ===&lt;br /&gt;
Because Beocat is a heterogenous cluster (we have machines from many years in the cluster), not all of our processors support every new and fancy feature. You might have some applications that require some newer processor features, so we provide a mechanism to request those.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--constraint&amp;lt;/tt&amp;gt; tells the cluster to apply constraints to the types of nodes that the job can run on. For instance, we know of several applications that must be run on chips that have &amp;quot;AVX&amp;quot; processor extensions. To do that, you would specify &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; on your ''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt;'' '''or''' ''&amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;'' command lines.&lt;br /&gt;
Using &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; will prohibit your job from running on the Mages, while &amp;lt;tt&amp;gt;--constraint=avx2&amp;lt;/tt&amp;gt; will eliminate the Elves as well as the Mages.&lt;br /&gt;
&lt;br /&gt;
=== Slurm Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to setup your scripts correctly. Here is a listing of environment variables that Slurm makes available to you. Of course the value of these variables will be different based on many different factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
CUDA_VISIBLE_DEVICES=NoDevFiles&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
GPU_DEVICE_ORDINAL=NoDevFiles&lt;br /&gt;
HOSTNAME=dwarf37&lt;br /&gt;
SLURM_CHECKPOINT_IMAGE_DIR=/var/slurm/checkpoint&lt;br /&gt;
SLURM_CLUSTER_NAME=beocat&lt;br /&gt;
SLURM_CPUS_ON_NODE=1&lt;br /&gt;
SLURM_DISTRIBUTION=cyclic&lt;br /&gt;
SLURMD_NODENAME=dwarf37&lt;br /&gt;
SLURM_GTIDS=0&lt;br /&gt;
SLURM_JOB_CPUS_PER_NODE=1&lt;br /&gt;
SLURM_JOB_GID=163587&lt;br /&gt;
SLURM_JOB_ID=202&lt;br /&gt;
SLURM_JOBID=202&lt;br /&gt;
SLURM_JOB_NAME=slurm_simple.sh&lt;br /&gt;
SLURM_JOB_NODELIST=dwarf37&lt;br /&gt;
SLURM_JOB_NUM_NODES=1&lt;br /&gt;
SLURM_JOB_PARTITION=batch.q,killable.q&lt;br /&gt;
SLURM_JOB_QOS=normal&lt;br /&gt;
SLURM_JOB_UID=163587&lt;br /&gt;
SLURM_JOB_USER=mozes&lt;br /&gt;
SLURM_LAUNCH_NODE_IPADDR=10.5.16.37&lt;br /&gt;
SLURM_LOCALID=0&lt;br /&gt;
SLURM_MEM_PER_NODE=1024&lt;br /&gt;
SLURM_NNODES=1&lt;br /&gt;
SLURM_NODEID=0&lt;br /&gt;
SLURM_NODELIST=dwarf37&lt;br /&gt;
SLURM_NPROCS=1&lt;br /&gt;
SLURM_NTASKS=1&lt;br /&gt;
SLURM_PRIO_PROCESS=0&lt;br /&gt;
SLURM_PROCID=0&lt;br /&gt;
SLURM_SRUN_COMM_HOST=10.5.16.37&lt;br /&gt;
SLURM_SRUN_COMM_PORT=37975&lt;br /&gt;
SLURM_STEP_ID=0&lt;br /&gt;
SLURM_STEPID=0&lt;br /&gt;
SLURM_STEP_LAUNCHER_PORT=37975&lt;br /&gt;
SLURM_STEP_NODELIST=dwarf37&lt;br /&gt;
SLURM_STEP_NUM_NODES=1&lt;br /&gt;
SLURM_STEP_NUM_TASKS=1&lt;br /&gt;
SLURM_STEP_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_SUBMIT_DIR=/homes/mozes&lt;br /&gt;
SLURM_SUBMIT_HOST=dwarf37&lt;br /&gt;
SLURM_TASK_PID=23408&lt;br /&gt;
SLURM_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_TOPOLOGY_ADDR=due1121-prod-core-40g-a1,due1121-prod-core-40g-c1.due1121-prod-sw-100g-a9.dwarf37&lt;br /&gt;
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node&lt;br /&gt;
SLURM_UMASK=0022&lt;br /&gt;
SRUN_DEBUG=3&lt;br /&gt;
TERM=screen-256color&lt;br /&gt;
TMPDIR=/tmp&lt;br /&gt;
USER=mozes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is nice to know which hosts you have access to during a job; check SLURM_JOB_NODELIST for that. There are many useful environment variables here; I will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly-used variables we see used are $SLURM_CPUS_ON_NODE, $HOSTNAME, and $SLURM_JOB_ID.&lt;br /&gt;
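For instance, a job script can log basic information about where it is running; the fallbacks after ':-' are only so the snippet also runs outside of a job:&lt;br /&gt;

```shell
#!/bin/bash
# Record basic job information at the top of the output file
echo "Job ID:    ${SLURM_JOB_ID:-none}"
echo "Node list: ${SLURM_JOB_NODELIST:-$(hostname)}"
echo "Cores:     ${SLURM_CPUS_ON_NODE:-1}"
```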
&lt;br /&gt;
== Running from a sbatch Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'sbatch --mem-per-cpu=2G --time=10:00 --cpus-per-task=8 -J MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which outlines all of these for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample sbatch script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of the line is ignored. However, in&lt;br /&gt;
## the case of sbatch, lines beginning with #SBATCH are commands for sbatch&lt;br /&gt;
## itself, so I have taken the convention here of starting *every* line with a&lt;br /&gt;
## '#'. Just delete the first one if you want to use that line, and then modify&lt;br /&gt;
## it to your own purposes. The only exception here is the first line, which&lt;br /&gt;
## *must* be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## There is one strict rule for guaranteeing Slurm reads all of your options:&lt;br /&gt;
## Do not put *any* lines above your resource requests that aren't either:&lt;br /&gt;
##    1) blank. (no other characters)&lt;br /&gt;
##    2) comments (lines must begin with '#')&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##SBATCH --mem-per-cpu=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime in DD-HH:MM:SS form. Default is 1 hour (1:00:00)&lt;br /&gt;
##SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it.&lt;br /&gt;
##SBATCH --gres=fabric:ib:1&lt;br /&gt;
&lt;br /&gt;
## GPU directive. If you don't know what this is, you probably don't need it&lt;br /&gt;
##SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
## number of cores/nodes:&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from contacting Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket]&lt;br /&gt;
## to see how we can assist in getting your &lt;br /&gt;
## job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
##SBATCH --cpus-per-task=1&lt;br /&gt;
##SBATCH --cpus-per-task=12&lt;br /&gt;
##SBATCH --nodes=2 --tasks-per-node=1&lt;br /&gt;
##SBATCH --tasks=20&lt;br /&gt;
&lt;br /&gt;
## Constraints for this job. Maybe you need to run on the elves&lt;br /&gt;
##SBATCH --constraint=elves&lt;br /&gt;
## or perhaps you just need avx processor extensions&lt;br /&gt;
##SBATCH --constraint=avx&lt;br /&gt;
&lt;br /&gt;
## Output file name. Default is slurm-%j.out where %j is the job id.&lt;br /&gt;
##SBATCH --output=MyJobTitle.o%j&lt;br /&gt;
&lt;br /&gt;
## Split the errors into a separate file. Default is the same as output&lt;br /&gt;
##SBATCH --error=MyJobTitle.e%j&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##SBATCH -J MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## Send email when certain criteria are met.&lt;br /&gt;
## Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to&lt;br /&gt;
## BEGIN, END, FAIL, REQUEUE,  and  STAGE_OUT),  STAGE_OUT  (burst buffer stage&lt;br /&gt;
## out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent&lt;br /&gt;
## of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),&lt;br /&gt;
## TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send&lt;br /&gt;
## emails for each array task). Multiple type values may be specified in a&lt;br /&gt;
## comma separated list. Unless the  ARRAY_TASKS  option  is specified, mail&lt;br /&gt;
## notifications on job BEGIN, END and FAIL apply to a job array as a whole&lt;br /&gt;
## rather than generating individual email messages for each task in the job&lt;br /&gt;
## array.&lt;br /&gt;
##SBATCH --mail-type=ALL&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
## Default is to send the mail to the e-mail address entered on the account&lt;br /&gt;
## request form.&lt;br /&gt;
##SBATCH --mail-user myemail@ksu.edu&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== File Access ==&lt;br /&gt;
Beocat has a variety of options for storing and accessing your files.  &lt;br /&gt;
Every user has a home directory for general use, which is limited in size but has decent file access performance.  Those needing more storage may purchase /bulk subdirectories, which have the same decent performance&lt;br /&gt;
but are not backed up. The /fastscratch file system is hosted on a ZFS server with many NVMe drives, providing much faster&lt;br /&gt;
temporary file access.  When fast IO is critical to application performance, /fastscratch, the local disk on each node, or a&lt;br /&gt;
RAM disk are the best options.&lt;br /&gt;
&lt;br /&gt;
===Home directory===&lt;br /&gt;
&lt;br /&gt;
Every user has a &amp;lt;tt&amp;gt;/homes/''username''&amp;lt;/tt&amp;gt; directory that they drop into when they log into Beocat.  &lt;br /&gt;
The home directory is for general use and provides decent performance for most file IO.  &lt;br /&gt;
Disk space in each home directory is limited to 1 TB, so larger files should be kept in a purchased /bulk&lt;br /&gt;
directory, and there is a limit of 100,000 files in each subdirectory in your account.&lt;br /&gt;
This file system is fully redundant, so 3 specific hard disks would need to fail before any data was lost.&lt;br /&gt;
All files will soon be backed up nightly to a separate file server in Nichols Hall, so if you do accidentally &lt;br /&gt;
delete something it can be recovered.&lt;br /&gt;
&lt;br /&gt;
===Bulk directory===&lt;br /&gt;
&lt;br /&gt;
Bulk data storage may be provided at a cost of $45/TB/year billed monthly. Due to the cost, directories will be provided when we are contacted and provided with payment information.&lt;br /&gt;
&lt;br /&gt;
===Fast Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /fastscratch file system is faster than /bulk or /homes.&lt;br /&gt;
In order to use fastscratch, you first need to make a directory for yourself.  &lt;br /&gt;
Fast Scratch is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the fastscratch file system may get purged after 30 days.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /fastscratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Local disk===&lt;br /&gt;
&lt;br /&gt;
If you are running on a single node, it may also be faster to access your files from the local disk&lt;br /&gt;
on that node.  Each job creates a subdirectory /tmp/job# where '#' is the job ID number on the&lt;br /&gt;
local disk of each node the job uses.  This can be accessed simply by writing to /tmp rather than&lt;br /&gt;
needing to use /tmp/job#.  &lt;br /&gt;
&lt;br /&gt;
You may need to copy files to&lt;br /&gt;
local disk at the start of your script, or set your application's output directory to point&lt;br /&gt;
to the local disk. You will then need to copy any files you want to keep off the local disk before&lt;br /&gt;
the job finishes, since Slurm removes all files in your job's directory on /tmp when the job completes&lt;br /&gt;
or aborts.  Use 'kstat -l -h' to see how much /tmp space is available on each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files to the tmp directory if needed&lt;br /&gt;
cp $input_files /tmp&lt;br /&gt;
&lt;br /&gt;
# Make an 'out' directory to pass to the app if needed&lt;br /&gt;
mkdir /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Example of running an app and passing the tmp directory in/out&lt;br /&gt;
app -input_directory /tmp -output_directory /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Copy the 'out' directory back to the current working directory after the run&lt;br /&gt;
cp -rp /tmp/out .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAM disk===&lt;br /&gt;
&lt;br /&gt;
If you need ultrafast access to files, you can use a RAM disk, which is a file system set up in the &lt;br /&gt;
memory of the compute node you are running on.  The RAM disk counts against the memory you requested on that node, so you should account for this usage when you request &lt;br /&gt;
memory for your job. Below is an example of how to use the RAM disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files over if necessary&lt;br /&gt;
cp $any_input_files /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Run the application, possibly giving it the path to the RAM disk to use for output files&lt;br /&gt;
app -output_directory /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Copy files from the RAM disk back to the current working directory&lt;br /&gt;
cp /dev/shm/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When you leave KSU===&lt;br /&gt;
&lt;br /&gt;
If you are done with your account and leaving KSU, please clean up your directory, move any files&lt;br /&gt;
to your supervisor's account that need to be kept after you leave, and notify us so that we can disable your&lt;br /&gt;
account.  The easiest way to move your files to your supervisor's account is for them to set up&lt;br /&gt;
a subdirectory for you with the appropriate write permissions.  The example below shows moving &lt;br /&gt;
just a user's 'data' subdirectory to their supervisor.  The 'nohup' command is used so that the move will &lt;br /&gt;
continue even if the window you are doing the move from gets disconnected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Supervisor:&lt;br /&gt;
mkdir /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&lt;br /&gt;
# Student:&lt;br /&gt;
nohup mv /homes/$USER/data /bulk/$SUPERVISOR_USERNAME/$USER &amp;amp;&lt;br /&gt;
&lt;br /&gt;
# Once the move is complete, the Supervisor should limit the permissions for the directory again by removing the student's access:&lt;br /&gt;
chown $USER: -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==File Sharing==&lt;br /&gt;
&lt;br /&gt;
This section covers methods of sharing files with other users within Beocat and on remote systems.&lt;br /&gt;
In the past, Beocat users were allowed to keep their&lt;br /&gt;
/homes and /bulk directories open so that any other user could&lt;br /&gt;
access their files.  In order to bring Beocat into alignment with&lt;br /&gt;
State of Kansas regulations and industry norms, all users must now have their /homes, /bulk, /scratch, and /fastscratch directories&lt;br /&gt;
locked down from other users. You can still share files and directories within your group or with individual users&lt;br /&gt;
using group and individual ACLs (Access Control Lists), which are explained below.&lt;br /&gt;
Beocat staff are exempted from this&lt;br /&gt;
policy, as we need to work freely with all users, and will manage our&lt;br /&gt;
subdirectories to minimize access.&lt;br /&gt;
&lt;br /&gt;
===Securing your home directory with the setacls script===&lt;br /&gt;
&lt;br /&gt;
If you do not wish to share files or directories with other users, you do not need to do anything&lt;br /&gt;
as rwx access to others has already been removed.&lt;br /&gt;
If you want to share files or directories you can either use the '''setacls''' script or configure&lt;br /&gt;
the ACLs (Access Control Lists) manually.&lt;br /&gt;
&lt;br /&gt;
Running '''setacls -h''' will show how to use the script.&lt;br /&gt;
  &lt;br /&gt;
  Eos: setacls -h&lt;br /&gt;
  setacls [-r] [-w] [-g group] [-u user] -d /full/path/to/directory&lt;br /&gt;
  Execute pemission will always be applied, you may also choose r or w&lt;br /&gt;
  Must specify at least one group or user&lt;br /&gt;
  Must specify at least one directory, and it must be the full path&lt;br /&gt;
  Example: setacls -r -g ksu-cis-hpc -u mozes -d /homes/daveturner/shared_dir&lt;br /&gt;
&lt;br /&gt;
You can specify the permissions to be -r for read, -w for write, or both.&lt;br /&gt;
You can provide a priority group to share with, which is the same as the group used in a --partition=&lt;br /&gt;
statement in a job submission script.  You can also specify individual users.&lt;br /&gt;
You can specify a file or a directory to share.  If a directory is specified, all files in that&lt;br /&gt;
directory will also be shared, and all files created in the directory later will also be shared.&lt;br /&gt;
&lt;br /&gt;
The script will set everything up for you, telling you the commands it is executing along the way,&lt;br /&gt;
then show the resulting ACLs at the end with the '''getfacl''' command.  Below is an example of &lt;br /&gt;
sharing the directory '''test_directory''' in my /bulk/daveturner directory with Nathan.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
Beocat&amp;gt;  cd /bulk/daveturner&lt;br /&gt;
Beocat&amp;gt;  mkdir test_directory&lt;br /&gt;
Beocat&amp;gt;  setacls -r -w -u nathanrwells -d /bulk/daveturner/test_directory&lt;br /&gt;
&lt;br /&gt;
Opening up base directory /bulk/daveturner with X execute permission only&lt;br /&gt;
  setfacl -m u:nathanrwells:X /bulk/daveturner&lt;br /&gt;
&lt;br /&gt;
Setting Xrw for directory/file /bulk/daveturner/test_directory&lt;br /&gt;
  setfacl -m u:nathanrwells:Xrw -R /bulk/daveturner/test_directory&lt;br /&gt;
  setfacl -d -m u:nathanrwells:Xrw -R /bulk/daveturner/test_directory&lt;br /&gt;
&lt;br /&gt;
The ACLs on directory /bulk/daveturner/test_directory are set to:&lt;br /&gt;
&lt;br /&gt;
getfacl: Removing leading '/' from absolute path names&lt;br /&gt;
# file: bulk/daveturner/test_directory&lt;br /&gt;
USER   daveturner        rwx  rwx&lt;br /&gt;
user   nathanrwells      rwx  rwx&lt;br /&gt;
GROUP  daveturner_users  r-x  r-x&lt;br /&gt;
group  beocat_support    r-x  r-x&lt;br /&gt;
mask                     rwx  rwx&lt;br /&gt;
other                    ---  ---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The '''getfacl''' run by the script now shows that user '''nathanrwells''' has &lt;br /&gt;
read and write permissions to that directory and execute access to all directories&lt;br /&gt;
leading up to it.&lt;br /&gt;
&lt;br /&gt;
====Manually configuring your ACLs====&lt;br /&gt;
&lt;br /&gt;
If you want to configure the ACLs manually, you can use the directions below to do what the '''setacls''' &lt;br /&gt;
script would do for you.&lt;br /&gt;
You first need to provide the minimum execute access to your /homes&lt;br /&gt;
or /bulk directory before sharing individual subdirectories.  Setting the ACL to execute only will allow those &lt;br /&gt;
in your group to reach subdirectories, while withholding read access means they will not be able to list&lt;br /&gt;
the other files or subdirectories in your top-level directory.  Keep in mind that they can still access those files&lt;br /&gt;
if they know the exact names, so you may want to lock them down explicitly as well.  Below is an example of how I would change my&lt;br /&gt;
/homes/daveturner directory to allow the ksu-cis-hpc group execute access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:X /homes/daveturner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your research group owns any nodes on Beocat, then you have a group name that can be used to securely share&lt;br /&gt;
files with others within your group.  Below is an example of creating a directory called 'share_hpc', &lt;br /&gt;
then providing access to my ksu-cis-hpc group&lt;br /&gt;
(my group is ksu-cis-hpc, so I submit jobs to --partition=ksu-cis-hpc.q).&lt;br /&gt;
Using -R will apply these changes recursively to all files and directories in that subdirectory, while changing&lt;br /&gt;
the defaults with the setfacl -d command will ensure that files and directories created later get these same ACLs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the 'share_hpc' directory.  If you also want&lt;br /&gt;
them to be able to write or modify files in that directory, change the ':rX' to ':rwX' instead, e.g. 'setfacl -d -m g:ksu-cis-hpc:rwX -R share_hpc'.&lt;br /&gt;
&lt;br /&gt;
If you want to know what groups you belong to, use the command below.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
groups&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If your group does not own any nodes, you can still request a group name and manage the participants yourself&lt;br /&gt;
by contacting Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] or by emailing us at beocat@cs.ksu.edu.&lt;br /&gt;
If you want to share a directory with only a few people you can manage your ACLs using individual usernames&lt;br /&gt;
instead of with a group.&lt;br /&gt;
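&lt;br /&gt;
If you are sharing with more than one or two individuals, you can generate the needed setfacl commands with a short loop. This is a sketch with hypothetical usernames and a hypothetical directory; it only prints the commands so you can review them before running them by hand (or drop the echo-style collection and run them directly):&lt;br /&gt;
&lt;br /&gt;
```bash
# Sketch (hypothetical usernames and path): build the setfacl commands for
# sharing one directory with a few individual users rather than a whole group.
users="mozes nathanrwells"        # who to share with (hypothetical)
dir=/homes/daveturner/share_few   # directory to share (hypothetical)

cmds=""
for u in $users; do
    # -d sets the default ACL so files created later inherit it;
    # the second call sets the ACL on the directory and existing files (-R = recursive)
    cmds="$cmds
setfacl -d -m u:$u:rX -R $dir
setfacl -m u:$u:rX -R $dir"
done
echo "$cmds"
```
&lt;br /&gt;
Remember that each user also needs execute access to your top-level directory, as described earlier.&lt;br /&gt;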
&lt;br /&gt;
You can use the '''getfacl''' command to see which users and groups have access to a given directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
getfacl share_hpc&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ACLs give you great flexibility in controlling file access at the&lt;br /&gt;
group level.  Below is a more advanced example where I set up a directory to be shared with&lt;br /&gt;
my ksu-cis-hpc group, Dan's ksu-cis-dan group, and an individual user 'mozes' who I also want&lt;br /&gt;
to have write access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc_dan_mozes&lt;br /&gt;
# acls are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
getfacl share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc_dan_mozes&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  user:mozes:rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  group:ksu-cis-dan:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:user:mozes:rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:group:ksu-cis-dan:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::--x&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Openly sharing files on the web===&lt;br /&gt;
&lt;br /&gt;
If you create a 'public_html' directory in your home directory, then any files put there will be shared &lt;br /&gt;
openly on the web.  There is no way to restrict who has access to those files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
cd&lt;br /&gt;
mkdir public_html&lt;br /&gt;
# Opt-in to letting the webserver access your home directory:&lt;br /&gt;
setfacl -m g:public_html:x ~/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then access the data from a web browser using the URL:&lt;br /&gt;
&lt;br /&gt;
http://people.beocat.ksu.edu/~your_user_name&lt;br /&gt;
&lt;br /&gt;
This will show a list of the files you have in your public_html subdirectory.&lt;br /&gt;
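&lt;br /&gt;
The URL follows a fixed pattern based on your username; a tiny sketch (hypothetical username shown):&lt;br /&gt;
&lt;br /&gt;
```bash
# Build the public URL for a given Beocat username (hypothetical name).
user=daveturner
url="http://people.beocat.ksu.edu/~${user}"
echo "$url"   # http://people.beocat.ksu.edu/~daveturner
```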
&lt;br /&gt;
===Globus===&lt;br /&gt;
&lt;br /&gt;
We have a page dedicated to [[Globus]].&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of Slurm's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to sbatch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  --array=n[-m[:s]]&lt;br /&gt;
     Submits a so called Array Job, i.e. an array of identical tasks being differentiated only by an index number and being treated by Slurm&lt;br /&gt;
     almost like a series of jobs. The option argument to --array specifies the number of array job tasks and the index number which will be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SLURM_ARRAY_TASK_ID. The option&lt;br /&gt;
     arguments n, and m will be available through the environment variables SLURM_ARRAY_TASK_MIN and SLURM_ARRAY_TASK_MAX.&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or a range with a step size.&lt;br /&gt;
     Hence, the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SLURM_ARRAY_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array jobs are commonly used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks follow a slightly different naming convention (which can be controlled in the same way as mentioned above).&lt;br /&gt;
 &lt;br /&gt;
     slurm-%A_%a.out&lt;br /&gt;
&lt;br /&gt;
     %A is the SLURM_ARRAY_JOB_ID, and %a is the SLURM_ARRAY_TASK_ID&lt;br /&gt;
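&lt;br /&gt;
As a quick local sanity check, the n-m:s range syntax maps directly onto a step sequence, so you can preview which task IDs a range like 2-10:2 expands to with plain shell tools, no Slurm needed:&lt;br /&gt;
&lt;br /&gt;
```bash
# Preview the task IDs an --array=n-m:s range expands to.
# n=2, m=10, s=2 matches the 2-10:2 example above.
n=2; m=10; s=2
ids=$(seq "$n" "$s" "$m")   # seq start step stop
echo $ids                   # 2 4 6 8 10
```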
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses; one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1, that I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable, and submit each script. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=50-200:50&lt;br /&gt;
RUNSIZE=$SLURM_ARRAY_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and Slurm understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to take each line of my dataset and generate a new submit script and submit the job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     sbatch ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as sbatch has to verify each job as it is submitted. This can be done easily with array jobs, as long as you know the number of lines in the dataset. That number can be obtained with 'wc -l dataset.txt'; in this case, let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses a subshell via `, and has the sed command print out only the line number $SLURM_ARRAY_TASK_ID out of the file dataset.txt.&lt;br /&gt;
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so sbatch doesn't have to verify as many.&lt;br /&gt;
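&lt;br /&gt;
You can try the sed line-selection trick locally, outside of any Slurm job, by setting SLURM_ARRAY_TASK_ID yourself. The stand-in dataset below is just for illustration:&lt;br /&gt;
&lt;br /&gt;
```bash
# Simulate one array task locally: build a tiny stand-in dataset,
# pick a task ID by hand, and check which line sed selects.
printf 'alpha\nbeta\ngamma\ndelta\n' > /tmp/mini_dataset.txt

# wc -l reports the line count you would use for --array=1-N
nlines=$(wc -l < /tmp/mini_dataset.txt)
echo "$nlines"          # 4

SLURM_ARRAY_TASK_ID=3
line=$(sed -n "${SLURM_ARRAY_TASK_ID}p" /tmp/mini_dataset.txt)
echo "$line"            # gamma
```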
&lt;br /&gt;
To give you an idea about time saved: submitting 1 job takes 1-2 seconds. By extension, if you are submitting 5000, that is 5,000-10,000 seconds, or 1.5-3 hours.&lt;br /&gt;
&lt;br /&gt;
== Checkpoint/Restart using DMTCP ==&lt;br /&gt;
&lt;br /&gt;
DMTCP is Distributed Multi-Threaded CheckPoint software that will checkpoint your application without modification, and&lt;br /&gt;
can be set up to automatically restart your job from the last checkpoint if for example the node you are running on fails.  &lt;br /&gt;
This has been tested successfully&lt;br /&gt;
on Beocat for some scalar and OpenMP codes, but has failed on all MPI tests so far.  We would like to encourage users to&lt;br /&gt;
try DMTCP out if their non-MPI jobs run longer than 24 hours.  If you want to try this, please contact us first since we are still&lt;br /&gt;
experimenting with DMTCP.&lt;br /&gt;
&lt;br /&gt;
The sample job submission script below shows how dmtcp_launch is used to start the application, then dmtcp_restart is used to start from a checkpoint if the job has failed and been rescheduled.&lt;br /&gt;
If you are putting this in an array script, then add the Slurm array task ID to the end of the checkpoint directory name&lt;br /&gt;
like &amp;lt;B&amp;gt;ckptdir=ckpt-$SLURM_ARRAY_TASK_ID&amp;lt;/B&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --mem=50G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS/2016.4-foss-2017beocatb-hybrid&lt;br /&gt;
  module load DMTCP&lt;br /&gt;
  module list&lt;br /&gt;
  &lt;br /&gt;
  ckptdir=ckpt&lt;br /&gt;
  mkdir -p $ckptdir&lt;br /&gt;
  export DMTCP_CHECKPOINT_DIR=$ckptdir&lt;br /&gt;
  &lt;br /&gt;
  if ! ls -1 $ckptdir | grep -c dmtcp_restart_script &amp;gt; /dev/null&lt;br /&gt;
  then&lt;br /&gt;
     echo &amp;quot;Using dmtcp_launch to start the app the first time&amp;quot;&lt;br /&gt;
     dmtcp_launch --no-coordinator mpirun -np 1 -x OMP_NUM_THREADS=4 gmx_mpi mdrun -nsteps 50000 -ntomp 4 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  else&lt;br /&gt;
     echo &amp;quot;Using dmtcp_restart from $ckptdir to continue from a checkpoint&amp;quot;&lt;br /&gt;
     dmtcp_restart $ckptdir/*.dmtcp&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
You will need to run several tests to verify that DMTCP is working properly with your application.&lt;br /&gt;
First, run a short test without DMTCP and another with DMTCP with the checkpoint interval set to 5 minutes&lt;br /&gt;
by adding the line &amp;lt;B&amp;gt;export DMTCP_CHECKPOINT_INTERVAL=300&amp;lt;/B&amp;gt; to your script.  Then use &amp;lt;B&amp;gt;kstat -d 1&amp;lt;/B&amp;gt; to&lt;br /&gt;
check that the memory in both runs is close to the same.  Also use this information to calculate the time &lt;br /&gt;
that each checkpoint takes.  In most cases I've seen times less than a minute for checkpointing that will normally&lt;br /&gt;
be done once each hour.  If your application is taking more time, let us know.  Sometimes this can be sped up&lt;br /&gt;
by simply turning off compression by adding the line &amp;lt;B&amp;gt;export DMTCP_GZIP=0&amp;lt;/B&amp;gt;.  Make sure to remove the&lt;br /&gt;
line where you set the checkpoint interval to 300 seconds so that the default time of once per hour will be used.&lt;br /&gt;
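&lt;br /&gt;
The two settings described above can be sketched as a fragment near the top of your submit script; remember to remove the interval line again for production runs so the default one-hour interval applies:&lt;br /&gt;
&lt;br /&gt;
```bash
# Verification-run settings for DMTCP (values from the text above):
export DMTCP_CHECKPOINT_INTERVAL=300   # checkpoint every 5 minutes while testing
export DMTCP_GZIP=0                    # try this if checkpointing takes too long
echo "interval=${DMTCP_CHECKPOINT_INTERVAL}s gzip=${DMTCP_GZIP}"
```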
&lt;br /&gt;
After verifying that your code completes using DMTCP and does not take significantly more time or memory, you&lt;br /&gt;
will need to start a run then &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; it after the first checkpoint, then resubmit the same script to make &lt;br /&gt;
sure that it restarts and runs to completion.  If you are working with an array job script, the last step is to try a few&lt;br /&gt;
array tasks at once to make sure there is no conflict between the jobs.&lt;br /&gt;
&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'srun'. srun uses the exact same command-line arguments as sbatch, but you need to add the following arguments at the end: &amp;lt;tt&amp;gt;--pty bash&amp;lt;/tt&amp;gt;. If no node is available with your resource requirements, srun will tell you something like the following:&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
 srun: Force Terminated job 217&lt;br /&gt;
 srun: error: CPU count per node can not be satisfied&lt;br /&gt;
 srun: error: Unable to allocate resources: Requested node configuration is not available&lt;br /&gt;
Note that, like sbatch, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
&lt;br /&gt;
== Connecting to an existing job ==&lt;br /&gt;
You can connect to an existing job using &amp;lt;B&amp;gt;srun&amp;lt;/B&amp;gt; in the same way that the &amp;lt;B&amp;gt;MonitorNode&amp;lt;/B&amp;gt; command&lt;br /&gt;
allowed us to in the old cluster.  This is essentially like using ssh to get into the node where your job is running which&lt;br /&gt;
can be very useful in allowing you to look at files in /tmp/job# or in running &amp;lt;B&amp;gt;htop&amp;lt;/B&amp;gt; to view the &lt;br /&gt;
activity level for your job.&lt;br /&gt;
&lt;br /&gt;
 srun --jobid=# --pty bash                        where '#' is the job ID number&lt;br /&gt;
&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not support modifying job parameters once the job has been submitted. It can be done, but there are numerous catches, and all of the variations can be a bit problematic; it is normally easier to simply delete the job (using '''scancel ''jobid''''') and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
&lt;br /&gt;
As it is unsupported, this is an exercise left to the reader. A starting point is &amp;lt;tt&amp;gt;man scontrol&amp;lt;/tt&amp;gt;&lt;br /&gt;
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;--gres=killable:1&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
&lt;br /&gt;
''Note: This is a submit-time only request, it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;scancel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
Some users are members of projects that have contributed to Beocat. When a group has contributed nodes, it gets access to a &amp;quot;partition&amp;quot; giving its members priority on those nodes.&lt;br /&gt;
&lt;br /&gt;
In most situations, the scheduler will automatically add those priority partitions to the jobs as submitted. You should not need to include a partition list in your job submission.&lt;br /&gt;
&lt;br /&gt;
There are currently just a few exceptions that we will not automatically add:&lt;br /&gt;
* ksu-chem-mri.q&lt;br /&gt;
* ksu-gen-gpu.q&lt;br /&gt;
* ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
If you have access to any of the non-automatic partitions, and have need of the resources in that partition, you can then alter your &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines to include your new partition:&lt;br /&gt;
 #SBATCH --partition=ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
Otherwise, you shouldn't modify the partition line at all unless you really know what you're doing.&lt;br /&gt;
&lt;br /&gt;
== Graphical Applications ==&lt;br /&gt;
Some applications are graphical and need to have some graphical input/output. We currently accomplish this with X11 forwarding or [[OpenOnDemand]]&lt;br /&gt;
=== OpenOnDemand ===&lt;br /&gt;
[[OpenOnDemand]] is likely the easier and more performant way to run a graphical application on the cluster.&lt;br /&gt;
# visit [https://ondemand.beocat.ksu.edu/ ondemand] and login with your cluster credentials.&lt;br /&gt;
# Check the &amp;quot;Interactive Apps&amp;quot; dropdown. We may have a workflow ready for you. If not, choose the desktop.&lt;br /&gt;
# Select the resources you need.&lt;br /&gt;
# Select launch.&lt;br /&gt;
# A job is now submitted to the cluster; once the job has started, you'll see a Connect button.&lt;br /&gt;
# Use the app as needed. If using the desktop, start your graphical application.&lt;br /&gt;
&lt;br /&gt;
=== X11 Forwarding ===&lt;br /&gt;
==== Connecting with an X11 client ====&lt;br /&gt;
===== Windows =====&lt;br /&gt;
If you are running Windows, we recommend MobaXTerm as your file/ssh manager because it is one relatively simple tool that does everything. MobaXTerm also automatically connects with X11 forwarding enabled.&lt;br /&gt;
===== Linux/OSX =====&lt;br /&gt;
Both Linux and OSX can connect in an X11 forwarding mode. Linux will have all of the tools you need installed already, OSX will need [https://www.xquartz.org/ XQuartz] installed.&lt;br /&gt;
&lt;br /&gt;
Then you will need to change your 'ssh' command slightly:&lt;br /&gt;
&lt;br /&gt;
 ssh -Y eid@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
The '''-Y''' argument tells ssh to set up X11 forwarding.&lt;br /&gt;
==== Starting a Graphical Job ====&lt;br /&gt;
All graphical jobs, by design, must be interactive, so we'll use the srun command. On a headnode, we run the following:&lt;br /&gt;
 # load an X11 enabled application&lt;br /&gt;
 module load Octave&lt;br /&gt;
 # start an X11 job, sbatch arguments are accepted for srun as well, 1 node, 1 hour, 1 gb of memory&lt;br /&gt;
 srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 octave --gui&lt;br /&gt;
&lt;br /&gt;
Because these jobs are interactive, they may not be able to run at all times, depending on how busy the scheduler is at any point in time. '''--pty --x11''' are required arguments setting up the job, and '''octave --gui''' is the command to run inside the job.&lt;br /&gt;
&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The sacct tool will read Slurm's accounting database and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== sacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
sacct -j 1122334455 -l&lt;br /&gt;
# if you don't know the job id, you can look at your jobs started since some day:&lt;br /&gt;
sacct -S 2017-01-01&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|218||218||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||12||00:00:00||FAILED||2:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=12,mem=1G,node=1||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.batch||218.batch||batch||||137940K||dwarf37||0||137940K||1576K||dwarf37||0||1576K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||1.36G||0||0||0||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.0||218.0||qqqqstat||||204212K||dwarf37||0||204212K||1420K||dwarf37||0||1420K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||196.52M||Unknown||Unknown||Unknown||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the columns showing Elapsed and State, you can see that they show 00:00:00 and FAILED respectively. This means that the job started and then promptly ended. This points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|220||220||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:01:27||TIMEOUT||0:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=1,mem=1G,node=1||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.batch||220.batch||batch||||370716K||dwarf37||0||370716K||7060K||dwarf37||0||7060K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:28||CANCELLED||0:15||1.23G||0||0||0||1Gn||0||0.16M||dwarf37||0||0.16M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.0||220.0||sleep||||204212K||dwarf37||0||107916K||1000K||dwarf37||0||620K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:27||CANCELLED||0:15||1.54G||Unknown||Unknown||Unknown||1Gn||0||0.05M||dwarf37||0||0.05M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we can see some pointers to the issue. The job ran out of time (TIMEOUT) and then was killed (CANCELLED).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|221||221||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:00:00||CANCELLED by 0||0:0||||Unknown||Unknown||Unknown||1Mn||||||||||||||||||||||||cpu=1,mem=1M,node=1||cpu=1,mem=1M,node=1&lt;br /&gt;
|-&lt;br /&gt;
|221.batch||221.batch||batch||||137940K||dwarf37||0||137940K||1144K||dwarf37||0||1144K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:00:01||CANCELLED||0:15||2.62G||0||0||0||1Mn||0||0||dwarf37||65534||0||0||dwarf37||65534||0||||||||cpu=1,mem=1M,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we see it was &amp;quot;CANCELLED by 0&amp;quot;, then we look at the AllocTRES column to see our allocated resources, and see that 1MB of memory was granted. Combine that with the column &amp;quot;MaxRSS&amp;quot; and we see that the memory granted was less than the memory we tried to use, thus the job was &amp;quot;CANCELLED&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=1138</id>
		<title>AdvancedSlurm</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=1138"/>
		<updated>2025-06-26T13:46:48Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Running from a sbatch Submit Script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SlurmBasics]] page, we have a couple other requestable resources:&lt;br /&gt;
 Valid gres options are:&lt;br /&gt;
 gpu[[:type]:count]&lt;br /&gt;
 fabric[[:type]:count]&lt;br /&gt;
Generally, if you don't know whether you need a particular resource, you should use the default. The full list of valid gres options can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;srun --gres=help&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Fabric ===&lt;br /&gt;
We currently offer 3 &amp;quot;fabrics&amp;quot; as requestable resources in Slurm. The &amp;quot;count&amp;quot; specified is the line-rate (in Gigabits-per-second) of the connection on the node.&lt;br /&gt;
==== Infiniband ====&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. InfiniBand does absolutely no good for jobs running on a single machine. InfiniBand is a high-speed host-to-host communication fabric, most often used in conjunction with MPI jobs (discussed below). Several times we have had jobs that could have run just fine, except that the submitter requested InfiniBand and all the nodes with InfiniBand were busy. In fact, some of our fastest nodes do not have InfiniBand, so by requesting it when you don't need it, you may actually be slowing down your job. To request InfiniBand, add &amp;lt;tt&amp;gt;--gres=fabric:ib:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
==== ROCE ====&lt;br /&gt;
ROCE, like InfiniBand, is a high-speed host-to-host communication layer, again used most often with MPI. Most of our nodes are ROCE-enabled, but this option lets you guarantee that the nodes allocated to your job will be able to communicate with ROCE. To request ROCE, add &amp;lt;tt&amp;gt;--gres=fabric:roce:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
&lt;br /&gt;
==== Ethernet ====&lt;br /&gt;
Ethernet is another communication fabric. All of our nodes are connected by Ethernet; this option simply allows you to specify the interconnect speed. Speeds are selected in units of Gbps, with all nodes supporting 1 Gbps or above. The currently available speeds for Ethernet are &amp;lt;tt&amp;gt;1, 10, 40, and 100&amp;lt;/tt&amp;gt;. To select nodes with 40 Gbps and above, you could specify &amp;lt;tt&amp;gt;--gres=fabric:eth:40&amp;lt;/tt&amp;gt; on your sbatch command-line. Since Ethernet is used to connect to the file server, this can also be used to select nodes with fast access for applications doing heavy IO. The Dwarves and Heroes have 40 Gbps Ethernet, and we measure single-stream performance as high as 20 Gbps; if your application&lt;br /&gt;
requires heavy IO, you'll want to avoid the Moles, which are connected to the file server with only 1 Gbps Ethernet.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. 'kstat -g' will show you the GPU nodes and the jobs running on them.  To request a GPU, add &amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt;, for example, to request 1 GPU for your job; if your job uses multiple nodes, the number of GPUs requested is per-node.  You can also request a given type of GPU (use 'kstat -g -l' to show the types): use &amp;lt;tt&amp;gt;--gres=gpu:geforce_gtx_1080_ti:1&amp;lt;/tt&amp;gt; for a 1080Ti GPU on the Wizards or Dwarves, or &amp;lt;tt&amp;gt;--gres=gpu:quadro_gp100:1&amp;lt;/tt&amp;gt; for the P100 GPUs on Wizard20-21, which are best for 64-bit codes like Vasp.  Most of these GPU nodes are owned by various groups.  If you want access to GPU nodes and your group does not own any, we can add you to the &amp;lt;tt&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/tt&amp;gt; group, which has priority on Dwarf36-39.  For more information on compiling CUDA code, click on this [[CUDA]] link.&lt;br /&gt;
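For example, a hypothetical submission requesting one GPU of a specific type might look like the line below (the script name MyGpuJob.sh and the core/memory numbers are placeholders, not recommendations):

```shell
# Request one RTX 3090, 4 cores, and 4G per core for a placeholder script
sbatch --gres=gpu:geforce_rtx_3090:1 --cpus-per-task=4 --mem-per-cpu=4G MyGpuJob.sh
```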
&lt;br /&gt;
A listing of the current types of gpus can be gathered with this command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
scontrol show nodes | grep CfgTRES | tr ',' '\n' | awk -F '[:=]' '/gres\/gpu:/ { print $2 }' | sort -u&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
At the time of this writing, that command produces this list:&lt;br /&gt;
* geforce_gtx_1080_ti&lt;br /&gt;
* geforce_rtx_2080_ti&lt;br /&gt;
* geforce_rtx_3090&lt;br /&gt;
* l40s&lt;br /&gt;
* quadro_gp100&lt;br /&gt;
* rtx_a4000&lt;br /&gt;
* rtx_a6000&lt;br /&gt;
&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
''Intra''node jobs run on many cores in the same node. These jobs can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or any programming language that has the concept of ''threads''. Often, your program will need to know how many cores you want it to use, and many will use all available cores if not told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the sbatch directives '&amp;lt;tt&amp;gt;--nodes=1 --cpus-per-task=n&amp;lt;/tt&amp;gt;' or '&amp;lt;tt&amp;gt;--nodes=1 --ntasks-per-node=n&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $SLURM_CPUS_ON_NODE to tell how many cores you've been allocated.&lt;br /&gt;
&lt;br /&gt;
=== Internode (MPI) jobs ===&lt;br /&gt;
''Inter''node jobs can utilize many cores on one or more nodes. Communicating between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI, but MPI also allows an application to run on multiple cores within a node. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;--cpus-per-task=''n''&amp;lt;/tt&amp;gt;', you would use '&amp;lt;tt&amp;gt;--nodes=''n'' --tasks-per-node=''m''&amp;lt;/tt&amp;gt;' ''or'' '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks=''o''&amp;lt;/tt&amp;gt;' for your sbatch request, where ''n'' is the number of nodes you want, ''m'' is the number of cores per node you need, and ''o'' is the total number of cores you need.&lt;br /&gt;
&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=6 --ntasks-per-node=4&amp;lt;/tt&amp;gt; will give you 4 cores on each of 6 nodes for a total of 24 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=40&amp;lt;/tt&amp;gt; will give you 40 cores spread across any number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=10 --ntasks=100&amp;lt;/tt&amp;gt; will give you a total of 100 cores across 10 nodes.&lt;br /&gt;
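Putting those pieces together, a minimal MPI submit script might look like this sketch (my_mpi_program is a placeholder for your MPI-enabled executable; any module setup your code needs is omitted):

```shell
#!/bin/bash
#SBATCH --nodes=4             # 4 nodes
#SBATCH --ntasks-per-node=8   # 8 MPI ranks per node, 32 ranks total
#SBATCH --mem-per-cpu=2G      # memory is requested per core
#SBATCH --time=2:00:00        # 2 hour runtime limit

# mpirun launches one copy of the program for each allocated task
mpirun ./my_mpi_program
```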
&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
Memory requests are easiest when they are specified '''per core'''. For instance, if you specified the following: '&amp;lt;tt&amp;gt;--ntasks=20 --mem-per-cpu=20G&amp;lt;/tt&amp;gt;', your job would have access to 400GB of memory total.&lt;br /&gt;
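The arithmetic is just tasks multiplied by per-core memory; a quick shell check mirroring the numbers above:

```shell
# 20 tasks x 20G per CPU = total memory available to the job
ntasks=20
mem_per_cpu_gb=20
echo "$((ntasks * mem_per_cpu_gb))GB total"   # prints 400GB total
```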
== Other Handy Slurm Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs, aside from resource requests, is having Slurm email you when a job changes its status. This may require two directives to sbatch:  &amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt;.&lt;br /&gt;
==== --mail-type ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; is used to tell Slurm to notify you about certain conditions. Options are comma separated and include the following&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Option!!Explanation&lt;br /&gt;
|-&lt;br /&gt;
| NONE || This disables event-based mail&lt;br /&gt;
|-&lt;br /&gt;
| BEGIN || Sends a notification when the job begins&lt;br /&gt;
|-&lt;br /&gt;
| END || Sends a notification when the job ends&lt;br /&gt;
|-&lt;br /&gt;
| FAIL || Sends a notification when the job fails.&lt;br /&gt;
|-&lt;br /&gt;
| REQUEUE || Sends a notification if the job is put back into the queue from a running state&lt;br /&gt;
|-&lt;br /&gt;
| STAGE_OUT || Burst buffer stage out and teardown completed&lt;br /&gt;
|-&lt;br /&gt;
| ALL || Equivalent to BEGIN,END,FAIL,REQUEUE,STAGE_OUT&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT || Notifies if the job ran out of time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_90 || Notifies when the job has used 90% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_80 || Notifies when the job has used 80% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_50 || Notifies when the job has used 50% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| ARRAY_TASKS || Modifies the BEGIN, END, and FAIL options to apply to each array task (instead of notifying for the entire job)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== --mail-user ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; is optional. It is only needed if you intend to send these job status updates to a different e-mail address than the one you provided on the [https://acount.beocat.ksu.edu/user Account Request Page]. It is specified with the following argument to sbatch: &amp;lt;tt&amp;gt;--mail-user=someone@somecompany.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
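For instance, to be emailed at a hypothetical address only when a job ends or fails, the two directives can be combined like this:

```shell
# someone@somecompany.com and MyScript.sh are placeholders
sbatch --mail-type=END,FAIL --mail-user=someone@somecompany.com MyScript.sh
```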
&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-J ''JobName''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
&lt;br /&gt;
=== Separating Output Streams ===&lt;br /&gt;
Normally, Slurm will create one output file, containing both STDERR and STDOUT. If you want both of these to be separated into two files, you can use the sbatch directives '&amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt;' and '&amp;lt;tt&amp;gt;--error&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! option !! default !! example&lt;br /&gt;
|-&lt;br /&gt;
| --output || slurm-%j.out || slurm-206.out&lt;br /&gt;
|-&lt;br /&gt;
| --error || slurm-%j.out || slurm-206.out&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;tt&amp;gt;%j&amp;lt;/tt&amp;gt; above indicates that it should be replaced with the job id.&lt;br /&gt;
&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, Slurm jobs run from the directory you submitted them from (the &amp;quot;current working directory&amp;quot; at submission time), which is what most programs expect. If your job needs to run from a different directory, you can use the '&amp;lt;tt&amp;gt;--chdir&amp;lt;/tt&amp;gt;' directive to set the working directory for the job.&lt;br /&gt;
=== Running in a specific class of machine ===&lt;br /&gt;
If you want to run on a specific class of machines, e.g., the Dwarves, you can add the flag &amp;quot;--constraint=dwarves&amp;quot; to select any of those machines.&lt;br /&gt;
&lt;br /&gt;
=== Processor Constraints ===&lt;br /&gt;
Because Beocat is a heterogenous cluster (we have machines from many years in the cluster), not all of our processors support every new and fancy feature. You might have some applications that require some newer processor features, so we provide a mechanism to request those.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--constraint&amp;lt;/tt&amp;gt; tells the cluster to apply constraints to the types of nodes that the job can run on. For instance, we know of several applications that must be run on chips that have &amp;quot;AVX&amp;quot; processor extensions. To do that, you would specify &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; on your ''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt;'' '''or''' ''&amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;'' command lines.&lt;br /&gt;
Using &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; will prohibit your job from running on the Mages, while &amp;lt;tt&amp;gt;--constraint=avx2&amp;lt;/tt&amp;gt; will eliminate the Elves as well as the Mages.&lt;br /&gt;
&lt;br /&gt;
=== Slurm Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to set up your scripts correctly. Here is a listing of environment variables that Slurm makes available to you. Of course, the values of these variables will differ based on many factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
CUDA_VISIBLE_DEVICES=NoDevFiles&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
GPU_DEVICE_ORDINAL=NoDevFiles&lt;br /&gt;
HOSTNAME=dwarf37&lt;br /&gt;
SLURM_CHECKPOINT_IMAGE_DIR=/var/slurm/checkpoint&lt;br /&gt;
SLURM_CLUSTER_NAME=beocat&lt;br /&gt;
SLURM_CPUS_ON_NODE=1&lt;br /&gt;
SLURM_DISTRIBUTION=cyclic&lt;br /&gt;
SLURMD_NODENAME=dwarf37&lt;br /&gt;
SLURM_GTIDS=0&lt;br /&gt;
SLURM_JOB_CPUS_PER_NODE=1&lt;br /&gt;
SLURM_JOB_GID=163587&lt;br /&gt;
SLURM_JOB_ID=202&lt;br /&gt;
SLURM_JOBID=202&lt;br /&gt;
SLURM_JOB_NAME=slurm_simple.sh&lt;br /&gt;
SLURM_JOB_NODELIST=dwarf37&lt;br /&gt;
SLURM_JOB_NUM_NODES=1&lt;br /&gt;
SLURM_JOB_PARTITION=batch.q,killable.q&lt;br /&gt;
SLURM_JOB_QOS=normal&lt;br /&gt;
SLURM_JOB_UID=163587&lt;br /&gt;
SLURM_JOB_USER=mozes&lt;br /&gt;
SLURM_LAUNCH_NODE_IPADDR=10.5.16.37&lt;br /&gt;
SLURM_LOCALID=0&lt;br /&gt;
SLURM_MEM_PER_NODE=1024&lt;br /&gt;
SLURM_NNODES=1&lt;br /&gt;
SLURM_NODEID=0&lt;br /&gt;
SLURM_NODELIST=dwarf37&lt;br /&gt;
SLURM_NPROCS=1&lt;br /&gt;
SLURM_NTASKS=1&lt;br /&gt;
SLURM_PRIO_PROCESS=0&lt;br /&gt;
SLURM_PROCID=0&lt;br /&gt;
SLURM_SRUN_COMM_HOST=10.5.16.37&lt;br /&gt;
SLURM_SRUN_COMM_PORT=37975&lt;br /&gt;
SLURM_STEP_ID=0&lt;br /&gt;
SLURM_STEPID=0&lt;br /&gt;
SLURM_STEP_LAUNCHER_PORT=37975&lt;br /&gt;
SLURM_STEP_NODELIST=dwarf37&lt;br /&gt;
SLURM_STEP_NUM_NODES=1&lt;br /&gt;
SLURM_STEP_NUM_TASKS=1&lt;br /&gt;
SLURM_STEP_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_SUBMIT_DIR=/homes/mozes&lt;br /&gt;
SLURM_SUBMIT_HOST=dwarf37&lt;br /&gt;
SLURM_TASK_PID=23408&lt;br /&gt;
SLURM_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_TOPOLOGY_ADDR=due1121-prod-core-40g-a1,due1121-prod-core-40g-c1.due1121-prod-sw-100g-a9.dwarf37&lt;br /&gt;
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node&lt;br /&gt;
SLURM_UMASK=0022&lt;br /&gt;
SRUN_DEBUG=3&lt;br /&gt;
TERM=screen-256color&lt;br /&gt;
TMPDIR=/tmp&lt;br /&gt;
USER=mozes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is nice to know what hosts you have access to during a job; check SLURM_JOB_NODELIST for that. There are lots of useful environment variables here; I will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly-used variables we see used are $SLURM_CPUS_ON_NODE, $HOSTNAME, and $SLURM_JOB_ID.&lt;br /&gt;
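As a small sketch, a script can read these variables with shell default values so it also works when run outside of a job (the fallback of 1 core is an assumption for testing on a login node):

```shell
# Use the Slurm core count when running under Slurm, otherwise fall back to 1
cores=${SLURM_CPUS_ON_NODE:-1}
echo "allocated cores: ${cores}"
```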
&lt;br /&gt;
== Running from a sbatch Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'sbatch --mem-per-cpu=2G --time=10:00 --cpus-per-task=8 -J MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which outlines all of these for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample sbatch script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of a line is ignored. However, in&lt;br /&gt;
## the case of sbatch, lines beginning with #SBATCH are commands for sbatch&lt;br /&gt;
## itself, so I have taken the convention here of starting *every* line with a&lt;br /&gt;
## '#'. Just delete the first '#' if you want to use that line, and then modify&lt;br /&gt;
## it for your own purposes. The only exception here is the first line, which&lt;br /&gt;
## *must* be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## There is one strict rule for guaranteeing Slurm reads all of your options:&lt;br /&gt;
## Do not put *any* lines above your resource requests that aren't either:&lt;br /&gt;
##    1) blank. (no other characters)&lt;br /&gt;
##    2) comments (lines must begin with '#')&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##SBATCH --mem-per-cpu=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime in DD-HH:MM:SS form. Default is 1 hour (1:00:00)&lt;br /&gt;
##SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it.&lt;br /&gt;
##SBATCH --gres=fabric:ib:1&lt;br /&gt;
&lt;br /&gt;
## GPU directive. If You don't know what this is, you probably don't need it&lt;br /&gt;
##SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
## number of cores/nodes:&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from contacting Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket]&lt;br /&gt;
## to see how we can assist in getting your &lt;br /&gt;
## job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
##SBATCH --cpus-per-task=1&lt;br /&gt;
##SBATCH --cpus-per-task=12&lt;br /&gt;
##SBATCH --nodes=2 --tasks-per-node=1&lt;br /&gt;
##SBATCH --tasks=20&lt;br /&gt;
&lt;br /&gt;
## Constraints for this job. Maybe you need to run on the elves&lt;br /&gt;
##SBATCH --constraint=elves&lt;br /&gt;
## or perhaps you just need avx processor extensions&lt;br /&gt;
##SBATCH --constraint=avx&lt;br /&gt;
&lt;br /&gt;
## Output file name. Default is slurm-%j.out where %j is the job id.&lt;br /&gt;
##SBATCH --output=MyJobTitle.o%j&lt;br /&gt;
&lt;br /&gt;
## Split the errors into a separate file. Default is the same as output&lt;br /&gt;
##SBATCH --error=MyJobTitle.e%j&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##SBATCH -J MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## Send email when certain criteria are met.&lt;br /&gt;
## Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to&lt;br /&gt;
## BEGIN, END, FAIL, REQUEUE,  and  STAGE_OUT),  STAGE_OUT  (burst buffer stage&lt;br /&gt;
## out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent&lt;br /&gt;
## of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),&lt;br /&gt;
## TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send&lt;br /&gt;
## emails for each array task). Multiple type values may be specified in a&lt;br /&gt;
## comma separated list. Unless the  ARRAY_TASKS  option  is specified, mail&lt;br /&gt;
## notifications on job BEGIN, END and FAIL apply to a job array as a whole&lt;br /&gt;
## rather than generating individual email messages for each task in the job&lt;br /&gt;
## array.&lt;br /&gt;
##SBATCH --mail-type=ALL&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
## Default is to send the mail to the e-mail address entered on the account&lt;br /&gt;
## request form.&lt;br /&gt;
##SBATCH --mail-user=myemail@ksu.edu&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== File Access ==&lt;br /&gt;
Beocat has a variety of options for storing and accessing your files.&lt;br /&gt;
Every user has a home directory for general use, which is limited in size but has decent file-access performance. Those needing more storage may purchase /bulk subdirectories, which have the same decent performance&lt;br /&gt;
but are not backed up. The /fastscratch file system is hosted on ZFS with many NVMe drives that provide much faster&lt;br /&gt;
temporary file access. When fast IO is critical to application performance, /fastscratch, the local disk on each node, or a&lt;br /&gt;
RAM disk are the best options.&lt;br /&gt;
&lt;br /&gt;
===Home directory===&lt;br /&gt;
&lt;br /&gt;
Every user has a &amp;lt;tt&amp;gt;/homes/''username''&amp;lt;/tt&amp;gt; directory that they drop into when they log into Beocat.  &lt;br /&gt;
The home directory is for general use and provides decent performance for most file IO.  &lt;br /&gt;
Disk space in each home directory is limited to 1 TB, so larger files should be kept in a purchased /bulk&lt;br /&gt;
directory, and there is a limit of 100,000 files in each subdirectory in your account.&lt;br /&gt;
This file system is fully redundant, so 3 specific hard disks would need to fail before any data was lost.&lt;br /&gt;
All files will soon be backed up nightly to a separate file server in Nichols Hall, so if you do accidentally &lt;br /&gt;
delete something it can be recovered.&lt;br /&gt;
&lt;br /&gt;
===Bulk directory===&lt;br /&gt;
&lt;br /&gt;
Bulk data storage may be provided at a cost of $45/TB/year billed monthly. Due to the cost, directories will be provided when we are contacted and provided with payment information.&lt;br /&gt;
&lt;br /&gt;
===Fast Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /fastscratch file system is faster than /bulk or /homes.&lt;br /&gt;
In order to use fastscratch, you first need to make a directory for yourself.  &lt;br /&gt;
Fast Scratch is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the fastscratch file system may get purged after 30 days.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /fastscratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
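A typical pattern, sketched below with placeholder names, is to stage input files onto /fastscratch before the run and move anything worth keeping back afterward:

```shell
# Stage input data onto fast scratch (input.dat is a placeholder)
cp input.dat /fastscratch/$USER/

# Run against the staged copy (app is a placeholder for your program)
app -input /fastscratch/$USER/input.dat -output_directory /fastscratch/$USER/results

# Move results somewhere permanent before the 30-day purge
mv /fastscratch/$USER/results /homes/$USER/
```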
&lt;br /&gt;
===Local disk===&lt;br /&gt;
&lt;br /&gt;
If you are running on a single node, it may also be faster to access your files from the local disk&lt;br /&gt;
on that node.  Each job creates a subdirectory /tmp/job# on the local disk of each node the job uses,&lt;br /&gt;
where '#' is the job ID number.  This can be accessed simply by writing to /tmp rather than&lt;br /&gt;
needing to use /tmp/job#.  &lt;br /&gt;
&lt;br /&gt;
You may need to copy files to the&lt;br /&gt;
local disk at the start of your script, or set the output directory for your application to point&lt;br /&gt;
to the local disk. You'll then need to copy any files you want to keep off the local disk before&lt;br /&gt;
the job finishes, since Slurm will remove all files in your job's /tmp directory when the job completes&lt;br /&gt;
or aborts.  Use 'kstat -l -h' to see how much /tmp space is available on each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files to the tmp directory if needed&lt;br /&gt;
cp $input_files /tmp&lt;br /&gt;
&lt;br /&gt;
# Make an 'out' directory to pass to the app if needed&lt;br /&gt;
mkdir /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Example of running an app and passing the tmp directory in/out&lt;br /&gt;
app -input_directory /tmp -output_directory /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Copy the 'out' directory back to the current working directory after the run&lt;br /&gt;
cp -rp /tmp/out .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAM disk===&lt;br /&gt;
&lt;br /&gt;
If you need ultrafast access to files, you can use a RAM disk, which is a file system set up in the&lt;br /&gt;
memory of the compute node you are running on.  The RAM disk is limited to the memory requested on that node, so you should account for this usage when you request&lt;br /&gt;
memory for your job. Below is an example of how to use the RAM disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files over if necessary&lt;br /&gt;
cp $any_input_files /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Run the application, possibly giving it the path to the RAM disk to use for output files&lt;br /&gt;
app -output_directory /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Copy files from the RAM disk to the current working directory and clean it up&lt;br /&gt;
cp /dev/shm/* .&lt;br /&gt;
rm -rf /dev/shm/*&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When you leave KSU===&lt;br /&gt;
&lt;br /&gt;
If you are done with your account and are leaving KSU, please clean up your directory, move any files&lt;br /&gt;
that need to be kept after you leave to your supervisor's account, and notify us so that we can disable your&lt;br /&gt;
account.  The easiest way to move your files to your supervisor's account is for them to set up&lt;br /&gt;
a subdirectory for you with the appropriate write permissions.  The example below shows moving &lt;br /&gt;
just a user's 'data' subdirectory to their supervisor.  The 'nohup' command is used so that the move will &lt;br /&gt;
continue even if the window you are doing the move from gets disconnected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Supervisor:&lt;br /&gt;
mkdir /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&lt;br /&gt;
# Student:&lt;br /&gt;
nohup mv /homes/$USER/data /bulk/$SUPERVISOR_USERNAME/$USER &amp;amp;&lt;br /&gt;
&lt;br /&gt;
# Once the move is complete, the Supervisor should limit the permissions for the directory again by removing the student's access:&lt;br /&gt;
chown $USER: -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==File Sharing==&lt;br /&gt;
&lt;br /&gt;
This section will cover methods of sharing files with other users within Beocat and on remote systems.&lt;br /&gt;
In the past, Beocat users have been allowed to keep their&lt;br /&gt;
/homes and /bulk directories open so that any other user could&lt;br /&gt;
access files.  In order to bring Beocat into alignment with&lt;br /&gt;
State of Kansas regulations and industry norms, all users must now have their /homes, /bulk, /scratch, and /fastscratch directories&lt;br /&gt;
locked down from other users, but they can still share files and directories within their group or with individual users&lt;br /&gt;
using group and individual ACLs (Access Control Lists) which will be explained below.&lt;br /&gt;
Beocat staff will be exempted from this&lt;br /&gt;
policy as we need to work freely with all users and will manage our&lt;br /&gt;
subdirectories to minimize access.&lt;br /&gt;
&lt;br /&gt;
===Securing your home directory with the setacls script===&lt;br /&gt;
&lt;br /&gt;
If you do not wish to share files or directories with other users, you do not need to do anything,&lt;br /&gt;
as rwx access for other users has already been removed.&lt;br /&gt;
If you want to share files or directories you can either use the '''setacls''' script or configure&lt;br /&gt;
the ACLs (Access Control Lists) manually.&lt;br /&gt;
&lt;br /&gt;
Running '''setacls -h''' will show how to use the script.&lt;br /&gt;
  &lt;br /&gt;
  Eos: setacls -h&lt;br /&gt;
  setacls [-r] [-w] [-g group] [-u user] -d /full/path/to/directory&lt;br /&gt;
  Execute pemission will always be applied, you may also choose r or w&lt;br /&gt;
  Must specify at least one group or user&lt;br /&gt;
  Must specify at least one directory, and it must be the full path&lt;br /&gt;
  Example: setacls -r -g ksu-cis-hpc -u mozes -d /homes/daveturner/shared_dir&lt;br /&gt;
&lt;br /&gt;
You can specify the permissions to be either -r for read or -w for write or you can specify both.&lt;br /&gt;
You can provide a priority group to share with, which is the same as the group used in a --partition=&lt;br /&gt;
statement in a job submission script.  You can also specify users.&lt;br /&gt;
You can specify a file or a directory to share.  If the directory is specified then all files in that&lt;br /&gt;
directory will also be shared, and all files created in the directory later will also be shared.  &lt;br /&gt;
&lt;br /&gt;
The script will set everything up for you, telling you the commands it is executing along the way,&lt;br /&gt;
then showing the resulting ACLs at the end with the '''getfacl''' command.  Below is an example of &lt;br /&gt;
sharing the directory '''test_directory''' in my /bulk/daveturner directory with Nathan.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
Beocat&amp;gt;  cd /bulk/daveturner&lt;br /&gt;
Beocat&amp;gt;  mkdir test_directory&lt;br /&gt;
Beocat&amp;gt;  setacls -r -w -u nathanrwells -d /bulk/daveturner/test_directory&lt;br /&gt;
&lt;br /&gt;
Opening up base directory /bulk/daveturner with X execute permission only&lt;br /&gt;
  setfacl -m u:nathanrwells:X /bulk/daveturner&lt;br /&gt;
&lt;br /&gt;
Setting Xrw for directory/file /bulk/daveturner/test_directory&lt;br /&gt;
  setfacl -m u:nathanrwells:Xrw -R /bulk/daveturner/test_directory&lt;br /&gt;
  setfacl -d -m u:nathanrwells:Xrw -R /bulk/daveturner/test_directory&lt;br /&gt;
&lt;br /&gt;
The ACLs on directory /bulk/daveturner/test_directory are set to:&lt;br /&gt;
&lt;br /&gt;
getfacl: Removing leading '/' from absolute path names&lt;br /&gt;
# file: bulk/daveturner/test_directory&lt;br /&gt;
USER   daveturner        rwx  rwx&lt;br /&gt;
user   nathanrwells      rwx  rwx&lt;br /&gt;
GROUP  daveturner_users  r-x  r-x&lt;br /&gt;
group  beocat_support    r-x  r-x&lt;br /&gt;
mask                     rwx  rwx&lt;br /&gt;
other                    ---  ---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The '''getfacl''' run by the script now shows that user '''nathanrwells''' has &lt;br /&gt;
read and write permissions to that directory and execute access to all directories&lt;br /&gt;
leading up to it.&lt;br /&gt;
&lt;br /&gt;
====Manually configuring your ACLs====&lt;br /&gt;
&lt;br /&gt;
If you want to manually configure the ACLs you can use the directions below to do what the '''setacls''' &lt;br /&gt;
script would do for you.&lt;br /&gt;
You first need to provide the minimum execute access to your /homes&lt;br /&gt;
or /bulk directory before sharing individual subdirectories.  Setting the ACL to execute-only will allow those&lt;br /&gt;
in your group to reach shared subdirectories, while withholding read access means they will not&lt;br /&gt;
be able to list the other files or subdirectories in your main directory. Keep in mind, though, that they can still access those files if they know the exact names,&lt;br /&gt;
so you may want to lock them down individually.  Below is an example of how I would change my&lt;br /&gt;
/homes/daveturner directory to allow the ksu-cis-hpc group execute access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:X /homes/daveturner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your research group owns any nodes on Beocat, then you have a group name that can be used to securely share&lt;br /&gt;
files with others within your group.  Below is an example of creating a directory called 'share_hpc', &lt;br /&gt;
then providing access to my ksu-cis-hpc group&lt;br /&gt;
(my group is ksu-cis-hpc, so I submit jobs with --partition=ksu-cis-hpc.q).&lt;br /&gt;
Using -R makes these changes recursively to all files and directories in that subdirectory, while changing the defaults with the setfacl -d command ensures that files and directories created&lt;br /&gt;
later will be created with these same ACLs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the 'share_hpc' directory.  If you also want&lt;br /&gt;
them to be able to write or modify files in that directory, change the ':rX' to ':rwX' instead, e.g. 'setfacl -d -m g:ksu-cis-hpc:rwX -R share_hpc'.&lt;br /&gt;
&lt;br /&gt;
If you want to know what groups you belong to, use the command below.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
groups&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If your group does not own any nodes, you can still request a group name and manage the participants yourself&lt;br /&gt;
by emailing us at beocathelp@ksu.edu or contacting Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket].&lt;br /&gt;
If you want to share a directory with only a few people, you can manage your ACLs using individual usernames&lt;br /&gt;
instead of a group.&lt;br /&gt;
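The commands mirror the group examples above, with 'u:' in place of 'g:'. In the sketch below, the directory name 'share_private' and the username 'friendeid' are placeholders; substitute a real directory and Beocat username.&lt;br /&gt;

```shell
# Placeholder names: 'share_private' directory, 'friendeid' username
mkdir -p share_private
# Default ACLs so files and directories created later inherit the permission
setfacl -d -m u:friendeid:rX -R share_private
# Actual ACLs on the directory and anything already in it
setfacl -m u:friendeid:rX -R share_private
```

As with the group examples, change ':rX' to ':rwX' if the user should also be able to write.&lt;br /&gt;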
&lt;br /&gt;
You can use the '''getfacl''' command to see which groups have access to a given directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
getfacl share_hpc&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ACLs give you great flexibility in controlling file access at the&lt;br /&gt;
group level.  Below is a more advanced example where I set up a directory to be shared with&lt;br /&gt;
my ksu-cis-hpc group, Dan's ksu-cis-dan group, and an individual user 'mozes', who should also have&lt;br /&gt;
write access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
getfacl share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc_dan_mozes&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  user:mozes:rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  group:ksu-cis-dan:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:user:mozes:rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:group:ksu-cis-dan:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Openly sharing files on the web===&lt;br /&gt;
&lt;br /&gt;
If you create a 'public_html' directory in your home directory, then any files put there will be shared &lt;br /&gt;
openly on the web.  There is no way to restrict who has access to those files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
cd&lt;br /&gt;
mkdir public_html&lt;br /&gt;
# Opt-in to letting the webserver access your home directory:&lt;br /&gt;
setfacl -m g:public_html:x ~/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then access the data from a web browser using the URL:&lt;br /&gt;
&lt;br /&gt;
http://people.beocat.ksu.edu/~your_user_name&lt;br /&gt;
&lt;br /&gt;
This will show a list of the files you have in your public_html subdirectory.&lt;br /&gt;
&lt;br /&gt;
===Globus===&lt;br /&gt;
&lt;br /&gt;
We have a page here dedicated to [[Globus]]&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of Slurm's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to sbatch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  --array=n[-m[:s]]&lt;br /&gt;
     Submits a so-called Array Job, i.e. an array of identical tasks differentiated only by an index number and treated by Slurm&lt;br /&gt;
     almost like a series of jobs. The option argument to --array specifies the range of index numbers that will be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SLURM_ARRAY_TASK_ID. The&lt;br /&gt;
     values n and m will be available through the environment variables SLURM_ARRAY_TASK_MIN and SLURM_ARRAY_TASK_MAX.&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or a range with a step size.&lt;br /&gt;
     Hence, the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SLURM_ARRAY_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array jobs are commonly used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks follow a slightly different naming convention (which can be controlled in the same way as mentioned above).&lt;br /&gt;
 &lt;br /&gt;
     slurm-%A_%a.out&lt;br /&gt;
&lt;br /&gt;
     %A is the SLURM_ARRAY_JOB_ID, and %a is the SLURM_ARRAY_TASK_ID&lt;br /&gt;
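For example, the same %A/%a placeholders can be used to override the default output name via the --output option; the file-name pattern below is illustrative, not required.&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --array=1-4
# One output file per task, e.g. results_<jobid>_<taskid>.out (pattern name is illustrative)
#SBATCH --output=results_%A_%a.out
echo "This is task $SLURM_ARRAY_TASK_ID"
```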
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses; one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1, that I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable and resubmit the script. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=50-200:50&lt;br /&gt;
RUNSIZE=$SLURM_ARRAY_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and Slurm understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to generate a new submit script for each line of my dataset and submit each one. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     sbatch ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as sbatch has to verify each job as it is submitted. This can be done easily with array jobs, as long as you know the number of lines in the dataset. This number can be obtained with '''wc -l dataset.txt'''; in this case, let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses command substitution via backticks, and has the sed command print out only line number $SLURM_ARRAY_TASK_ID of the file dataset.txt.&lt;br /&gt;
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so sbatch doesn't have to verify as many.&lt;br /&gt;
&lt;br /&gt;
To give you an idea about the time saved: submitting 1 job takes 1-2 seconds. By extension, if you are submitting 5000, that is 5,000-10,000 seconds, or 1.5-3 hours.&lt;br /&gt;
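Since the array range must match the dataset, it can also be computed at submission time instead of hard-coding 5000. A minimal sketch (the script name 'my_script.sh' is a placeholder, and the dataset is created here only so the example is self-contained):&lt;br /&gt;

```shell
# Create a tiny example dataset (in practice the file already exists)
printf 'line1\nline2\nline3\n' > dataset.txt
# Count the lines, then use the count as the upper bound of the array range
NLINES=$(wc -l < dataset.txt)
# The sbatch invocation is shown with echo here; 'my_script.sh' is a placeholder
echo "sbatch --array=1-${NLINES} my_script.sh"
```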
&lt;br /&gt;
== Checkpoint/Restart using DMTCP ==&lt;br /&gt;
&lt;br /&gt;
DMTCP (Distributed MultiThreaded CheckPointing) is software that will checkpoint your application without modification, and&lt;br /&gt;
can be set up to automatically restart your job from the last checkpoint if, for example, the node you are running on fails.  &lt;br /&gt;
This has been tested successfully&lt;br /&gt;
on Beocat for some scalar and OpenMP codes, but has failed on all MPI tests so far.  We would like to encourage users to&lt;br /&gt;
try DMTCP out if their non-MPI jobs run longer than 24 hours.  If you want to try this, please contact us first since we are still&lt;br /&gt;
experimenting with DMTCP.&lt;br /&gt;
&lt;br /&gt;
The sample job submission script below shows how dmtcp_launch is used to start the application, then dmtcp_restart is used to start from a checkpoint if the job has failed and been rescheduled.&lt;br /&gt;
If you are putting this in an array script, then add the Slurm array task ID to the end of the checkpoint directory name&lt;br /&gt;
like &amp;lt;B&amp;gt;ckptdir=ckpt-$SLURM_ARRAY_TASK_ID&amp;lt;/B&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --mem=50G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS/2016.4-foss-2017beocatb-hybrid&lt;br /&gt;
  module load DMTCP&lt;br /&gt;
  module list&lt;br /&gt;
  &lt;br /&gt;
  ckptdir=ckpt&lt;br /&gt;
  mkdir -p $ckptdir&lt;br /&gt;
  export DMTCP_CHECKPOINT_DIR=$ckptdir&lt;br /&gt;
  &lt;br /&gt;
  if ! ls -1 $ckptdir | grep -c dmtcp_restart_script &amp;gt; /dev/null&lt;br /&gt;
  then&lt;br /&gt;
     echo &amp;quot;Using dmtcp_launch to start the app the first time&amp;quot;&lt;br /&gt;
     dmtcp_launch --no-coordinator mpirun -np 1 -x OMP_NUM_THREADS=4 gmx_mpi mdrun -nsteps 50000 -ntomp 4 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  else&lt;br /&gt;
     echo &amp;quot;Using dmtcp_restart from $ckptdir to continue from a checkpoint&amp;quot;&lt;br /&gt;
     dmtcp_restart $ckptdir/*.dmtcp&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
You will need to run several tests to verify that DMTCP is working properly with your application.&lt;br /&gt;
First, run a short test without DMTCP and another with DMTCP with the checkpoint interval set to 5 minutes&lt;br /&gt;
by adding the line &amp;lt;B&amp;gt;export DMTCP_CHECKPOINT_INTERVAL=300&amp;lt;/B&amp;gt; to your script.  Then use &amp;lt;B&amp;gt;kstat -d 1&amp;lt;/B&amp;gt; to&lt;br /&gt;
check that the memory in both runs is close to the same.  Also use this information to calculate the time &lt;br /&gt;
that each checkpoint takes.  In most cases I've seen times less than a minute for checkpointing that will normally&lt;br /&gt;
be done once each hour.  If your application is taking more time, let us know.  Sometimes this can be sped up&lt;br /&gt;
by simply turning off compression by adding the line &amp;lt;B&amp;gt;export DMTCP_GZIP=0&amp;lt;/B&amp;gt;.  Make sure to remove the&lt;br /&gt;
line where you set the checkpoint interval to 300 seconds so that the default time of once per hour will be used.&lt;br /&gt;
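For the short test run described above, the two settings look like this near the top of the job script (again, remove the interval line for production runs so the default of once per hour applies):&lt;br /&gt;

```shell
# Testing only: checkpoint every 300 seconds instead of the default once per hour
export DMTCP_CHECKPOINT_INTERVAL=300
# Optional: disable checkpoint compression if checkpointing is slow
export DMTCP_GZIP=0
```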
&lt;br /&gt;
After verifying that your code completes using DMTCP and does not take significantly more time or memory, you&lt;br /&gt;
will need to start a run then &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; it after the first checkpoint, then resubmit the same script to make &lt;br /&gt;
sure that it restarts and runs to completion.  If you are working with an array job script, the last step is to try a few&lt;br /&gt;
array tasks at once to make sure there is no conflict between the jobs.&lt;br /&gt;
&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'srun'. srun uses the exact same command-line arguments as sbatch, but you need to add the following arguments at the end: &amp;lt;tt&amp;gt;--pty bash&amp;lt;/tt&amp;gt;. If no node is available with your resource requirements, srun will tell you something like the following:&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
 srun: Force Terminated job 217&lt;br /&gt;
 srun: error: CPU count per node can not be satisfied&lt;br /&gt;
 srun: error: Unable to allocate resources: Requested node configuration is not available&lt;br /&gt;
Note that, like sbatch, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
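Because srun accepts sbatch-style arguments, resource requests can be combined with --pty bash; the values below are only examples, not recommendations.&lt;br /&gt;

```shell
# Example interactive request: 1 node, 4 cores, 4 GB of memory, 1 hour
srun --nodes=1 --ntasks-per-node=4 --mem=4G --time=1:00:00 --pty bash
```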
&lt;br /&gt;
== Connecting to an existing job ==&lt;br /&gt;
You can connect to an existing job using &amp;lt;B&amp;gt;srun&amp;lt;/B&amp;gt; in the same way that the &amp;lt;B&amp;gt;MonitorNode&amp;lt;/B&amp;gt; command&lt;br /&gt;
allowed us to in the old cluster.  This is essentially like using ssh to get into the node where your job is running, which&lt;br /&gt;
can be very useful for looking at files in /tmp/job# or running &amp;lt;B&amp;gt;htop&amp;lt;/B&amp;gt; to view the &lt;br /&gt;
activity level for your job.&lt;br /&gt;
&lt;br /&gt;
 srun --jobid=# --pty bash                        where '#' is the job ID number&lt;br /&gt;
&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not support modifying job parameters once the job has been submitted. It can be done, but there are numerous catches, and all of the variations can be a bit problematic; it is normally easier to simply delete the job (using '''scancel ''jobid''''') and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
&lt;br /&gt;
As it is unsupported, this is an exercise left to the reader. A starting point is &amp;lt;tt&amp;gt;man scontrol&amp;lt;/tt&amp;gt;.&lt;br /&gt;
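As an unsupported sketch only: scontrol update can change some parameters of a submitted job. The job ID 1234567 below is a placeholder.&lt;br /&gt;

```shell
# Unsupported sketch; 1234567 is a placeholder job ID
scontrol update JobId=1234567 TimeLimit=04:00:00
# Check what the scheduler now thinks the job looks like
scontrol show job 1234567
```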
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;--gres=killable:1&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
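For example, inside a submit script it looks like the sketch below; the application name './my_app' is a placeholder.&lt;br /&gt;

```shell
#!/bin/bash
# Mark the job killable so it may run on owned nodes (and may be requeued)
#SBATCH --gres=killable:1
#SBATCH --time=1:00:00
./my_app   # placeholder application
```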
&lt;br /&gt;
''Note: This is a submit-time only request, it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;scancel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
Some users are members of projects that have contributed to Beocat. When a group has contributed nodes, the group gets access to a &amp;quot;partition&amp;quot; giving its members priority on those nodes.&lt;br /&gt;
&lt;br /&gt;
In most situations, the scheduler will automatically add those priority partitions to the jobs as submitted. You should not need to include a partition list in your job submission.&lt;br /&gt;
&lt;br /&gt;
There are currently just a few exceptions that we will not automatically add:&lt;br /&gt;
* ksu-chem-mri.q&lt;br /&gt;
* ksu-gen-gpu.q&lt;br /&gt;
* ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
If you have access to any of the non-automatic partitions, and have need of the resources in that partition, you can then alter your &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines to include your new partition:&lt;br /&gt;
 #SBATCH --partition=ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
Otherwise, you shouldn't modify the partition line at all unless you really know what you're doing.&lt;br /&gt;
&lt;br /&gt;
== Graphical Applications ==&lt;br /&gt;
Some applications are graphical and need to have some graphical input/output. We currently accomplish this with X11 forwarding or [[OpenOnDemand]]&lt;br /&gt;
=== OpenOnDemand ===&lt;br /&gt;
[[OpenOnDemand]] is likely the easier and more performant way to run a graphical application on the cluster.&lt;br /&gt;
# Visit [https://ondemand.beocat.ksu.edu/ ondemand] and log in with your cluster credentials.&lt;br /&gt;
# Check the &amp;quot;Interactive Apps&amp;quot; dropdown. We may have a workflow ready for you. If not, choose the desktop.&lt;br /&gt;
# Select the resources you need.&lt;br /&gt;
# Select launch.&lt;br /&gt;
# A job is now submitted to the cluster, and once the job has started you'll see a Connect button.&lt;br /&gt;
# Use the app as needed. If using the desktop, start your graphical application.&lt;br /&gt;
&lt;br /&gt;
=== X11 Forwarding ===&lt;br /&gt;
==== Connecting with an X11 client ====&lt;br /&gt;
===== Windows =====&lt;br /&gt;
If you are running Windows, we recommend MobaXTerm as your file/SSH manager because it is one relatively simple tool that does everything. MobaXTerm also automatically connects with X11 forwarding enabled.&lt;br /&gt;
===== Linux/OSX =====&lt;br /&gt;
Both Linux and OSX can connect in an X11 forwarding mode. Linux will have all of the tools you need installed already, OSX will need [https://www.xquartz.org/ XQuartz] installed.&lt;br /&gt;
&lt;br /&gt;
Then you will need to change your 'ssh' command slightly:&lt;br /&gt;
&lt;br /&gt;
 ssh -Y eid@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
The '''-Y''' argument tells ssh to set up X11 forwarding.&lt;br /&gt;
==== Starting a Graphical Job ====&lt;br /&gt;
All graphical jobs, by design, must be interactive, so we'll use the srun command. On a headnode, we run the following:&lt;br /&gt;
 # load an X11 enabled application&lt;br /&gt;
 module load Octave&lt;br /&gt;
 # start an X11 job; sbatch arguments are accepted for srun as well: 1 node, 1 hour, 1 GB of memory&lt;br /&gt;
 srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 octave --gui&lt;br /&gt;
&lt;br /&gt;
Because these jobs are interactive, they may not be able to run at all times, depending on how busy the scheduler is at any point in time. '''--pty --x11''' are required arguments setting up the job, and '''octave --gui''' is the command to run inside the job.&lt;br /&gt;
&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The sacct tool will read Slurm's accounting database and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== sacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
sacct -j 1122334455 -l&lt;br /&gt;
# if you don't know the job id, you can look at your jobs started since some day:&lt;br /&gt;
sacct -S 2017-01-01&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
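The -l output is very wide; the --format option narrows it to the fields you care about. The sketch below picks a few fields useful for the failure modes discussed on this page (the job ID is a placeholder).&lt;br /&gt;

```shell
# Show only the most useful debugging fields; 1122334455 is a placeholder job ID
sacct -j 1122334455 --format=JobID,JobName,Elapsed,State,ExitCode,MaxRSS,ReqMem
```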
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|218||218||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||12||00:00:00||FAILED||2:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=12,mem=1G,node=1||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.batch||218.batch||batch||||137940K||dwarf37||0||137940K||1576K||dwarf37||0||1576K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||1.36G||0||0||0||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.0||218.0||qqqqstat||||204212K||dwarf37||0||204212K||1420K||dwarf37||0||1420K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||196.52M||Unknown||Unknown||Unknown||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the columns showing Elapsed and State, you can see that they show 00:00:00 and FAILED respectively. This means that the job started and then promptly ended. This points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|220||220||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:01:27||TIMEOUT||0:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=1,mem=1G,node=1||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.batch||220.batch||batch||||370716K||dwarf37||0||370716K||7060K||dwarf37||0||7060K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:28||CANCELLED||0:15||1.23G||0||0||0||1Gn||0||0.16M||dwarf37||0||0.16M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.0||220.0||sleep||||204212K||dwarf37||0||107916K||1000K||dwarf37||0||620K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:27||CANCELLED||0:15||1.54G||Unknown||Unknown||Unknown||1Gn||0||0.05M||dwarf37||0||0.05M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we can see some pointers to the issue. The job ran out of time (TIMEOUT) and then was killed (CANCELLED).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|221||221||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:00:00||CANCELLED by 0||0:0||||Unknown||Unknown||Unknown||1Mn||||||||||||||||||||||||cpu=1,mem=1M,node=1||cpu=1,mem=1M,node=1&lt;br /&gt;
|-&lt;br /&gt;
|221.batch||221.batch||batch||||137940K||dwarf37||0||137940K||1144K||dwarf37||0||1144K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:00:01||CANCELLED||0:15||2.62G||0||0||0||1Mn||0||0||dwarf37||65534||0||0||dwarf37||65534||0||||||||cpu=1,mem=1M,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, we see it was &amp;quot;CANCELLED by 0&amp;quot;, then we look at the AllocTRES column to see our allocated resources, and see that 1MB of memory was granted. Combine that with the column &amp;quot;MaxRSS&amp;quot; and we see that the memory granted was less than the memory we tried to use, thus the job was &amp;quot;CANCELLED&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=GalaxyDocs&amp;diff=1137</id>
		<title>GalaxyDocs</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=GalaxyDocs&amp;diff=1137"/>
		<updated>2025-06-26T13:46:01Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Requesting Help */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Galaxy? ==&lt;br /&gt;
[https://galaxyproject.org/ Galaxy] is a scientific workflow, data integration, and data and analysis persistence and publishing platform that aims to make computational biology accessible to research scientists that do not have computer programming or systems administration experience.&lt;br /&gt;
&lt;br /&gt;
== How do I access Galaxy? == &lt;br /&gt;
Access to Beocat's local instance of Galaxy is easy. Simply navigate to [https://galaxy.beocat.ksu.edu/ https://galaxy.beocat.ksu.edu/] and sign in if prompted using the Keycloak login. This will use your Beocat EID and password to log in; you will also be prompted to authenticate against DUO.&lt;br /&gt;
&lt;br /&gt;
Please note that this utilizes Beocat's /bulk directory. This is a billed directory, and as such, if usage in your respective upload directory for Galaxy exceeds the billing threshold (1 TB), we will contact you regarding remediation.&lt;br /&gt;
&lt;br /&gt;
== Upload Larger Files to Galaxy ==&lt;br /&gt;
Have some larger files that need to be uploaded to Galaxy? We provide some documentation on how to upload files directly to Beocat and then import them to Galaxy for use. It can be found here: [[Galaxy_File_Upload| Galaxy File Upload]]&lt;br /&gt;
&lt;br /&gt;
== How do I use Galaxy? ==&lt;br /&gt;
&lt;br /&gt;
After accessing our local instance of Galaxy you should have access to all installed tools which will be in the tool panel on the left-hand side of your home screen on Galaxy. It should look like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Galaxy Toolbox.png|Tool Panel]]&lt;br /&gt;
&lt;br /&gt;
Each category (Get Data, Send Data, MetaGenomics Tools, etc.) expands to reveal subcategories that help sort the many installed tools. Each tool is placed under its respective category once it is installed.&lt;br /&gt;
&lt;br /&gt;
From here you can select a tool to use and submit to the Slurm cluster. After opening a tool you will be given options (if available) to configure it, including its inputs, its behavior (within its constraints), its outputs, and the compute resources the tool will submit with. This means that you can specify the resources given to Slurm jobs by expanding the drop-down menu &amp;quot;Job Resource Parameters&amp;quot; and selecting &amp;quot;Specify job resource parameters&amp;quot;, which will show the following options to modify.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;Processors&amp;lt;/B&amp;gt;: Number of processing cores, 'ppn' value (1-128); this is equivalent to Slurm's &amp;quot;--cpus-per-task&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;B&amp;gt;Memory&amp;lt;/B&amp;gt;: Memory size in gigabytes, 'pmem' value (1-1500). Note that the job scheduler uses --mem-per-cpu to allocate memory for your Slurm job. This means the number given for Memory will be multiplied by the Processors count from above, e.g. 2 Processors with 5 Memory will be 10 GB of memory.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;B&amp;gt;Priority Queue&amp;lt;/B&amp;gt;: If you have access to a priority queue, and would like to use it, enter the partition name here.&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;B&amp;gt;Runtime&amp;lt;/B&amp;gt;: How long you want your job to run, in hours. &lt;br /&gt;
&lt;br /&gt;
If you fail to change your job resource parameters, your job will submit with the default resources: 5 GB of memory, 1 CPU, 1 hour of runtime, and no priority queue.&lt;br /&gt;
&lt;br /&gt;
=== Canceling a running job ===&lt;br /&gt;
While Galaxy does submit to Slurm, you will not be able to cancel a job the way you typically would. With Galaxy, to cancel an upcoming or currently running job, simply press the trash can icon next to the name of your job.&lt;br /&gt;
&lt;br /&gt;
== Tool Requests ==&lt;br /&gt;
If you are missing a specific tool and would like to have it added to Galaxy, please contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] with a link to the tool. Additionally, you can browse through Galaxy's own [https://toolshed.g2.bx.psu.edu/ toolshed] to make a recommendation.&lt;br /&gt;
&lt;br /&gt;
== Data Management ==&lt;br /&gt;
&lt;br /&gt;
Galaxy follows the typical costs for bulk data storage, as Galaxy utilizes /bulk/galaxy for storage. Bulk data storage may be provided at a cost of $45/TB/year, billed monthly and starting at 1TB of usage. Users can easily see how much data they are using in Galaxy by checking the top right corner of the Galaxy home page. This will say &amp;quot;Using ####MB/GB/TB&amp;quot;, above your histories.&lt;br /&gt;
&lt;br /&gt;
[[File:Galaxy_Data_usage_example.png|Usage Data]]&lt;br /&gt;
&lt;br /&gt;
Clicking on this usage figure will bring you to a storage dashboard where you can easily manage your files and clean up unused dataset histories.&lt;br /&gt;
&lt;br /&gt;
[[File:Storage_dashboard.png|Storage Dashboard]]&lt;br /&gt;
&lt;br /&gt;
== Requesting Help ==&lt;br /&gt;
To request help with [https://galaxy.beocat.ksu.edu/ https://galaxy.beocat.ksu.edu/], please contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket].&amp;lt;br&amp;gt;&lt;br /&gt;
When requesting help, it is best to give as much information as possible so that we may solve your issue in a timely manner.&lt;br /&gt;
&lt;br /&gt;
== Acknowledgements ==&lt;br /&gt;
Beocat's installation of UseGalaxy is funded through K-INBRE with an Institutional Development Award (IDeA) from the National Institute of General Medical Sciences of the National Institutes of Health under grant number P20GM103418. &lt;br /&gt;
&lt;br /&gt;
This initiative was started through the Data Science Core group to bring easy-to-use, GUI-based computational biology research to students and researchers at Kansas State University through Beocat.&lt;br /&gt;
&lt;br /&gt;
Additional information on K-INBRE can be found [https://www.k-inbre.org/pages/k-inbre_about_bio-core.html here].&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Nautilus&amp;diff=1136</id>
		<title>Nautilus</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Nautilus&amp;diff=1136"/>
		<updated>2025-06-26T13:45:03Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Nautilus */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Nautilus ==&lt;br /&gt;
To access the Nautilus namespace, login using K-State SSO at https://portal.nrp-nautilus.io/ . Once you have done so, contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] and request to be added to the Beocat Nautilus namespace (ksu-nrp-cluster). Once you have received notification that you have been added to the namespace, you can continue with the following steps to get set up to use the cluster resources. &lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;SSH into headnode.beocat.ksu.edu&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;SSH into fiona (fiona hosts the kubectl tool we will use for this)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Once on fiona, use the command ‘cd ~’ to ensure you are in your home directory. If you&lt;br /&gt;
are not, this will return you to the top level of your home directory.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;From there you will need to create a .kube directory inside of your home directory. Use&lt;br /&gt;
the command ‘mkdir ~/.kube’&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login to https://portal.nrp-nautilus.io/ using the same login previously used to create your&lt;br /&gt;
account (this will be your K-State EID login)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;From here it is MANDATORY to read the cluster policy documentation provided by the&lt;br /&gt;
National Research Platform for the Nautilus program. You can find this here.&lt;br /&gt;
https://docs.nationalresearchplatform.org/userdocs/start/policies/ &amp;lt;/li&amp;gt;&lt;br /&gt;
a. This is to ensure we do not break any of the rules put in place by the NRP.&lt;br /&gt;
&amp;lt;br&amp;gt;b. Return to https://portal.nrp-nautilus.io/ and accept the Acceptable Use Policy (AUP)&lt;br /&gt;
&amp;lt;li&amp;gt;Next, return to the website specified in step 5, in the top right corner of the page press&lt;br /&gt;
the “Get Config” option. &amp;lt;/li&amp;gt;&lt;br /&gt;
a. This will download a file called ‘config’&lt;br /&gt;
&amp;lt;li&amp;gt;You will need to move the file to your ~/.kube directory created in step 4.&amp;lt;/li&amp;gt;&lt;br /&gt;
a. To do this you can copy and paste the contents through the command line&lt;br /&gt;
&amp;lt;br&amp;gt;b. You can also utilize the OpenOnDemand tool to upload the file through the web&lt;br /&gt;
interface. Information for this tool can be found here:&lt;br /&gt;
https://support.beocat.ksu.edu/Docs/OpenOnDemand&lt;br /&gt;
&amp;lt;br&amp;gt;c. You can also use other means of moving the contents to the Beocat&lt;br /&gt;
headnodes/your home directory, but these are just a few examples.&lt;br /&gt;
&amp;lt;br&amp;gt;d. NOTE: Because we added a period before the directory name it is now a hidden directory,&lt;br /&gt;
and the directory will not appear when running a normal ‘ls’, to see the directory you will&lt;br /&gt;
need to run “ls -a” or “ls -la”.&lt;br /&gt;
&amp;lt;li&amp;gt;Once you have read the required documentation, created the .kube directory in your&lt;br /&gt;
home directory, and placed the config file in the '~/.kube' directory, you are now ready to continue!&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Below is an example pod that can be used. It does not request much in the way of resources so you will likely need to change some things. Be sure to change the “name:” field&lt;br /&gt;
underneath “metadata:”. Change the text “test-pod” to “{eid}-pod” where ‘{eid}’ is your&lt;br /&gt;
K-State ID. It will look something like this “dan-pod”.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
  name: test-pod&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: mypod&lt;br /&gt;
    image: ubuntu&lt;br /&gt;
    resources:&lt;br /&gt;
      limits:&lt;br /&gt;
        memory: 400Mi&lt;br /&gt;
        cpu: 100m&lt;br /&gt;
      requests:&lt;br /&gt;
        memory: 100Mi&lt;br /&gt;
        cpu: 100m&lt;br /&gt;
    command: [&amp;quot;sh&amp;quot;, &amp;quot;-c&amp;quot;, &amp;quot;echo 'Im a new pod'&amp;quot;]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Place your .yaml file in the same directory created earlier (~/.kube).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;If you are not already in the .kube directory enter the command “cd ~/.kube” to change&lt;br /&gt;
your current directory.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Now we are going to create our ‘pod’. This will request an Ubuntu container using the&lt;br /&gt;
specifications from above.&amp;lt;/li&amp;gt;&lt;br /&gt;
a. To do this enter the command “kubectl create -f pod1.yaml” NOTE: You must be&lt;br /&gt;
in the same directory that you placed the pod1.yaml file in (in this situation, the above pod config was put into a file named pod1.yaml).&lt;br /&gt;
&amp;lt;br&amp;gt;b. If the command is successful you will see an output of “pod/{eid}-pod created”.&lt;br /&gt;
&amp;lt;li&amp;gt;You will need to wait until the container for the pod is finished creating. You can check&lt;br /&gt;
this by running “kubectl get pods”&amp;lt;/li&amp;gt;&lt;br /&gt;
a. Once you run this command, it will output all the pods currently running or being&lt;br /&gt;
created in the namespace. Look for yours among the list of pods; the name will&lt;br /&gt;
be the same name specified in step 10.&lt;br /&gt;
&amp;lt;br&amp;gt;b. Once you locate your pod, check its STATUS. If the pod says Running, then you&lt;br /&gt;
are good to proceed. If it says Container Creating, then you will need to wait just a&lt;br /&gt;
bit. It should not take long.&lt;br /&gt;
&amp;lt;li&amp;gt;You can now execute and enter the pod by running “kubectl exec -it {eid}-pod --&lt;br /&gt;
/bin/bash”. Where ‘{eid}-pod’ is the pod created in step 13/the name specified in step 10.&amp;lt;/li&amp;gt;&lt;br /&gt;
a. Executing this command will open the pod you created and run a bash console&lt;br /&gt;
on the pod.&lt;br /&gt;
&amp;lt;br&amp;gt;b. NOTE: If you have trouble logging into the pod and are met with a “You must be&lt;br /&gt;
logged in to the server” error, you can run “kubectl proxy”, and after a moment, you can&lt;br /&gt;
cancel the command with a “ctrl+c”. This should remedy the error.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
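The local file-handling steps above can be sketched as follows (a hypothetical sketch assuming your eID is “dan”; only the file steps run locally, while the kubectl commands require fiona's cluster access and your Nautilus config, so they are shown as comments):

```shell
# Sketch of steps 4 and 10-14, assuming an eID of "dan" (substitute your own).
mkdir -p "$HOME/.kube"
cd "$HOME/.kube"
# Abbreviated pod spec standing in for the full example above:
printf 'apiVersion: v1\nkind: Pod\nmetadata:\n  name: test-pod\n' | tee pod1.yaml
# Rename the pod from "test-pod" to "{eid}-pod" as step 10 requires:
sed -i 's/name: test-pod/name: dan-pod/' pod1.yaml
grep 'name:' pod1.yaml
# Then, on fiona with your Nautilus config in ~/.kube:
# kubectl create -f pod1.yaml   # prints "pod/dan-pod created" on success
# kubectl get pods              # wait until STATUS shows Running
# kubectl exec -it dan-pod -- /bin/bash
```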
&lt;br /&gt;
Additional documentation for Kubernetes can be found on the Kubernetes website https://kubernetes.io/docs/home&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=CUDA&amp;diff=1135</id>
		<title>CUDA</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=CUDA&amp;diff=1135"/>
		<updated>2025-06-26T13:44:37Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* CUDA Overview */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== CUDA Overview ==&lt;br /&gt;
[[wikipedia:CUDA|CUDA]] is a feature set for programming nVidia [[wikipedia:Graphics_processing_unit|GPUs]]. We have many dwarf nodes that are CUDA-enabled with 1-2 GPUs and most of the Wizard nodes have 4 GPUs each. Most of these are consumer grade [https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080-ti/ nVidia 1080 Ti graphics cards] that are good for accelerating 32-bit calculations. Dwarf36-38 have two [https://www.nvidia.com/en-us/design-visualization/rtx-a4000/ nVidia RTX A4000 graphics cards] and dwarf39 has two [https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080-ti/ nVidia 1080 Ti graphics cards] that are available for anybody to use, but you'll need to contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] or email beocat@cs.ksu.edu to request being added to the GPU priority group; then you'll need to submit jobs with &amp;lt;B&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/B&amp;gt;.  Wizard20 and wizard21 each have two [https://www.nvidia.com/object/quadro-graphics-with-pascal.html nVidia P100 cards] that are much more costly than the consumer grade 1080Ti cards but can accelerate 64-bit calculations much better.&lt;br /&gt;
&lt;br /&gt;
== Training videos ==&lt;br /&gt;
CUDA Programming Model Overview: [http://www.youtube.com/watch?v=aveYOlBSe-Y http://www.youtube.com/watch?v=aveYOlBSe-Y]&lt;br /&gt;
&lt;br /&gt;
{{#widget:YouTube|id=aveYOlBSe-Y|width=800|height=600}}&lt;br /&gt;
&lt;br /&gt;
CUDA Programming Basics Part I (Host functions): [http://www.youtube.com/watch?v=79VARRFwQgY http://www.youtube.com/watch?v=79VARRFwQgY]&lt;br /&gt;
&lt;br /&gt;
{{#widget:YouTube|id=79VARRFwQgY|width=800|height=600}}&lt;br /&gt;
&lt;br /&gt;
CUDA Programming Basics Part II (Device functions): [http://www.youtube.com/watch?v=G5-iI1ogDW4 http://www.youtube.com/watch?v=G5-iI1ogDW4]&lt;br /&gt;
&lt;br /&gt;
{{#widget:YouTube|id=G5-iI1ogDW4|width=800|height=600}}&lt;br /&gt;
== Compiling CUDA Applications ==&lt;br /&gt;
nvcc is the compiler for CUDA applications. When compiling your applications manually you will need to load a CUDA enabled compiler toolchain (e.g. fosscuda):&lt;br /&gt;
&lt;br /&gt;
* module load fosscuda&lt;br /&gt;
* '''Do not run your CUDA applications on the headnode. We cannot guarantee they will run, and they will give you terrible results if they do.'''&lt;br /&gt;
&lt;br /&gt;
With those two things in mind, you can compile CUDA applications as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load fosscuda&lt;br /&gt;
nvcc &amp;lt;source&amp;gt;.cu -o &amp;lt;output&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
=== Create your Application ===&lt;br /&gt;
Copy the following Application into Beocat as vecadd.cu&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
//  Kernel definition, see also section 4.2.3 of Nvidia Cuda Programming Guide&lt;br /&gt;
__global__  void vecAdd(float* A, float* B, float* C)&lt;br /&gt;
{&lt;br /&gt;
            // threadIdx.x is a built-in variable  provided by CUDA at runtime&lt;br /&gt;
            int i = threadIdx.x;&lt;br /&gt;
       A[i]=0;&lt;br /&gt;
       B[i]=i;&lt;br /&gt;
       C[i] = A[i] + B[i];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
#include  &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#define  SIZE 10&lt;br /&gt;
int  main()&lt;br /&gt;
{&lt;br /&gt;
   int N=SIZE;&lt;br /&gt;
   float A[SIZE], B[SIZE], C[SIZE];&lt;br /&gt;
   float *devPtrA;&lt;br /&gt;
   float *devPtrB;&lt;br /&gt;
   float *devPtrC;&lt;br /&gt;
   int memsize= SIZE * sizeof(float);&lt;br /&gt;
&lt;br /&gt;
   cudaMalloc((void**)&amp;amp;devPtrA, memsize);&lt;br /&gt;
   cudaMalloc((void**)&amp;amp;devPtrB, memsize);&lt;br /&gt;
   cudaMalloc((void**)&amp;amp;devPtrC, memsize);&lt;br /&gt;
   cudaMemcpy(devPtrA, A, memsize,  cudaMemcpyHostToDevice);&lt;br /&gt;
   cudaMemcpy(devPtrB, B, memsize,  cudaMemcpyHostToDevice);&lt;br /&gt;
   // __global__ functions are called:  Func&amp;lt;&amp;lt;&amp;lt; Dg, Db, Ns  &amp;gt;&amp;gt;&amp;gt;(parameter);&lt;br /&gt;
   vecAdd&amp;lt;&amp;lt;&amp;lt;1, N&amp;gt;&amp;gt;&amp;gt;(devPtrA,  devPtrB, devPtrC);&lt;br /&gt;
   cudaMemcpy(C, devPtrC, memsize,  cudaMemcpyDeviceToHost);&lt;br /&gt;
&lt;br /&gt;
   for (int i=0; i&amp;lt;SIZE; i++)&lt;br /&gt;
        printf(&amp;quot;C[%d]=%f\n&amp;quot;,i,C[i]);&lt;br /&gt;
&lt;br /&gt;
  cudaFree(devPtrA);&lt;br /&gt;
  cudaFree(devPtrB);&lt;br /&gt;
  cudaFree(devPtrC);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== Gain Access to a CUDA-capable Node ===&lt;br /&gt;
See our [[AdvancedSlurm|advanced scheduler documentation]]&lt;br /&gt;
=== Compile Your Application ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load fosscuda&lt;br /&gt;
nvcc vecadd.cu -o vecadd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will create a program with the name 'vecadd' (specified by the '-o' flag).&lt;br /&gt;
&lt;br /&gt;
=== Run Your Application ===&lt;br /&gt;
Run the program as you usually would, namely&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./vecadd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
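For batch submission, a minimal job script for this example might look like the following (a sketch: the job name, memory, and time values are illustrative assumptions; '--gres=gpu:1' requests one GPU):

```shell
#!/bin/bash
# Illustrative sbatch script for the vecadd example; resource values are assumptions.
#SBATCH --job-name=vecadd
#SBATCH --gres=gpu:1          # request one GPU
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=4G
#SBATCH --time=0:10:00
module load fosscuda
./vecadd
```

Save this as, say, vecadd.sh and submit it with 'sbatch vecadd.sh'.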
Assuming you don't want to run the program interactively because this is a large job, you can submit a job via sbatch, just be sure to add '&amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt;' to the '''sbatch''' directive.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Galaxy_File_Upload&amp;diff=1134</id>
		<title>Galaxy File Upload</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Galaxy_File_Upload&amp;diff=1134"/>
		<updated>2025-06-26T13:43:37Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Upload Files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Large File Uploads to Galaxy =&lt;br /&gt;
The Galaxy web UI is sometimes inconsistent with large uploads. As such, we have made user directory imports available on Galaxy. You can upload files to /bulk/galaxy/user-space/ by locating the user space that was created for you on first login. The folder is usually named &amp;quot;eid@ksu.edu&amp;quot;, or, if you are from another college (VetMed for instance), &amp;quot;eid@vet.k-state.edu&amp;quot;. You can use the command &amp;quot;ls /bulk/galaxy/user-space&amp;quot; to find the name of your directory. &lt;br /&gt;
&lt;br /&gt;
From here, we have written some instructions on how to utilize this upload method.&lt;br /&gt;
== Upload Files ==&lt;br /&gt;
First, you need to transfer the data onto Beocat; this can be done in any number of ways. We have some documentation on uploading data into Beocat that can be found here (for large files, we suggest using Globus, scp, or an FTP program): https://support.beocat.ksu.edu/Docs/Main_Page#Transferring_data_to_Beocat&lt;br /&gt;
&lt;br /&gt;
Next, due to how Galaxy handles data exposure to the web UI, we need to move this data to a userspace that was created for you in Galaxy on your first login:&lt;br /&gt;
&lt;br /&gt;
Move the data to the following directory (you should be able to move data here; if you are unable to, please contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] or email us at beocat@cs.ksu.edu). Note that the backslash is necessary to escape the @ symbol in your email address: &lt;br /&gt;
/bulk/galaxy/user-space/eid\@vet.k-state.edu/&lt;br /&gt;
&lt;br /&gt;
Next, login to galaxy.beocat.ksu.edu with your eID through KeyCloak. &lt;br /&gt;
&lt;br /&gt;
Then, navigate to the &amp;quot;Shared Data&amp;quot; tab at the top of the page, in the drop-down menu, navigate to &amp;quot;Data Libraries&amp;quot;. This should take you to a relatively empty page that allows you to create a library with a &amp;quot;+ Library&amp;quot; in the top left of the web page. For reference, I have included a screenshot of my Data Libraries: &lt;br /&gt;
&lt;br /&gt;
[[File:Data library creation.png|CreatingDataLibrary]]&lt;br /&gt;
&lt;br /&gt;
Create a new Library with any name, and then open it by clicking on its name. In this case, from the photo above, you could click on &amp;quot;Test2&amp;quot; or &amp;quot;Upload&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
This takes you to a page that will let you manage the data inside of that library. It should look like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Library explanation.png|ExplainingDataLibrary]]&lt;br /&gt;
&lt;br /&gt;
From here you can either add a folder to help manage your data, upload data to the library, or add the current data in the library to your History so that it can be accessed. We are going to upload data to the library. Clicking &amp;quot;+ Datasets&amp;quot; will expand a drop-down menu, from here, select &amp;quot;from User Directory&amp;quot;. This will allow you to upload any data to galaxy from that /bulk directory we moved your data to earlier on Beocat. That process looks something like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Import files.png|ImportToDataLibrary]]&lt;br /&gt;
&lt;br /&gt;
In this example, I just uploaded two empty text files. From here, Galaxy has to do some auto-magic to get the data to work inside Galaxy, updating the &amp;quot;state&amp;quot; of the data. I am not exactly sure what this involves or how long it might take; I would imagine not long. &lt;br /&gt;
&lt;br /&gt;
From here we finally need to publish the data to our history so that we can utilize it. I am going to import this data as a Dataset, though if a collection suits better, use that. In the photo below, I am uploading &amp;quot;iamanothertextfile.txt&amp;quot; to my new history called &amp;quot;IAmANewHistory&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Once that is added, a notification should pop up in the bottom right-hand corner of the browser. If not, navigate to the homepage and manually open the History your data was added to. From here you should be able to process your data like normal.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=ProposalDescription&amp;diff=1133</id>
		<title>ProposalDescription</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=ProposalDescription&amp;diff=1133"/>
		<updated>2025-06-26T13:42:46Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Description of Beocat for Proposals or Information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description of Beocat for Proposals or Information ==&lt;br /&gt;
&lt;br /&gt;
Below is a current description of Beocat Compute resources and availability that can be used within proposals or to provide information. If you have any questions regarding this, please contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] or reach out to us at beocat@cs.ksu.edu.&lt;br /&gt;
&lt;br /&gt;
=== Compute Resources Description (Updated Oct. 1 2024) ===&lt;br /&gt;
&lt;br /&gt;
Beocat, the K-State research computing cluster, is currently the largest academic supercomputer in Kansas. Its hardware includes nearly 400 researcher-funded computers, approximately 3.3PB of storage, ~10,000 processor cores on machines ranging from dual-processor Xeon e5 nodes with 128GB RAM and 100GbE to 128 core AMD nodes with 2TB RAM connected by 40-100 Gbps networks, and a total of 170 different GPUs ranging from GTX 1080ti's to NVIDIA L40S's. Beocat and its staff have provided tours demonstrating the value of K-State research and a high-tech look at our research facilities for over 3,000 participants, including USD383 StarBase, current and prospective students, funding agencies, faculty recruitment, and outreach activities. Classes supported include topics such as bioinformatics, business analytics, cybersecurity, data science, deep learning, economics, chemistry, and genetics. Beocat is supported by many NSF and university grants, and it acts as the central computing resource for multiple departments across campus. Beocat staff includes one full-time system administrator, a full-time applications scientist with a PhD in Physics and 35 years’ experience optimizing parallel programs and assisting researchers, and a part-time director. &lt;br /&gt;
&lt;br /&gt;
Beocat is available to any academic researcher in Kansas and their partners under the statewide KanShare MOU. Under current policy, heavy users are expected to buy in through adding computational or personnel resources for the cluster (condo computing). Their jobs, then, are given guaranteed priority on any contributed machines, and they have access to other resources in the cluster on an as-available basis. Thus, projects can preserve a guaranteed base level of computation while utilizing the larger cluster for major computations. Users can also purchase archival data storage as needed. Dr. Daniel Andresen is the K-State XSEDE Campus Champion in the event national-class computational resources are required.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1132</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1132"/>
		<updated>2025-06-26T13:42:06Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* How do I get help? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and his or her collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of RHEL Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
* We provide a short description of Beocat for the uses of a proposal or teaching here: [[ProposalDescription|Beocat Info]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different than running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page where we can adjust the fine-tuning. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
=== Online Documentation ===&lt;br /&gt;
&lt;br /&gt;
* Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/]&lt;br /&gt;
* Read about  [[Installed software]] and languages&lt;br /&gt;
* Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] and download the [[Media:Slurm-quick-reference.pdf|Slurm Quick Reference PDF]]&lt;br /&gt;
* Run interactive jobs with [[OpenOnDemand]]&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]]&lt;br /&gt;
* Big Data course on Beocat! [[BigDataOnBeocat]]&lt;br /&gt;
* Interested in web-based computational biology research? Check out [[GalaxyDocs|Galaxy!]]&lt;br /&gt;
* Looking to utilize the NRP (Nautilus cluster) namespace? Check out [[Nautilus|Nautilus on Beocat]]&lt;br /&gt;
&lt;br /&gt;
=== Training Videos and Slides ===&lt;br /&gt;
&lt;br /&gt;
* [https://www.youtube.com/watch?v=7NOB_HGQE0U Beocat Introduction] and [[Media:Beocat-Beoshock-Intro.pdf|slides]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=b_yawpwFRdk Linux and Bash Introduction] and [[Media:Linux-Intro-cheatsheet.pdf|Linux Quick Reference PDF]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=vcC-DURbH6c Advanced HPC Usage] and [[Media:HPC-Advanced-Usage.pdf|slides]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=inJbYdZacjs HPC Parallel Computing] and [[Media:HPC-Parallel-Computing.pdf|slides]]&lt;br /&gt;
&lt;br /&gt;
== Transferring data to Beocat ==&lt;br /&gt;
Transferring data to Beocat can be done in a variety of ways; we offer documentation on a few of them:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;b&amp;gt;With the recent changes to how K-State handles DUO authentication, we recommend you use Globus to transfer files in and out of Beocat&amp;lt;/b&amp;gt;&lt;br /&gt;
* [[Globus]] - Instructions on transferring files using [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''.&lt;br /&gt;
* [[LinuxBasics]] - Under the 'Transferring files (SCP or SFTP)' section, we have information regarding SCP and SFTP implementation.&lt;br /&gt;
* [[OpenOnDemand]] - We offer GUI based file management through OpenOnDemand&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] - We also offer the ability to transfer data to and from OneDrive&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler which will use the information in that job script to allocate the resources your job needs then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] or you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address or through TDX and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details about your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session as mentioned in our calendar down below. Alternatively, we can often schedule a time to meet with you individually. You just need to send us an e-mail and provide us with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] or email us at beocat@cs.ksu.edu, please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing resources to Beocat earns you priority access. In general, users contribute nodes to Beocat (the &amp;quot;Condo&amp;quot; model); their research group gets priority access to those nodes, plus elevated general priority on the rest of Beocat. If jobs from other researchers are occupying a contributed node, Slurm will immediately halt and reschedule those jobs to allow contributor access. Unused CPU time on the node is available to other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, and each user can apply for their own allocation if needed.&lt;br /&gt;
These resources let users run jobs when they cannot get enough&lt;br /&gt;
time on Beocat, and they are especially useful when Beocat lacks the needed&lt;br /&gt;
resources, such as the 4 TB memory nodes on Bridges2, additional&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[ACCESS|here]] to see what resources&lt;br /&gt;
are available and for directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe, using spare compute cycles donated to the project.  Beocat is&lt;br /&gt;
one of the systems that runs jobs from outside OSG users when our own users are&lt;br /&gt;
not fully utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1131</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1131"/>
		<updated>2025-06-26T13:41:23Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and their collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of RHEL Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
* We provide a short description of Beocat for use in a proposal or for teaching here: [[ProposalDescription|Beocat Info]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
=== Online Documentation ===&lt;br /&gt;
&lt;br /&gt;
* Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/]&lt;br /&gt;
* Read about  [[Installed software]] and languages&lt;br /&gt;
* Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] and download the [[Media:Slurm-quick-reference.pdf|Slurm Quick Reference PDF]]&lt;br /&gt;
* Run interactive jobs with [[OpenOnDemand]]&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]]&lt;br /&gt;
* Big Data course on Beocat! [[BigDataOnBeocat]]&lt;br /&gt;
* Interested in web-based computational biology research? Check out [[GalaxyDocs|Galaxy!]]&lt;br /&gt;
* Looking to utilize the NRP (Nautilus cluster) namespace? Check out [[Nautilus|Nautilus on Beocat]]&lt;br /&gt;
&lt;br /&gt;
=== Training Videos and Slides ===&lt;br /&gt;
&lt;br /&gt;
* [https://www.youtube.com/watch?v=7NOB_HGQE0U Beocat Introduction] and [[Media:Beocat-Beoshock-Intro.pdf|slides]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=b_yawpwFRdk Linux and Bash Introduction] and [[Media:Linux-Intro-cheatsheet.pdf|Linux Quick Reference PDF]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=vcC-DURbH6c Advanced HPC Usage] and [[Media:HPC-Advanced-Usage.pdf|slides]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=inJbYdZacjs HPC Parallel Computing] and [[Media:HPC-Parallel-Computing.pdf|slides]]&lt;br /&gt;
&lt;br /&gt;
== Transferring data to Beocat ==&lt;br /&gt;
Transferring data to Beocat can be done in a variety of ways; we offer documentation on a few of them:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;b&amp;gt;With the recent changes to how K-State handles Duo authentication, we recommend you use Globus to transfer files in and out of Beocat.&amp;lt;/b&amp;gt;&lt;br /&gt;
* [[Globus]] - Instructions on transferring files using [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''.&lt;br /&gt;
* [[LinuxBasics]] - Under the 'Transferring files (SCP or SFTP)' section, we have information regarding SCP and SFTP implementation.&lt;br /&gt;
* [[OpenOnDemand]] - We offer GUI based file management through OpenOnDemand&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] - We also offer the ability to transfer data to and from OneDrive&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which uses the information in that job script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
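The pages above give full details for each language; as a generic sketch of the pattern they share, a Slurm job script is just a shell script whose #SBATCH comment lines request resources (the job name and resource values below are illustrative only, not Beocat defaults; see [[SlurmBasics]] for Beocat's actual requirements):

```shell
#!/bin/bash
# Illustrative Slurm job script -- adjust the resource requests for your job.
#SBATCH --job-name=example       # name shown in the queue
#SBATCH --cpus-per-task=1        # number of cores
#SBATCH --mem-per-cpu=4G         # memory per core
#SBATCH --time=1:00:00           # wall-clock time limit (1 hour)

# The actual work goes below the #SBATCH lines.
echo "Running on $(hostname)"
```

Saved as (for example) myjob.sh, this would be submitted with sbatch myjob.sh and cancelled with scancel followed by the job ID.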
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] or you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address or through TDX and not to any of our staff directly. This ensures your support request gets entered into our tracker and that your questions are answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details of your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode, which can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session, as shown in our calendar below. Alternatively, we can often schedule a time to meet with you individually; just send us an e-mail with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you email us at beocathelp@ksu.edu or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket], please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing resources to Beocat earns you priority access. In general, users contribute nodes to Beocat (the &amp;quot;Condo&amp;quot; model); their research group gets priority access to those nodes, plus elevated general priority on the rest of Beocat. If jobs from other researchers are occupying a contributed node, Slurm will immediately halt and reschedule those jobs to allow contributor access. Unused CPU time on the node is available to other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, and each user can apply for their own allocation if needed.&lt;br /&gt;
These resources let users run jobs when they cannot get enough&lt;br /&gt;
time on Beocat, and they are especially useful when Beocat lacks the needed&lt;br /&gt;
resources, such as the 4 TB memory nodes on Bridges2, additional&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[ACCESS|here]] to see what resources&lt;br /&gt;
are available and for directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe, using spare compute cycles donated to the project.  Beocat is&lt;br /&gt;
one of the systems that runs jobs from outside OSG users when our own users are&lt;br /&gt;
not fully utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=GalaxyDocs&amp;diff=1130</id>
		<title>GalaxyDocs</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=GalaxyDocs&amp;diff=1130"/>
		<updated>2025-06-26T13:40:27Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Tool Requests */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Galaxy? ==&lt;br /&gt;
[https://galaxyproject.org/ Galaxy] is a scientific workflow, data integration, and data and analysis persistence and publishing platform that aims to make computational biology accessible to research scientists that do not have computer programming or systems administration experience.&lt;br /&gt;
&lt;br /&gt;
== How do I access Galaxy? == &lt;br /&gt;
Access to Beocat's local instance of Galaxy is easy. Simply navigate to [https://galaxy.beocat.ksu.edu/ https://galaxy.beocat.ksu.edu/] and sign in if prompted using the Keycloak login. This uses your Beocat eID and password to log in; you will also be prompted to authenticate against Duo.&lt;br /&gt;
&lt;br /&gt;
Please note that this utilizes Beocat's /bulk directory. This is a billed directory; as such, if usage in your Galaxy upload directory exceeds the billing threshold (1 TB), we will contact you regarding remediation.&lt;br /&gt;
&lt;br /&gt;
== Upload Larger Files to Galaxy ==&lt;br /&gt;
Have some larger files that need to be uploaded to Galaxy? We provide some documentation on how to upload files directly to Beocat and then import them to Galaxy for use. It can be found here: [[Galaxy_File_Upload| Galaxy File Upload]]&lt;br /&gt;
&lt;br /&gt;
== How do I use Galaxy? ==&lt;br /&gt;
&lt;br /&gt;
After accessing our local instance of Galaxy you should have access to all installed tools which will be in the tool panel on the left-hand side of your home screen on Galaxy. It should look like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Galaxy Toolbox.png|Tool Panel]]&lt;br /&gt;
&lt;br /&gt;
Each category (Get Data, Send Data, MetaGenomics Tools, etc.) is expandable to reveal subcategories that help organize the many installed tools. Each tool is placed under its respective category once it is installed.&lt;br /&gt;
&lt;br /&gt;
From here you can select a tool to use and submit it to the Slurm cluster. After opening a tool you will be given options (if available) to configure it, including its inputs, what the tool will do (within its constraints), its outputs, and the compute resources the tool will submit with. You can specify the resources given to Slurm jobs by expanding the drop-down menu &amp;quot;Job Resource Parameters&amp;quot; and selecting &amp;quot;Specify Job resource parameters&amp;quot;, which will show the following options to modify.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;Processors&amp;lt;/B&amp;gt;: Number of processing cores, the 'ppn' value (1-128); this is equivalent to Slurm's &amp;quot;--cpus-per-task&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;B&amp;gt;Memory&amp;lt;/B&amp;gt;: Memory size in gigabytes, the 'pmem' value (1-1500). Note that the job scheduler uses --mem-per-cpu to allocate memory for your Slurm job, so the number given for Memory is multiplied by the Processors count above; e.g. 2 Processors with 5 Memory yields 10 GB of memory.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;B&amp;gt;Priority Queue&amp;lt;/B&amp;gt;: If you have access to a priority queue and would like to use it, enter the partition name here.&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;B&amp;gt;Runtime&amp;lt;/B&amp;gt;: How long you want your job to run, in hours. &lt;br /&gt;
&lt;br /&gt;
If you do not change the job resource parameters, your job will be submitted with the default resources: 5 GB of memory, 1 CPU, 1 hour of runtime, and no priority queue.&lt;br /&gt;
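To make the memory arithmetic above concrete, the total allocation is simply Processors multiplied by Memory, as this small shell check illustrates:

```shell
# Galaxy's Memory field is per-CPU (--mem-per-cpu), so the total memory
# a job receives is Processors x Memory.
cpus=2           # "Processors" field
mem_per_cpu=5    # "Memory" field, in GB
total=$((cpus * mem_per_cpu))
echo "Total memory requested: ${total} GB"   # 2 x 5 = 10 GB
```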
&lt;br /&gt;
=== Canceling a running job ===&lt;br /&gt;
While Galaxy does submit to Slurm, you will not be able to cancel the job the way you typically would. To cancel an upcoming or currently running Galaxy job, simply press the trash-can icon next to the name of your job.&lt;br /&gt;
&lt;br /&gt;
== Tool Requests ==&lt;br /&gt;
If you are missing a specific tool and would like to have it added to Galaxy, please contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] with a link to the tool. Additionally, you can browse through Galaxy's own [https://toolshed.g2.bx.psu.edu/ toolshed] to make a recommendation.&lt;br /&gt;
&lt;br /&gt;
== Data Management ==&lt;br /&gt;
&lt;br /&gt;
Galaxy follows the typical costs for bulk data storage, as Galaxy utilizes /bulk/galaxy for storage. Bulk data storage is provided at a cost of $45/TB/year, billed monthly, with billing starting at 1 TB of usage. You can easily see how much data you are using in Galaxy by checking the top-right corner of the Galaxy 'home' page. This will say &amp;quot;Using ####MB/GB/TB&amp;quot;, above your histories.&lt;br /&gt;
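Since the $45/TB/year rate is billed monthly, the monthly charge per terabyte works out as follows (a simple sanity check, not an official billing formula):

```shell
# $45 per TB per year, billed monthly -> dollars per TB per month
awk 'BEGIN { printf "Monthly cost per TB: $%.2f\n", 45 / 12 }'
```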
&lt;br /&gt;
[[File:Galaxy_Data_usage_example.png|Usage Data]]&lt;br /&gt;
&lt;br /&gt;
Clicking on this usage will bring you to a storage dashboard where you can easily manage your files and derelict dataset histories.&lt;br /&gt;
&lt;br /&gt;
[[File:Storage_dashboard.png|Storage Dashboard]]&lt;br /&gt;
&lt;br /&gt;
== Requesting Help ==&lt;br /&gt;
To request help with [https://galaxy.beocat.ksu.edu/ https://galaxy.beocat.ksu.edu/], please contact &amp;lt;B&amp;gt;beocathelp@ksu.edu&amp;lt;/B&amp;gt;, or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket].&amp;lt;br&amp;gt;&lt;br /&gt;
When requesting help, it is best to give as much information as possible so that we may solve your issue in a timely manner.&lt;br /&gt;
&lt;br /&gt;
== Acknowledgements ==&lt;br /&gt;
Beocat's installation of UseGalaxy is funded through K-INBRE with an Institutional Development Award (IDeA) from the National Institute of General Medical Sciences of the National Institutes of Health under grant number P20GM103418. &lt;br /&gt;
&lt;br /&gt;
This initiative was started through the Data Science Core group to bring easy-to-use, GUI-based computational biology research to students and researchers at Kansas State University through Beocat.&lt;br /&gt;
&lt;br /&gt;
Additional information on K-INBRE can be found [https://www.k-inbre.org/pages/k-inbre_about_bio-core.html here]&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=RSICC&amp;diff=1129</id>
		<title>RSICC</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=RSICC&amp;diff=1129"/>
		<updated>2025-06-26T13:40:02Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== RSICC Codes ==&lt;br /&gt;
RSICC requires administrators of the system to be licensed for every code (and version) a user wishes to use on the system.&lt;br /&gt;
&lt;br /&gt;
Let us know which RSICC software you'd like to run on Beocat. We will have to become licensed for it before it can be put on the system.&lt;br /&gt;
&lt;br /&gt;
If it is RSICC software that we've already obtained, you '''must''' provide proof of a license by giving us an electronic copy of the email message from RSICC’s Request History tool or a copy of the user’s License Agreement.  The email generated by RSICC’s Request History tool lists all of the software the individual is licensed to use.  The Request History link is available on RSICC’s Customer Service webpage: https://rsicc.ornl.gov/CustomerService.aspx. Contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] or send those e-mails to beocat@cs.ksu.edu&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=1128</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=1128"/>
		<updated>2025-06-23T20:19:58Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Common Storage For Projects */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from OpenSSH&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from OpenSSH&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo Enabled, you will be asked to approve ''each'' connection through Duo's push system to your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device is not currently able to be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You would need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value can be a currently valid passcode from Duo, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will push the prompt to your smart device; &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have Duo call your phone number for approval.&lt;br /&gt;
&lt;br /&gt;
===== OpenSSH =====&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you would like, you can put this in your ~/.ssh/config file; the environment variable will then be sent to Beocat upon connection whenever it is set:&lt;br /&gt;
 Host headnode.beocat.ksu.edu&lt;br /&gt;
     HostName headnode.beocat.ksu.edu&lt;br /&gt;
     User YOUR_EID_GOES_HERE&lt;br /&gt;
     SendEnv DUO_PASSCODE&lt;br /&gt;
&lt;br /&gt;
From there you would simply do the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
export DUO_PASSCODE=push&lt;br /&gt;
ssh headnode.beocat.ksu.edu&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== PuTTY =====&lt;br /&gt;
In PuTTY, to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under Environment variables, enter '''&amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;''' beside ''Variable'' and '''&amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;''' beside ''Value''. Click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; and save so that PuTTY remembers this change.&lt;br /&gt;
&lt;br /&gt;
===== MobaXTerm =====&lt;br /&gt;
There doesn't seem to be a way to send an environment variable in MobaXterm, so you won't be able to set DUO_PASSCODE to an actual valid temporary passcode. To get MobaXterm to push automatically, you can edit your SSH session and, on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Common issues ====&lt;br /&gt;
; Duo Pushes sometimes don't show up in a timely manner. &lt;br /&gt;
: If you open the Duo MFA application on your smart device when you're expecting an authentication challenge, the prompts seem to show up faster.&lt;br /&gt;
; MobaXTerm has excessive prompts for managing files.&lt;br /&gt;
: MobaXterm has a sidebar browser for managing your files. Unfortunately, that sidebar browser initiates another SSH connection for every file transfer, which triggers a Duo push that you need to approve. MobaXterm's dedicated SFTP session doesn't have this issue: it initiates a connection, keeps it open, and re-uses it as needed, so you will have far fewer Duo approvals to respond to. If you choose to use the dedicated SFTP session, you might consider disabling the sidebar file browser: &amp;quot;Advanced SSH settings&amp;quot; -&amp;gt; &amp;quot;SSH-browser type&amp;quot; -&amp;gt; &amp;quot;None&amp;quot;&lt;br /&gt;
; WinSCP has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops, it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, WinSCP will try again. Miss enough prompts and Duo will lock your account. It may be best to disable [https://winscp.net/eng/docs/ui_pref_resume reconnections during idle periods] if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has auto-reconnect enabled by default.&lt;br /&gt;
: Auto-reconnect is a useful function when actively transferring files, but if you have an idle session and the connection drops it will reconnect, sending you a Duo MFA prompt. If you don't approve it soon enough, FileZilla will attempt it again. Miss enough prompts and Duo will lock your account. It may be best to disable timeouts and/or connection retries under the &amp;lt;tt&amp;gt;Edit -&amp;gt; Settings -&amp;gt; Connection&amp;lt;/tt&amp;gt; menu if you do not wish to be locked out of all services at K-State using Duo.&lt;br /&gt;
; FileZilla has excessive prompts for managing files.&lt;br /&gt;
: Filezilla opens one connection for browsing the system. Transferring files opens 1-4 additional connections when the transfers start. Once they finish, those connections disconnect. If you start additional transfers, new connections will be opened. Every one of those connections must be approved through Duo MFA on your smart device. You can adjust the number of connections that FileZilla opens for transfers if you like. &amp;lt;tt&amp;gt;File -&amp;gt; Site Manager -&amp;gt; (choose the site you're changing) -&amp;gt; Transfer Settings -&amp;gt; Limit number of simultaneous connections&amp;lt;/tt&amp;gt;.&lt;br /&gt;
: Another option is to disable processing the transfer queue, add the things to it you want to transfer and then re-enable the transfer queue. Then at least it will re-use the connections until the queue is empty.&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
; Fortran&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
; C/C++&lt;br /&gt;
: &amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Do Beocat jobs have a maximum time limit? ==&lt;br /&gt;
Yes: the scheduler will reject jobs longer than 28 days. The other side of that is that we reserve the right to hold a maintenance period every 14 days. Unless it is an emergency, we will give at least 2 weeks' notice before these maintenance periods actually occur. Jobs of 14 days or less that have started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
With that being said, there is no guarantee that any physical piece of hardware and the software that runs on it will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we always recommend that you write your jobs so that they can be resumed if they get interrupted.&lt;br /&gt;
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || NFS on top of ZFS || Faster than /homes or /bulk, built with all-NVMe disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || XFS || Good for I/O intensive jobs. Unique per job, culled when the job finishes.&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory&lt;br /&gt;
is your home directory, and it will be fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot;? ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled onto these &amp;quot;owned&amp;quot; machines by users outside the group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of that machine comes along and submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it uses a checkpointing algorithm it may complete even faster. The trade-off is that some applications need a significant amount of runtime and cannot resume from partial output, meaning a job may get restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there are a non-trivial amount of additional nodes that the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking the job killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state to restart a previous session) itself, it could cause your job to take longer to complete.&lt;br /&gt;
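Checkpointing does not have to be elaborate; recording progress to a file and resuming from it on restart is often enough. A minimal bash sketch (the checkpoint file name and step count are made up):&lt;br /&gt;

```shell
# Resume from the last recorded step if a checkpoint file exists
CKPT=loop.ckpt
start=1
if [ -f "$CKPT" ]; then start=$(cat "$CKPT"); fi

i=$start
while [ "$i" -le 5 ]; do
    echo "working on step $i"
    # Record the next step so a restarted job skips completed work
    echo $((i+1)) > "$CKPT"
    i=$((i+1))
done
```

If this job were killed and restarted, it would pick up at the first unfinished step instead of starting over.&lt;br /&gt;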
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; when that happens the job fails and we have to clean it up manually, so we enforce this rule. The warning message is there to inform you that the job script should have a line, in most cases #!/bin/tcsh or #!/bin/bash, indicating what program should be used to run the script. When the line is missing, your default shell is used to execute the script (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the error above, this one means you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it does not point to an executable file, so the script will not be able to run. Most likely you wanted #!/bin/bash instead of whatever is there now.&lt;br /&gt;
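A minimal job script with a valid #! line can be created and tested like this (the script name and body are just placeholders):&lt;br /&gt;

```shell
# Create a job script whose first line is a valid #! line
printf '#!/bin/bash\necho "running on $(hostname)"\n' > myjob.sh

# Make it executable and run it locally as a quick check
chmod u+x myjob.sh
./myjob.sh
```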
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SlurmBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
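You can also embed the time request in the job script itself using #SBATCH directives, which Slurm reads from comment lines at the top of the script (the script name and resource values below are illustrative):&lt;br /&gt;

```shell
# Write a job script with the requests embedded as #SBATCH directives
printf '%s\n' \
    '#!/bin/bash' \
    '#SBATCH --time=0-10:00:00' \
    '#SBATCH --mem-per-cpu=1G' \
    'echo "job started"' > timed_job.sh

# Running it directly shows the #SBATCH lines are ordinary comments;
# on Beocat you would submit it with: sbatch ./timed_job.sh
bash timed_job.sh
```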
&lt;br /&gt;
== Help! My error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to manually intervene. Extending a job prevents other users from using the node(s) it occupies, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some but not all of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would rather not see this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Help! When I use sbatch I get an error about line breaks ==&lt;br /&gt;
Beocat is a Linux system. Operating systems use certain patterns of characters to indicate line breaks in their files. Linux and operating systems like it use '\n' as their line break character. Windows uses '\r\n' for its line breaks.&lt;br /&gt;
&lt;br /&gt;
If you're getting an error that looks like this:&lt;br /&gt;
 sbatch: error: Batch script contains DOS line breaks (\r\n)&lt;br /&gt;
 sbatch: error: instead of expected UNIX line breaks (\n).&lt;br /&gt;
&lt;br /&gt;
It means that your script is using Windows line endings. You can convert it with the &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; command&lt;br /&gt;
 dos2unix myscript.sh&lt;br /&gt;
&lt;br /&gt;
It would probably be beneficial to configure your editor to save files with UNIX line breaks in the future.&lt;br /&gt;
* Visual Studio Code -- “Text Editor” &amp;gt; “Files” &amp;gt; “Eol”&lt;br /&gt;
* Notepad++ -- &amp;quot;Edit&amp;quot; &amp;gt; &amp;quot;EOL Conversion&amp;quot;&lt;br /&gt;
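If &amp;lt;tt&amp;gt;dos2unix&amp;lt;/tt&amp;gt; is not available on the machine where the file lives, &amp;lt;tt&amp;gt;sed&amp;lt;/tt&amp;gt; can do the same conversion. A small demonstration (the file name is made up):&lt;br /&gt;

```shell
# A script saved with Windows (\r\n) line endings:
printf 'echo ok\r\n' > winscript.sh

# Strip the trailing carriage return from every line, in place
sed -i 's/\r$//' winscript.sh

# The script now runs cleanly under bash
bash winscript.sh
```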
&lt;br /&gt;
== Help! When logging into OnDemand I get a '400 Bad request' message ==&lt;br /&gt;
Unfortunately, there are some known issues with OnDemand and how it handles some of the complexities behind the scenes. These involve browser cookies that occasionally grow too large, which causes these messages at login.&lt;br /&gt;
&lt;br /&gt;
The only workaround is to clear your browser cookies (although you can limit it to clearing just the ksu.edu ones).&lt;br /&gt;
&lt;br /&gt;
Details for specific browsers are below&lt;br /&gt;
&lt;br /&gt;
* [https://support.mozilla.org/en-US/kb/clear-cookies-and-site-data-firefox Firefox]&lt;br /&gt;
* [https://support.microsoft.com/en-us/microsoft-edge/delete-cookies-in-microsoft-edge-63947406-40ac-c3b8-57b9-2a946a29ae09 Edge]&lt;br /&gt;
* [https://support.google.com/chrome/answer/95647?sjid=1537101898131489753-NA#zippy=%2Cdelete-cookies-from-a-site Chrome]&lt;br /&gt;
* [https://support.apple.com/guide/safari/manage-cookies-sfri11471/mac Safari]&lt;br /&gt;
* If you are using some other browser, I would recommend searching Google for &amp;lt;tt&amp;gt;$browsername clear site cookies&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocathelp@ksu.edu or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket]. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed using our [[Group Management]] application.&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the shared directory. If you also want them to be able to write or modify files in that directory, change the ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'. As with other permissions, the individuals will need access through every level of the directory hierarchy. [[LinuxBasics#Access_Control_Lists|It may be best to review our more in-depth topic on Access Control Lists.]]&lt;br /&gt;
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple enough to call; for example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=SlurmBasics&amp;diff=1127</id>
		<title>SlurmBasics</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=SlurmBasics&amp;diff=1127"/>
		<updated>2025-06-23T20:19:13Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== The Rocky/Slurm nodes ==&lt;br /&gt;
&lt;br /&gt;
We converted Beocat from CentOS Linux to Rocky Linux on April 1st, 2024.  Any applications or libraries from the old system must be recompiled.&lt;br /&gt;
&lt;br /&gt;
=== Using Modules ===&lt;br /&gt;
&lt;br /&gt;
If you're using a common code that others may also be using, we may already have it compiled in a module.  You can list the modules available and load an application as in the example below for GROMACS.&lt;br /&gt;
&lt;br /&gt;
 eos&amp;gt;  &amp;lt;B&amp;gt;module avail&amp;lt;/B&amp;gt;&lt;br /&gt;
 eos&amp;gt;  &amp;lt;B&amp;gt;module load GROMACS&amp;lt;/B&amp;gt;&lt;br /&gt;
 eos&amp;gt;  &amp;lt;B&amp;gt;module list&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When a module gets loaded, all the necessary libraries are also loaded and the paths to the libraries and executables are automatically set up.  Loading GROMACS, for example, also loads the OpenMPI library needed to run it and adds the path to the MPI commands and GROMACS executables.   To see how the path is set up, try executing &amp;lt;B&amp;gt;&amp;lt;I&amp;gt;which gmx&amp;lt;/I&amp;gt;&amp;lt;/B&amp;gt;.  The module system allows you to easily switch between different versions of applications, libraries, or languages as well.&lt;br /&gt;
&lt;br /&gt;
If you are using a custom code or one that is not installed as a module, you'll need to recompile it yourself.  Much of that work just involves loading the necessary set of modules.  The first step is to decide whether to use the Intel compiler toolchain or the GNU toolchain, each of which includes the compilers and associated math libraries.  The module commands for each are below, and you can load these automatically when you log in by adding one of the module load statements to your .bashrc file.  See &amp;lt;B&amp;gt;/homes/daveturner/.bashrc&amp;lt;/B&amp;gt; as an example of where I put the module load statements.&lt;br /&gt;
&lt;br /&gt;
To load the Intel compiler tool chain including the Intel Math Kernel Library (and OpenMPI):&lt;br /&gt;
 icr-helios&amp;gt;  &amp;lt;B&amp;gt;module load iomkl&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load the GNU compiler tool chain including OpenMPI, OpenBLAS, FFTW, and ScalaPack load foss (free open source software):&lt;br /&gt;
 icr-helios&amp;gt;  &amp;lt;B&amp;gt;module load foss&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modules provide an easy way to set up the compilers and libraries you may need to compile your code.  Beyond that there are many different ways to compile codes so you'll just need to follow the directions.  If you need help you can always email us at &amp;lt;B&amp;gt;beocathelp@ksu.edu&amp;lt;/B&amp;gt; or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket].&lt;br /&gt;
&lt;br /&gt;
=== Submitting jobs to Slurm ===&lt;br /&gt;
&lt;br /&gt;
You can submit your job script using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
 icr-helios&amp;gt; &amp;lt;B&amp;gt;sbatch sbatch_script.sh&amp;lt;/B&amp;gt;&lt;br /&gt;
 icr-helios&amp;gt; &amp;lt;B&amp;gt;kstat  --me&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will submit the script and show you a list of your jobs that are running and the jobs you have in the queue.  By default the output for each job will go into a &amp;lt;B&amp;gt;slurm-###.out&amp;lt;/B&amp;gt; file where ### is the job ID number.  If you need to kill a job, you can use the &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; command with the job ID number.&lt;br /&gt;
&lt;br /&gt;
== Submitting your first job ==&lt;br /&gt;
To submit a job to run under Slurm, we use the &amp;lt;B&amp;gt;&amp;lt;I&amp;gt;sbatch&amp;lt;/I&amp;gt;&amp;lt;/B&amp;gt; (submit batch) command.  The scheduler finds the optimum place for your job to run. With over 300 nodes and 7500 cores to schedule, as well as differing priorities, hardware, and individual resources, the scheduler's job is not trivial and it can take some time for a job to start even when there are empty nodes available.&lt;br /&gt;
&lt;br /&gt;
There are a few things you'll need to know before running sbatch.&lt;br /&gt;
* How many cores you need. Note that unless your program is written to use multiple cores (called &amp;quot;threading&amp;quot;), asking for more cores will not speed up your job. This is a common misconception. '''Beocat will not magically make your program use multiple cores!''' For this reason the default is 1 core.&lt;br /&gt;
* How much time you need. Many users new to Beocat neglect to specify a time requirement; the default is one hour, and we then get asked why their job died after one hour. We usually point them to the [[FAQ]].&lt;br /&gt;
* How much memory you need. The default is 1 GB. If your job uses significantly more than you request, it will be killed off.&lt;br /&gt;
* Any advanced options. See the [[AdvancedSlurm]] page for these requests. For our basic examples here, we will ignore these.&lt;br /&gt;
&lt;br /&gt;
So let's now create a small script to test our ability to submit jobs. Create the following file (either by copying it to Beocat or by editing a text file) and name it &amp;lt;code&amp;gt;myhost.sh&amp;lt;/code&amp;gt;. Both of these methods are documented on our [[LinuxBasics]] page.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/sh&lt;br /&gt;
hostname&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Be sure to make it executable&lt;br /&gt;
 chmod u+x myhost.sh&lt;br /&gt;
&lt;br /&gt;
So, now let's submit it as a job and see what happens. Here I'm going to use five options&lt;br /&gt;
* &amp;lt;code&amp;gt;--mem-per-cpu=&amp;lt;/code&amp;gt; tells how much memory I need. In my example, I'm using our system minimum of 512 MB, which is more than enough. Note that your memory request is '''per core''', which doesn't make much difference for this example, but will as you submit more complex jobs.&lt;br /&gt;
* &amp;lt;code&amp;gt;--time=&amp;lt;/code&amp;gt; tells how much runtime I need. This can be in the form of &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. This is a very short job, so 1 minute should be plenty. This can't be changed after the job is started, so please make sure you have requested a sufficient amount of time.&lt;br /&gt;
* &amp;lt;code&amp;gt;--nodes=1&amp;lt;/code&amp;gt; tells Slurm that this must be run on one machine. The [[AdvancedSlurm]] page has much more on the &amp;quot;nodes&amp;quot; switch.&lt;br /&gt;
* &amp;lt;code&amp;gt;--ntasks=1&amp;lt;/code&amp;gt; tells Slurm to run a single task.&lt;br /&gt;
* &amp;lt;code&amp;gt;--cpus-per-task=1&amp;lt;/code&amp;gt; requests one core for that task.&lt;br /&gt;
&lt;br /&gt;
 % '''ls'''&lt;br /&gt;
 myhost.sh&lt;br /&gt;
 % '''sbatch --time=1 --mem-per-cpu=512M --cpus-per-task=1 --ntasks=1 --nodes=1 ./myhost.sh'''&lt;br /&gt;
 Submitted batch job 1483446&lt;br /&gt;
&lt;br /&gt;
Since this is such a small job, it is likely to be scheduled almost immediately, so a minute or so later, I now see&lt;br /&gt;
 % '''ls'''&lt;br /&gt;
 myhost.sh&lt;br /&gt;
 slurm-1483446.out&lt;br /&gt;
&lt;br /&gt;
 % '''cat slurm-1483446.out'''&lt;br /&gt;
 mage03&lt;br /&gt;
&lt;br /&gt;
== Monitoring Your Job ==&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;B&amp;gt;kstat&amp;lt;/B&amp;gt; perl script has been developed at K-State to provide you with all the available information about your jobs on Beocat.  &amp;lt;B&amp;gt;kstat --help&amp;lt;/B&amp;gt; will give you a full description of how to use it.&lt;br /&gt;
&lt;br /&gt;
 Eos&amp;gt;  kstat --help&lt;br /&gt;
  &lt;br /&gt;
  USAGE: kstat [-q] [-c] [-g] [-l] [-u user] [-p NaMD] [-j 1234567] [--part partition]&lt;br /&gt;
         kstat alone dumps all info except for the core summaries&lt;br /&gt;
         choose -q -c for only specific info on queued or core summaries.&lt;br /&gt;
         then specify any searchables for the user, program name, or job id&lt;br /&gt;
  &lt;br /&gt;
  kstat                 info on running and queued jobs&lt;br /&gt;
  kstat -h              list host info only, no jobs&lt;br /&gt;
  kstat -q              info on the queued jobs only&lt;br /&gt;
  kstat -c              core usage for each user&lt;br /&gt;
  kstat -d #            show jobs run in the last # days&lt;br /&gt;
                        Memory per node - used/allocated/requested&lt;br /&gt;
                        Red is close to or over requested amount&lt;br /&gt;
                        Yellow is under utilized for large jobs&lt;br /&gt;
  kstat -g              Only show GPU nodes&lt;br /&gt;
  kstat -o Turner       Only show info for a given owner&lt;br /&gt;
  kstat -o CS_HPC          Same but sub _ for spaces&lt;br /&gt;
  kstat -l              long list - node features and performance&lt;br /&gt;
                        Node hardware and node CPU usage&lt;br /&gt;
                        job nodelist and switchlist&lt;br /&gt;
                        job current and max memory&lt;br /&gt;
                        job CPU utilizations&lt;br /&gt;
  kstat -u daveturner   job info for one user only&lt;br /&gt;
  kstat --me            job info for my jobs only&lt;br /&gt;
  kstat -j 1234567      info on a given job id&lt;br /&gt;
  kstat --osg           show OSG background jobs also&lt;br /&gt;
  kstat --nocolor       do not use any color&lt;br /&gt;
  kstat --name          display full names instead of eIDs&lt;br /&gt;
  &lt;br /&gt;
  ---------------- Graphs and Tables ---------------------------------------&lt;br /&gt;
  Specify graph/table,  CPU or GPU or host, usage or memory, and optional time&lt;br /&gt;
  kstat --graph-cpu-memory #      gnuplot CPU memory for job #&lt;br /&gt;
  kstat --table-gpu-usage-5min #  GPU usage table every 5 min for job #&lt;br /&gt;
  kstat --table-cpu-60min #       CPU usage, memory, swap table every 60 min for job #&lt;br /&gt;
  kstat --table-node [nodename]   cores, load, CPU usage, memory table for a node&lt;br /&gt;
  &lt;br /&gt;
  --------------------------------------------------------------------------&lt;br /&gt;
    Multi-node jobs are highlighted in Magenta&lt;br /&gt;
       kstat -l also provides a node list and switch list&lt;br /&gt;
       highlighted in Yellow when nodes are spread across multiple switches&lt;br /&gt;
    Run time is colorized yellow then red for jobs nearing their time limit&lt;br /&gt;
    Queue time is colorized yellow then red for jobs waiting longer times&lt;br /&gt;
  --------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
kstat can be used to give you a summary of your jobs that are running and in the queue:&lt;br /&gt;
 &amp;lt;B&amp;gt;Eos&amp;gt;  kstat --me&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;&lt;br /&gt;
&amp;lt;font color=Brown&amp;gt;Hero43 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=Blue&amp;gt;24 of 24 cores &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt;Load 23.4 / 24 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=Red&amp;gt;495.3 / 512 GB used&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;font color=lightgreen&amp;gt;daveturner &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt;unafold &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1234567 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=cyan&amp;gt;1 core &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=green&amp;gt;running &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 4gb req &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 0 d  5 h 35 m &amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;font color=green&amp;gt;daveturner &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt;octopus &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1234568 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=cyan&amp;gt;16 core &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=green&amp;gt;running &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt; 128gb req &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 8 d 15 h 42 m &amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=green&amp;gt; ##################################   BeoCat Queue    ################################### &amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;font color=green&amp;gt;daveturner &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt;NetPIPE &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1234569 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=cyan&amp;gt;2 core &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt; PD &amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 2h &amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 4gb req &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 0 d 1 h 2 m &amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;kstat&amp;lt;/b&amp;gt; produces a separate line for each host.  Use &amp;lt;b&amp;gt;kstat -h&amp;lt;/b&amp;gt; to see information on all hosts without the jobs.&lt;br /&gt;
For the example above we are listing our jobs and the hosts they are on.&lt;br /&gt;
&lt;br /&gt;
Core usage - yellow for empty, red for empty on owned nodes, cyan for partially used, blue for all cores used.&amp;lt;BR&amp;gt;&lt;br /&gt;
Load level - yellow or yellow background indicates the node is being inefficiently used.  Red just means more threads than cores.&amp;lt;br&amp;gt;&lt;br /&gt;
Memory usage - yellow or red means most memory is used.&amp;lt;BR&amp;gt;&lt;br /&gt;
If the node is owned the group name will be in orange on the right.  Killable jobs can still be run on those nodes.&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each job line will contain the username, program name, job ID, number of cores, the status which may be colored red for killable jobs, &lt;br /&gt;
the maximum memory used or memory requested, and the amount of time the job has run.  &lt;br /&gt;
Jobs in the queue may contain information on the requested memory and run time, priority access, constraints, and&lt;br /&gt;
how long the job has been in the queue.&lt;br /&gt;
In this case, I have 2 jobs running on Hero43.  &amp;lt;i&amp;gt;unafold&amp;lt;/i&amp;gt; is using 1 core while &amp;lt;i&amp;gt;octopus&amp;lt;/i&amp;gt; is using 16 cores.  Slurm did not provide&lt;br /&gt;
any information on the actual memory use, so the memory request is reported instead.&lt;br /&gt;
&lt;br /&gt;
=== Detailed information about a single job ===&lt;br /&gt;
&lt;br /&gt;
kstat can provide a great deal of information on a particular job, including a very rough estimate of when it will run.  This estimate is a worst-case scenario&lt;br /&gt;
and will improve as other jobs finish early.  This is a good way to check for job submission problems before contacting us.  kstat colorizes the more important&lt;br /&gt;
information to make it easier to identify.&lt;br /&gt;
&lt;br /&gt;
 Eos&amp;gt;  kstat -j 157054&lt;br /&gt;
 &lt;br /&gt;
 ##################################   Beocat Queue    ###################################&lt;br /&gt;
  daveturner  netpipe     157054   64 cores  PD       dwarves fabric  CS HPC     8gb req   0 d  0 h  0 m&lt;br /&gt;
 &lt;br /&gt;
 JobId 157054  Job Name  netpipe&lt;br /&gt;
   UserId=daveturner GroupId=daveturner_users(2117) MCS_label=N/A&lt;br /&gt;
   Priority=11112 Nice=0 Account=ksu-cis-hpc QOS=normal&lt;br /&gt;
   Status=PENDING Reason=Resources Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0&lt;br /&gt;
   RunTime=00:00:00 TimeLimit=00:40:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2018-02-02T18:18:31 EligibleTime=2018-02-02T18:18:31&lt;br /&gt;
   Estimated Start Time is 2018-02-03T06:17:49 EndTime=2018-02-03T06:57:49 Deadline=N/A&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partitions killable.q,ksu-cis-hpc.q AllocNode:Sid=eos:1761&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=(null) SchedNodeList=dwarf[01-02]&lt;br /&gt;
   NumNodes=2-2 NumCPUs=64 NumTasks=64 CPUs/Task=1 ReqB:S:C:T=0:0:*:*&lt;br /&gt;
   TRES 2 nodes 64 cores 8192  mem gres/fabric 2&lt;br /&gt;
   Socks/Node=* NtasksPerN:B:S:C=32:0:*:* CoreSpec=*&lt;br /&gt;
   MinCPUsNode=32 MinMemoryNode=4G MinTmpDiskNode=0&lt;br /&gt;
   Constraint=dwarves DelayBoot=00:00:00&lt;br /&gt;
   Gres=fabric Reservation=(null)&lt;br /&gt;
   OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Slurm script  /homes/daveturner/perf/NetPIPE-5.x/sb.np&lt;br /&gt;
   WorkDir=/homes/daveturner/perf/NetPIPE-5.x&lt;br /&gt;
   StdErr=/homes/daveturner/perf/NetPIPE-5.x/0.o157054&lt;br /&gt;
   StdIn=/dev/null&lt;br /&gt;
   StdOut=/homes/daveturner/perf/NetPIPE-5.x/0.o157054&lt;br /&gt;
   Switches=1@00:05:00&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=netpipe&lt;br /&gt;
#SBATCH -o 0.o%j&lt;br /&gt;
#SBATCH --time=0:40:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --switches=1&lt;br /&gt;
#SBATCH --nodes=2&lt;br /&gt;
#SBATCH --constraint=dwarves&lt;br /&gt;
#SBATCH --ntasks-per-node=32&lt;br /&gt;
#SBATCH --gres=fabric:roce:1&lt;br /&gt;
&lt;br /&gt;
host=`echo $SLURM_JOB_NODELIST | sed s/[^a-z0-9]/\ /g | cut -f 1 -d ' '`&lt;br /&gt;
nprocs=$SLURM_NTASKS&lt;br /&gt;
openmpi_hostfile.pl $SLURM_JOB_NODELIST 1 hf.$host&lt;br /&gt;
opts=&amp;quot;--printhostnames --quick --pert 3&amp;quot;&lt;br /&gt;
&lt;br /&gt;
echo &amp;quot;*******************************************************************&amp;quot;&lt;br /&gt;
echo &amp;quot;Running on $SLURM_NNODES nodes $nprocs cores on nodes $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
echo &amp;quot;*******************************************************************&amp;quot;&lt;br /&gt;
&lt;br /&gt;
mpirun -np 2 --hostfile hf.$host NPmpi $opts -o np.${host}.mpi&lt;br /&gt;
mpirun -np 2 --hostfile hf.$host NPmpi $opts -o np.${host}.mpi.bi --async --bidir&lt;br /&gt;
mpirun -np $nprocs NPmpi $opts -o np.${host}.mpi$nprocs --async --bidir&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Completed jobs and memory usage ===&lt;br /&gt;
&lt;br /&gt;
 kstat -d #&lt;br /&gt;
&lt;br /&gt;
This will provide information on the jobs you have currently running and those that have completed&lt;br /&gt;
in the last '#' days.  This is currently the only reliable way to get the memory used per node for your job.&lt;br /&gt;
This also provides information on whether the job completed normally, was canceled with &amp;lt;I&amp;gt;scancel&amp;lt;/I&amp;gt;, &lt;br /&gt;
timed out, or was killed because it exceeded its memory request.&lt;br /&gt;
&lt;br /&gt;
 Eos&amp;gt;  kstat -d 10&lt;br /&gt;
&lt;br /&gt;
 ###########################  sacct -u daveturner  for 10 days  ###########################&lt;br /&gt;
                                      max gb used on a node /   gb requested per node&lt;br /&gt;
  193037   ADF         dwarf43           1 n  32 c   30.46gb/100gb    05:15:34  COMPLETED&lt;br /&gt;
  193289   ADF         dwarf33           1 n  32 c   26.42gb/100gb    00:50:43  CANCELLED&lt;br /&gt;
  195171   ADF         dwarf44           1 n  32 c   56.81gb/120gb    14:43:35  COMPLETED&lt;br /&gt;
  209518   matlab      dwarf36           1 n   1 c    0.00gb/  4gb    00:00:02  FAILED&lt;br /&gt;
&lt;br /&gt;
=== Summary of core usage ===&lt;br /&gt;
&lt;br /&gt;
kstat can also provide a listing of the core usage and cores requested for each user.&lt;br /&gt;
 Eos&amp;gt;  kstat -c&lt;br /&gt;
 &lt;br /&gt;
 ##############################   Core usage    ###############################&lt;br /&gt;
   antariksh       1512 cores   %25.1 used     41528 cores queued&lt;br /&gt;
   bahadori         432 cores   % 7.2 used        80 cores queued&lt;br /&gt;
   eegoetz            0 cores   % 0.0 used         2 cores queued&lt;br /&gt;
   fahrialkan        24 cores   % 0.4 used        32 cores queued&lt;br /&gt;
   gowri             66 cores   % 1.1 used        32 cores queued&lt;br /&gt;
   jeffcomer        160 cores   % 2.7 used         0 cores queued&lt;br /&gt;
   ldcoates12        80 cores   % 1.3 used       112 cores queued&lt;br /&gt;
   lukesteg         464 cores   % 7.7 used         0 cores queued&lt;br /&gt;
   mike5454        1060 cores   %17.6 used       852 cores queued&lt;br /&gt;
   nilusha          344 cores   % 5.7 used         0 cores queued&lt;br /&gt;
   nnshan2014       136 cores   % 2.3 used         0 cores queued&lt;br /&gt;
   ploetz           264 cores   % 4.4 used        60 cores queued&lt;br /&gt;
   sadish           812 cores   %13.5 used         0 cores queued&lt;br /&gt;
   sandung           72 cores   % 1.2 used        56 cores queued&lt;br /&gt;
   zhiguang          80 cores   % 1.3 used       688 cores queued&lt;br /&gt;
&lt;br /&gt;
=== Producing memory and CPU utilization tables and graphs ===&lt;br /&gt;
&lt;br /&gt;
kstat can now produce tables or graphs for the memory or CPU utilization&lt;br /&gt;
for a job.  In order to view graphs you must set up X11 forwarding on your&lt;br /&gt;
ssh connection by using the -X parameter.&lt;br /&gt;
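&lt;br /&gt;
For example, you can enable X11 forwarding when you connect (substitute your own eID for ''eid''):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Enable X11 forwarding so kstat can display graphs on your local screen&lt;br /&gt;
ssh -X eid@headnode.beocat.ksu.edu&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;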
&lt;br /&gt;
If you want to read more, continue on to our [[AdvancedSlurm]] page.&lt;br /&gt;
&lt;br /&gt;
=== kstat is now available to download and install on other clusters ===&lt;br /&gt;
&lt;br /&gt;
https://gitlab.beocat.ksu.edu/Admin-Public/kstat&lt;br /&gt;
&lt;br /&gt;
This software has been installed and used on several clusters for many years.&lt;br /&gt;
It should be considered beta software and may require slight modifications&lt;br /&gt;
to install on some clusters.  Please contact the author if you want to give&lt;br /&gt;
it a try (daveturner@ksu.edu).&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=1126</id>
		<title>AdvancedSlurm</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=1126"/>
		<updated>2025-06-23T20:18:45Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SlurmBasics]] page, we have a couple other requestable resources:&lt;br /&gt;
 Valid gres options are:&lt;br /&gt;
 gpu[[:type]:count]&lt;br /&gt;
 fabric[[:type]:count]&lt;br /&gt;
Generally, if you don't know if you need a particular resource, you should use the default. These can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;srun --gres=help&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Fabric ===&lt;br /&gt;
We currently offer 3 &amp;quot;fabrics&amp;quot; as request-able resources in Slurm. The &amp;quot;count&amp;quot; specified is the line-rate (in Gigabits-per-second) of the connection on the node.&lt;br /&gt;
==== Infiniband ====&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. InfiniBand does absolutely no good if running on a single machine. InfiniBand is a high-speed host-to-host communication fabric. It is (most-often) used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could run just fine, except that the submitter requested InfiniBand, and all the nodes with InfiniBand were currently busy. In fact, some of our fastest nodes do not have InfiniBand, so by requesting it when you don't need it, you are actually slowing down your job. To request Infiniband, add &amp;lt;tt&amp;gt;--gres=fabric:ib:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
==== ROCE ====&lt;br /&gt;
ROCE, like InfiniBand, is a high-speed host-to-host communication layer, again used most often with MPI. Most of our nodes are ROCE enabled, but requesting it lets you guarantee the nodes allocated to your job will be able to communicate with ROCE. To request ROCE, add &amp;lt;tt&amp;gt;--gres=fabric:roce:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
&lt;br /&gt;
==== Ethernet ====&lt;br /&gt;
Ethernet is another communication fabric. All of our nodes are connected by ethernet; this option simply allows you to specify the interconnect speed. Speeds are selected in units of Gbps, with all nodes supporting 1Gbps or above. The currently available speeds for ethernet are: &amp;lt;tt&amp;gt;1, 10, 40, and 100&amp;lt;/tt&amp;gt;. To select nodes with 40Gbps and above, you could specify &amp;lt;tt&amp;gt;--gres=fabric:eth:40&amp;lt;/tt&amp;gt; on your sbatch command-line.  Since ethernet is used to connect to the file server, this can be used to select nodes that have fast access for applications doing heavy IO.  The Dwarves and Heroes have 40 Gbps ethernet, and we measure single-stream performance as high as 20 Gbps.  If your application&lt;br /&gt;
requires heavy IO, you'll want to avoid the Moles, which are connected to the file server with only 1 Gbps ethernet.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. 'kstat -g' will show you the GPU nodes and the jobs running on them.  To request a GPU node, add &amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt; for example to request 1 GPU for your job; if your job uses multiple nodes, the number of GPUs requested is per-node.  You can also request a given type of GPU (kstat -g -l to show types) by using &amp;lt;tt&amp;gt;--gres=gpu:geforce_gtx_1080_ti:1&amp;lt;/tt&amp;gt; for a 1080Ti GPU on the Wizards or Dwarves, &amp;lt;tt&amp;gt;--gres=gpu:quadro_gp100:1&amp;lt;/tt&amp;gt; for the P100 GPUs on Wizard20-21 that are best for 64-bit codes like Vasp.  Most of these GPU nodes are owned by various groups.  If you want access to GPU nodes and your group does not own any, we can add you to the &amp;lt;tt&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/tt&amp;gt; group that has priority on Dwarf36-39.  For more information on compiling CUDA code click on this [[CUDA]] link.&lt;br /&gt;
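&lt;br /&gt;
As a minimal sketch (the program name is a placeholder), a single-GPU job script might contain:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --time=6:00:00&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
# Run the CUDA-enabled application on the allocated GPU&lt;br /&gt;
./my_cuda_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;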
&lt;br /&gt;
A listing of the current types of gpus can be gathered with this command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
scontrol show nodes | grep CfgTRES | tr ',' '\n' | awk -F '[:=]' '/gres\/gpu:/ { print $2 }' | sort -u&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
At the time of this writing, that command produces this list:&lt;br /&gt;
* geforce_gtx_1080_ti&lt;br /&gt;
* geforce_rtx_2080_ti&lt;br /&gt;
* geforce_rtx_3090&lt;br /&gt;
* l40s&lt;br /&gt;
* quadro_gp100&lt;br /&gt;
* rtx_a4000&lt;br /&gt;
* rtx_a6000&lt;br /&gt;
&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
''Intra''node jobs run on many cores in the same node. These jobs can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or any programming language that has the concept of ''threads''. Often, your program will need to know how many cores you want it to use, and many will use all available cores if not told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the sbatch directives '&amp;lt;tt&amp;gt;--nodes=1 --cpus-per-task=n&amp;lt;/tt&amp;gt;' or '&amp;lt;tt&amp;gt;--nodes=1 --ntasks-per-node=n&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $SLURM_CPUS_ON_NODE to tell how many cores you've been allocated.&lt;br /&gt;
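&lt;br /&gt;
As a minimal sketch (the program name is a placeholder), a single-node threaded job can pass its allocated core count to an OpenMP program through an environment variable:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --nodes=1 --cpus-per-task=8&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
# Use exactly the cores Slurm allocated rather than every core on the node&lt;br /&gt;
export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE&lt;br /&gt;
./my_threaded_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;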
&lt;br /&gt;
=== Internode (MPI) jobs ===&lt;br /&gt;
''Inter''node jobs can utilize many cores on one or more nodes. Communicating between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI, but MPI also allows an application to run on multiple cores within a node. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;--cpus-per-task=''n''&amp;lt;/tt&amp;gt;', you would use '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks-per-node=''m''&amp;lt;/tt&amp;gt;' ''or'' '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks=''o''&amp;lt;/tt&amp;gt;' for your sbatch request, where ''n'' is the number of nodes you want, ''m'' is the number of cores per node you need, and ''o'' is the total number of cores you need.&lt;br /&gt;
&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=6 --ntasks-per-node=4&amp;lt;/tt&amp;gt; will give you 4 cores on each of 6 nodes for a total of 24 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=40&amp;lt;/tt&amp;gt; will give you 40 cores spread across any number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=10 --ntasks=100&amp;lt;/tt&amp;gt; will give you a total of 100 cores across 10 nodes.&lt;br /&gt;
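&lt;br /&gt;
Putting this together, a minimal MPI submit script (the program name is a placeholder) might look like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=mpi_example&lt;br /&gt;
#SBATCH --nodes=2 --ntasks-per-node=4&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
# mpirun picks the task count and node list up from the Slurm allocation&lt;br /&gt;
mpirun ./my_mpi_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;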
&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
Memory requests are easiest when they are specified '''per core'''. For instance, if you specified the following: '&amp;lt;tt&amp;gt;--ntasks=20 --mem-per-cpu=20G&amp;lt;/tt&amp;gt;', your job would have access to 400GB of memory total.&lt;br /&gt;
== Other Handy Slurm Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs, aside from resource requests, is to have Slurm email you when a job changes its status. This may require two directives to sbatch:  &amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt;.&lt;br /&gt;
==== --mail-type ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; is used to tell Slurm to notify you about certain conditions. Options are comma separated and include the following&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Option!!Explanation&lt;br /&gt;
|-&lt;br /&gt;
| NONE || This disables event-based mail&lt;br /&gt;
|-&lt;br /&gt;
| BEGIN || Sends a notification when the job begins&lt;br /&gt;
|-&lt;br /&gt;
| END || Sends a notification when the job ends&lt;br /&gt;
|-&lt;br /&gt;
| FAIL || Sends a notification when the job fails.&lt;br /&gt;
|-&lt;br /&gt;
| REQUEUE || Sends a notification if the job is put back into the queue from a running state&lt;br /&gt;
|-&lt;br /&gt;
| STAGE_OUT || Burst buffer stage out and teardown completed&lt;br /&gt;
|-&lt;br /&gt;
| ALL || Equivalent to BEGIN,END,FAIL,REQUEUE,STAGE_OUT&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT || Notifies if the job ran out of time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_90 || Notifies when the job has used 90% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_80 || Notifies when the job has used 80% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_50 || Notifies when the job has used 50% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| ARRAY_TASKS || Modifies the BEGIN, END, and FAIL options to apply to each array task (instead of notifying for the entire job)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== --mail-user ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; is optional. It is only needed if you intend to send these job status updates to a different e-mail address than what you provided in the [https://acount.beocat.ksu.edu/user Account Request Page]. It is specified with the following arguments to sbatch: &amp;lt;tt&amp;gt;--mail-user=someone@somecompany.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-J ''JobName''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
&lt;br /&gt;
=== Separating Output Streams ===&lt;br /&gt;
Normally, Slurm will create one output file, containing both STDERR and STDOUT. If you want both of these to be separated into two files, you can use the sbatch directives '&amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt;' and '&amp;lt;tt&amp;gt;--error&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! option !! default !! example&lt;br /&gt;
|-&lt;br /&gt;
| --output || slurm-%j.out || slurm-206.out&lt;br /&gt;
|-&lt;br /&gt;
| --error || slurm-%j.out || slurm-206.out&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;tt&amp;gt;%j&amp;lt;/tt&amp;gt; above indicates that it should be replaced with the job id.&lt;br /&gt;
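&lt;br /&gt;
For example, a job named MyJob could send its two streams to separately named files:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# %j is replaced with the job id, keeping files from different runs separate&lt;br /&gt;
#SBATCH -J MyJob&lt;br /&gt;
#SBATCH --output=MyJob.o%j&lt;br /&gt;
#SBATCH --error=MyJob.e%j&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;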
&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, Slurm runs your job from the directory you submitted it from. If your job needs to run from a different directory, use the '&amp;lt;tt&amp;gt;--chdir=''directory''&amp;lt;/tt&amp;gt;' directive to set the working directory.&lt;br /&gt;
=== Running in a specific class of machine ===&lt;br /&gt;
If you want to run on a specific class of machines, e.g., the Dwarves, you can add the flag &amp;quot;--constraint=dwarves&amp;quot; to select any of those machines.&lt;br /&gt;
&lt;br /&gt;
=== Processor Constraints ===&lt;br /&gt;
Because Beocat is a heterogeneous cluster (we have machines from many years in the cluster), not all of our processors support every new and fancy feature. You might have some applications that require newer processor features, so we provide a mechanism to request those.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--constraint&amp;lt;/tt&amp;gt; tells the cluster to apply constraints to the types of nodes that the job can run on. For instance, we know of several applications that must be run on chips that have &amp;quot;AVX&amp;quot; processor extensions. To do that, you would specify &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; on your ''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt;'' '''or''' ''&amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;'' command lines.&lt;br /&gt;
Using &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; will prohibit your job from running on the Mages while &amp;lt;tt&amp;gt;--constraint=avx2&amp;lt;/tt&amp;gt; will eliminate the Elves as well as the Mages.&lt;br /&gt;
&lt;br /&gt;
=== Slurm Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to setup your scripts correctly. Here is a listing of environment variables that Slurm makes available to you. Of course the value of these variables will be different based on many different factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
CUDA_VISIBLE_DEVICES=NoDevFiles&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
GPU_DEVICE_ORDINAL=NoDevFiles&lt;br /&gt;
HOSTNAME=dwarf37&lt;br /&gt;
SLURM_CHECKPOINT_IMAGE_DIR=/var/slurm/checkpoint&lt;br /&gt;
SLURM_CLUSTER_NAME=beocat&lt;br /&gt;
SLURM_CPUS_ON_NODE=1&lt;br /&gt;
SLURM_DISTRIBUTION=cyclic&lt;br /&gt;
SLURMD_NODENAME=dwarf37&lt;br /&gt;
SLURM_GTIDS=0&lt;br /&gt;
SLURM_JOB_CPUS_PER_NODE=1&lt;br /&gt;
SLURM_JOB_GID=163587&lt;br /&gt;
SLURM_JOB_ID=202&lt;br /&gt;
SLURM_JOBID=202&lt;br /&gt;
SLURM_JOB_NAME=slurm_simple.sh&lt;br /&gt;
SLURM_JOB_NODELIST=dwarf37&lt;br /&gt;
SLURM_JOB_NUM_NODES=1&lt;br /&gt;
SLURM_JOB_PARTITION=batch.q,killable.q&lt;br /&gt;
SLURM_JOB_QOS=normal&lt;br /&gt;
SLURM_JOB_UID=163587&lt;br /&gt;
SLURM_JOB_USER=mozes&lt;br /&gt;
SLURM_LAUNCH_NODE_IPADDR=10.5.16.37&lt;br /&gt;
SLURM_LOCALID=0&lt;br /&gt;
SLURM_MEM_PER_NODE=1024&lt;br /&gt;
SLURM_NNODES=1&lt;br /&gt;
SLURM_NODEID=0&lt;br /&gt;
SLURM_NODELIST=dwarf37&lt;br /&gt;
SLURM_NPROCS=1&lt;br /&gt;
SLURM_NTASKS=1&lt;br /&gt;
SLURM_PRIO_PROCESS=0&lt;br /&gt;
SLURM_PROCID=0&lt;br /&gt;
SLURM_SRUN_COMM_HOST=10.5.16.37&lt;br /&gt;
SLURM_SRUN_COMM_PORT=37975&lt;br /&gt;
SLURM_STEP_ID=0&lt;br /&gt;
SLURM_STEPID=0&lt;br /&gt;
SLURM_STEP_LAUNCHER_PORT=37975&lt;br /&gt;
SLURM_STEP_NODELIST=dwarf37&lt;br /&gt;
SLURM_STEP_NUM_NODES=1&lt;br /&gt;
SLURM_STEP_NUM_TASKS=1&lt;br /&gt;
SLURM_STEP_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_SUBMIT_DIR=/homes/mozes&lt;br /&gt;
SLURM_SUBMIT_HOST=dwarf37&lt;br /&gt;
SLURM_TASK_PID=23408&lt;br /&gt;
SLURM_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_TOPOLOGY_ADDR=due1121-prod-core-40g-a1,due1121-prod-core-40g-c1.due1121-prod-sw-100g-a9.dwarf37&lt;br /&gt;
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node&lt;br /&gt;
SLURM_UMASK=0022&lt;br /&gt;
SRUN_DEBUG=3&lt;br /&gt;
TERM=screen-256color&lt;br /&gt;
TMPDIR=/tmp&lt;br /&gt;
USER=mozes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is nice to know what hosts you have access to during a job; check SLURM_JOB_NODELIST for that. There are lots of useful environment variables here, and I will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly-used variables we see used are $SLURM_CPUS_ON_NODE, $HOSTNAME, and $SLURM_JOB_ID.&lt;br /&gt;
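&lt;br /&gt;
For instance, the compact SLURM_JOB_NODELIST (such as dwarf[01-02]) can be expanded into one hostname per line inside your script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Expand the compact nodelist into individual hostnames&lt;br /&gt;
scontrol show hostnames $SLURM_JOB_NODELIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;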
&lt;br /&gt;
== Running from a sbatch Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'sbatch --mem-per-cpu=2G --time=10:00 --cpus-per-task=8 -J MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which outlines all of these for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample sbatch script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of the line is ignored. However, in&lt;br /&gt;
## the case of sbatch, lines beginning with #SBATCH are commands for sbatch&lt;br /&gt;
## itself, so I have taken the convention here of starting *every* line with a&lt;br /&gt;
## '#'. Just delete the first one if you want to use that line, and then modify&lt;br /&gt;
## it to your own purposes. The only exception here is the first line, which&lt;br /&gt;
## *must* be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## There is one strict rule for guaranteeing Slurm reads all of your options:&lt;br /&gt;
## Do not put *any* lines above your resource requests that aren't either:&lt;br /&gt;
##    1) blank. (no other characters)&lt;br /&gt;
##    2) comments (lines must begin with '#')&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##SBATCH --mem-per-cpu=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime in DD-HH:MM:SS form. Default is 1 hour (1:00:00)&lt;br /&gt;
##SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it.&lt;br /&gt;
##SBATCH --gres=fabric:ib:1&lt;br /&gt;
&lt;br /&gt;
## GPU directive. If You don't know what this is, you probably don't need it&lt;br /&gt;
##SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
## number of cores/nodes:&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from emailing us at beocathelp@ksu.edu or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket]&lt;br /&gt;
## to see how we can assist in getting your &lt;br /&gt;
## job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
##SBATCH --cpus-per-task=1&lt;br /&gt;
##SBATCH --cpus-per-task=12&lt;br /&gt;
##SBATCH --nodes=2 --ntasks-per-node=1&lt;br /&gt;
##SBATCH --ntasks=20&lt;br /&gt;
&lt;br /&gt;
## Constraints for this job. Maybe you need to run on the elves&lt;br /&gt;
##SBATCH --constraint=elves&lt;br /&gt;
## or perhaps you just need avx processor extensions&lt;br /&gt;
##SBATCH --constraint=avx&lt;br /&gt;
&lt;br /&gt;
## Output file name. Default is slurm-%j.out where %j is the job id.&lt;br /&gt;
##SBATCH --output=MyJobTitle.o%j&lt;br /&gt;
&lt;br /&gt;
## Split the errors into a separate file. Default is the same as output&lt;br /&gt;
##SBATCH --error=MyJobTitle.e%j&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##SBATCH -J MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## Send email when certain criteria are met.&lt;br /&gt;
## Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to&lt;br /&gt;
## BEGIN, END, FAIL, REQUEUE,  and  STAGE_OUT),  STAGE_OUT  (burst buffer stage&lt;br /&gt;
## out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent&lt;br /&gt;
## of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),&lt;br /&gt;
## TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send&lt;br /&gt;
## emails for each array task). Multiple type values may be specified in a&lt;br /&gt;
## comma separated list. Unless the  ARRAY_TASKS  option  is specified, mail&lt;br /&gt;
## notifications on job BEGIN, END and FAIL apply to a job array as a whole&lt;br /&gt;
## rather than generating individual email messages for each task in the job&lt;br /&gt;
## array.&lt;br /&gt;
##SBATCH --mail-type=ALL&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
## Default is to send the mail to the e-mail address entered on the account&lt;br /&gt;
## request form.&lt;br /&gt;
##SBATCH --mail-user=myemail@ksu.edu&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== File Access ==&lt;br /&gt;
Beocat has a variety of options for storing and accessing your files.  &lt;br /&gt;
Every user has a home directory for general use; it is limited in size but has decent file access performance.  Those needing more storage may purchase /bulk subdirectories, which have the same decent performance&lt;br /&gt;
but are not backed up.  The /fastscratch file system is a ZFS host with many NVMe drives that provides much faster&lt;br /&gt;
temporary file access.  When fast IO is critical to application performance, /fastscratch, the local disk on each node, or a&lt;br /&gt;
RAM disk are the best options.&lt;br /&gt;
&lt;br /&gt;
===Home directory===&lt;br /&gt;
&lt;br /&gt;
Every user has a &amp;lt;tt&amp;gt;/homes/''username''&amp;lt;/tt&amp;gt; directory that they drop into when they log into Beocat.  &lt;br /&gt;
The home directory is for general use and provides decent performance for most file IO.  &lt;br /&gt;
Disk space in each home directory is limited to 1 TB, so larger files should be kept in a purchased /bulk&lt;br /&gt;
directory, and there is a limit of 100,000 files in each subdirectory in your account.&lt;br /&gt;
This file system is fully redundant, so 3 specific hard disks would need to fail before any data was lost.&lt;br /&gt;
All files will soon be backed up nightly to a separate file server in Nichols Hall, so if you do accidentally &lt;br /&gt;
delete something it can be recovered.&lt;br /&gt;
&lt;br /&gt;
===Bulk directory===&lt;br /&gt;
&lt;br /&gt;
Bulk data storage may be provided at a cost of $45/TB/year billed monthly. Due to the cost, directories will be provided when we are contacted and provided with payment information.&lt;br /&gt;
&lt;br /&gt;
===Fast Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /fastscratch file system is faster than /bulk or /homes.&lt;br /&gt;
In order to use fastscratch, you first need to make a directory for yourself.  &lt;br /&gt;
Fast Scratch is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the fastscratch file system may get purged after 30 days.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /fastscratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Local disk===&lt;br /&gt;
&lt;br /&gt;
If you are running on a single node, it may also be faster to access your files from the local disk&lt;br /&gt;
on that node.  Each job creates a subdirectory /tmp/job# where '#' is the job ID number on the&lt;br /&gt;
local disk of each node the job uses.  This can be accessed simply by writing to /tmp rather than&lt;br /&gt;
needing to use /tmp/job#.  &lt;br /&gt;
&lt;br /&gt;
You may need to copy files to&lt;br /&gt;
the local disk at the start of your script, or point your application's output directory&lt;br /&gt;
at the local disk.  Copy any files you want to keep off the local disk before&lt;br /&gt;
the job finishes, since Slurm will remove all files in your job's directory on /tmp on completion&lt;br /&gt;
of the job or when it aborts.  Use 'kstat -l -h' to see how much /tmp space is available on each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files to the tmp directory if needed&lt;br /&gt;
cp $input_files /tmp&lt;br /&gt;
&lt;br /&gt;
# Make an 'out' directory to pass to the app if needed&lt;br /&gt;
mkdir /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Example of running an app and passing the tmp directory in/out&lt;br /&gt;
app -input_directory /tmp -output_directory /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Copy the 'out' directory back to the current working directory after the run&lt;br /&gt;
cp -rp /tmp/out .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAM disk===&lt;br /&gt;
&lt;br /&gt;
If you need ultrafast access to files, you can use a RAM disk which is a file system set up in the &lt;br /&gt;
memory of the compute node you are running on.  The RAM disk is limited to the requested memory on that node, so you should account for this usage when you request &lt;br /&gt;
memory for your job. Below is an example of how to use the RAM disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files over if necessary&lt;br /&gt;
cp $any_input_files /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Run the application, possibly giving it the path to the RAM disk to use for output files&lt;br /&gt;
app -output_directory /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Copy files from the RAM disk to the current working directory and clean it up&lt;br /&gt;
cp /dev/shm/* .&lt;br /&gt;
rm -rf /dev/shm/*&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When you leave KSU===&lt;br /&gt;
&lt;br /&gt;
If you are done with your account and leaving KSU, please clean up your directory, move any files&lt;br /&gt;
to your supervisor's account that need to be kept after you leave, and notify us so that we can disable your&lt;br /&gt;
account.  The easiest way to move your files to your supervisor's account is for them to set up&lt;br /&gt;
a subdirectory for you with the appropriate write permissions.  The example below shows moving &lt;br /&gt;
just a user's 'data' subdirectory to their supervisor.  The 'nohup' command is used so that the move will &lt;br /&gt;
continue even if the window you are doing the move from gets disconnected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Supervisor:&lt;br /&gt;
mkdir /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$USER:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -m u:$STUDENT_USERNAME:rwX -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&lt;br /&gt;
# Student:&lt;br /&gt;
nohup mv /homes/$USER/data /bulk/$SUPERVISOR_USERNAME/$USER &amp;amp;&lt;br /&gt;
&lt;br /&gt;
# Once the move is complete, the Supervisor should limit the permissions for the directory again by removing the student's access:&lt;br /&gt;
chown $USER: -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -d -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
setfacl -x u:$STUDENT_USERNAME -R /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==File Sharing==&lt;br /&gt;
&lt;br /&gt;
This section will cover methods of sharing files with other users within Beocat and on remote systems.&lt;br /&gt;
In the past, Beocat users have been allowed to keep their&lt;br /&gt;
/homes and /bulk directories open so that any other user could&lt;br /&gt;
access files.  In order to bring Beocat into alignment with&lt;br /&gt;
State of Kansas regulations and industry norms, all users must now have their /homes, /bulk, /scratch, and /fastscratch directories&lt;br /&gt;
locked down from other users.  Users can still share files and directories within their group or with individual users&lt;br /&gt;
using group and individual ACLs (Access Control Lists), which will be explained below.&lt;br /&gt;
Beocat staff will be exempted from this&lt;br /&gt;
policy as we need to work freely with all users and will manage our&lt;br /&gt;
subdirectories to minimize access.&lt;br /&gt;
&lt;br /&gt;
===Securing your home directory with the setacls script===&lt;br /&gt;
&lt;br /&gt;
If you do not wish to share files or directories with other users, you do not need to do anything,&lt;br /&gt;
as rwx access for others has already been removed.&lt;br /&gt;
If you want to share files or directories you can either use the '''setacls''' script or configure&lt;br /&gt;
the ACLs (Access Control Lists) manually.&lt;br /&gt;
&lt;br /&gt;
Running '''setacls -h''' will show how to use the script.&lt;br /&gt;
  &lt;br /&gt;
  Eos: setacls -h&lt;br /&gt;
  setacls [-r] [-w] [-g group] [-u user] -d /full/path/to/directory&lt;br /&gt;
  Execute permission will always be applied, you may also choose r or w&lt;br /&gt;
  Must specify at least one group or user&lt;br /&gt;
  Must specify at least one directory, and it must be the full path&lt;br /&gt;
  Example: setacls -r -g ksu-cis-hpc -u mozes -d /homes/daveturner/shared_dir&lt;br /&gt;
&lt;br /&gt;
You can specify the permissions to be either -r for read or -w for write or you can specify both.&lt;br /&gt;
You can provide a priority group to share with, which is the same as the group used in a --partition=&lt;br /&gt;
statement in a job submission script.  You can also specify users.&lt;br /&gt;
You can specify a file or a directory to share.  If a directory is specified then all files in that&lt;br /&gt;
directory will also be shared, and all files created in the directory later will also be shared.&lt;br /&gt;
&lt;br /&gt;
The script will set everything up for you, telling you the commands it is executing along the way,&lt;br /&gt;
then show the resulting ACLs at the end with the '''getfacl''' command.  Below is an example of &lt;br /&gt;
sharing the directory '''test_directory''' in my /bulk/daveturner directory with Nathan.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight&amp;gt;&lt;br /&gt;
Beocat&amp;gt;  cd /bulk/daveturner&lt;br /&gt;
Beocat&amp;gt;  mkdir test_directory&lt;br /&gt;
Beocat&amp;gt;  setacls -r -w -u nathanrwells -d /bulk/daveturner/test_directory&lt;br /&gt;
&lt;br /&gt;
Opening up base directory /bulk/daveturner with X execute permission only&lt;br /&gt;
  setfacl -m u:nathanrwells:X /bulk/daveturner&lt;br /&gt;
&lt;br /&gt;
Setting Xrw for directory/file /bulk/daveturner/test_directory&lt;br /&gt;
  setfacl -m u:nathanrwells:Xrw -R /bulk/daveturner/test_directory&lt;br /&gt;
  setfacl -d -m u:nathanrwells:Xrw -R /bulk/daveturner/test_directory&lt;br /&gt;
&lt;br /&gt;
The ACLs on directory /bulk/daveturner/test_directory are set to:&lt;br /&gt;
&lt;br /&gt;
getfacl: Removing leading '/' from absolute path names&lt;br /&gt;
# file: bulk/daveturner/test_directory&lt;br /&gt;
USER   daveturner        rwx  rwx&lt;br /&gt;
user   nathanrwells      rwx  rwx&lt;br /&gt;
GROUP  daveturner_users  r-x  r-x&lt;br /&gt;
group  beocat_support    r-x  r-x&lt;br /&gt;
mask                     rwx  rwx&lt;br /&gt;
other                    ---  ---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The '''getfacl''' run by the script now shows that user '''nathanrwells''' has &lt;br /&gt;
read and write permissions to that directory and execute access to all directories&lt;br /&gt;
leading up to it.&lt;br /&gt;
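&lt;br /&gt;
If you later want to revoke this access, the script does not currently do that for you, but you can remove the ACL entries manually with '''setfacl -x'''.  A sketch using the same names as the example above:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Remove the default and actual ACL entries for the user from the shared directory&lt;br /&gt;
setfacl -d -x u:nathanrwells -R /bulk/daveturner/test_directory&lt;br /&gt;
setfacl -x u:nathanrwells -R /bulk/daveturner/test_directory&lt;br /&gt;
&lt;br /&gt;
# Remove the execute-only entry on the base directory as well&lt;br /&gt;
setfacl -x u:nathanrwells /bulk/daveturner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;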
&lt;br /&gt;
====Manually configuring your ACLs====&lt;br /&gt;
&lt;br /&gt;
If you want to manually configure the ACLs you can use the directions below to do what the '''setacls''' &lt;br /&gt;
script would do for you.&lt;br /&gt;
You first need to provide the minimum execute access to your /homes&lt;br /&gt;
or /bulk directory before sharing individual subdirectories.  Setting the ACL to execute only will allow those &lt;br /&gt;
in your group to get access to subdirectories, while omitting read access means they will not&lt;br /&gt;
be able to list the other files or subdirectories in your main directory.  Keep in mind that they can still access those&lt;br /&gt;
if they know the names, so you may want to lock them down individually.  Below is an example of how I would change my&lt;br /&gt;
/homes/daveturner directory to allow ksu-cis-hpc group execute access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:X /homes/daveturner&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your research group owns any nodes on Beocat, then you have a group name that can be used to securely share&lt;br /&gt;
files with others within your group.  Below is an example of creating a directory called 'share_hpc', &lt;br /&gt;
then providing access to my ksu-cis-hpc group&lt;br /&gt;
(my group is ksu-cis-hpc so I submit jobs to --partition=ksu-cis-hpc.q).&lt;br /&gt;
Using -R applies these changes recursively to all files and directories in that subdirectory, while setting the defaults with the setfacl -d command ensures that files and directories created&lt;br /&gt;
later will get these same ACLs.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc&lt;br /&gt;
# ACLs are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the 'share_hpc' directory.  If you also want&lt;br /&gt;
them to be able to write or modify files in that directory then change the ':rX' to ':rwX' instead. e.g. 'setfacl -d -m g:ksu-cis-hpc:rwX -R share_hpc'&lt;br /&gt;
&lt;br /&gt;
If you want to know what groups you belong to use the line below.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
groups&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If your group does not own any nodes, you can still request a group name and manage the participants yourself&lt;br /&gt;
by emailing us at beocathelp@ksu.edu or contacting Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket].&lt;br /&gt;
If you want to share a directory with only a few people you can manage your ACLs using individual usernames&lt;br /&gt;
instead of with a group.&lt;br /&gt;
&lt;br /&gt;
You can use the '''getfacl''' command to see which groups and users have access to a given directory.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
getfacl share_hpc&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
ACLs give you great flexibility in controlling file access at the&lt;br /&gt;
group level.  Below is a more advanced example where I set up a directory to be shared with&lt;br /&gt;
my ksu-cis-hpc group, Dan's ksu-cis-dan group, and an individual user 'mozes' who I also want&lt;br /&gt;
to have write access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir share_hpc_dan_mozes&lt;br /&gt;
# acls are used here for setting default permissions&lt;br /&gt;
setfacl -d -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -d -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
# ACLs are used here for setting actual permissions&lt;br /&gt;
setfacl -m g:ksu-cis-hpc:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m g:ksu-cis-dan:rX -R share_hpc_dan_mozes&lt;br /&gt;
setfacl -m u:mozes:rwX -R share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
getfacl share_hpc_dan_mozes&lt;br /&gt;
&lt;br /&gt;
  # file: share_hpc_dan_mozes&lt;br /&gt;
  # owner: daveturner&lt;br /&gt;
  # group: daveturner_users&lt;br /&gt;
  user::rwx&lt;br /&gt;
  user:mozes:rwx&lt;br /&gt;
  group::r-x&lt;br /&gt;
  group:ksu-cis-hpc:r-x&lt;br /&gt;
  group:ksu-cis-dan:r-x&lt;br /&gt;
  mask::r-x&lt;br /&gt;
  other::---&lt;br /&gt;
  default:user::rwx&lt;br /&gt;
  default:user:mozes:rwx&lt;br /&gt;
  default:group::r-x&lt;br /&gt;
  default:group:ksu-cis-hpc:r-x&lt;br /&gt;
  default:group:ksu-cis-dan:r-x&lt;br /&gt;
  default:mask::r-x&lt;br /&gt;
  default:other::---&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Openly sharing files on the web===&lt;br /&gt;
&lt;br /&gt;
If you create a 'public_html' directory in your home directory, then any files put there will be shared&lt;br /&gt;
openly on the web.  There is no way to restrict who has access to those files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
cd&lt;br /&gt;
mkdir public_html&lt;br /&gt;
# Opt-in to letting the webserver access your home directory:&lt;br /&gt;
setfacl -m g:public_html:x ~/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then access the data from a web browser using the URL:&lt;br /&gt;
&lt;br /&gt;
http://people.beocat.ksu.edu/~your_user_name&lt;br /&gt;
&lt;br /&gt;
This will show a list of the files you have in your public_html subdirectory.&lt;br /&gt;
&lt;br /&gt;
===Globus===&lt;br /&gt;
&lt;br /&gt;
We have a page here dedicated to [[Globus]]&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of Slurm's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to sbatch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  --array=n[-m[:s]]&lt;br /&gt;
     Submits a so-called Array Job, i.e. an array of identical tasks being differentiated only by an index number and being treated by Slurm&lt;br /&gt;
     almost like a series of jobs. The option argument to --array specifies the number of array job tasks and the index number which will be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SLURM_ARRAY_TASK_ID. The option&lt;br /&gt;
     arguments n, and m will be available through the environment variables SLURM_ARRAY_TASK_MIN and SLURM_ARRAY_TASK_MAX.&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or a range with a step size.&lt;br /&gt;
     Hence, the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SLURM_ARRAY_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array jobs are commonly used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks follow a slightly different naming convention (which can be controlled in the same way as mentioned above).&lt;br /&gt;
 &lt;br /&gt;
     slurm-%A_%a.out&lt;br /&gt;
&lt;br /&gt;
     %A is the SLURM_ARRAY_JOB_ID, and %a is the SLURM_ARRAY_TASK_ID&lt;br /&gt;
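&lt;br /&gt;
For example, a minimal array script can set this naming pattern explicitly so each task writes to its own output file (the echo line is just a placeholder workload):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-10&lt;br /&gt;
# Each task writes to slurm-JOBID_TASKID.out, e.g. slurm-218_3.out for task 3 of job 218&lt;br /&gt;
#SBATCH --output=slurm-%A_%a.out&lt;br /&gt;
echo &amp;quot;This is task $SLURM_ARRAY_TASK_ID&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;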
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses, one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1, that I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable, and submit each script. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=50-200:50&lt;br /&gt;
RUNSIZE=$SLURM_ARRAY_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and Slurm understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to take each line of my dataset and generate a new submit script and submit the job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     sbatch ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as sbatch has to verify each job as it is submitted. This can be done easily with array jobs, as long as you know the number of lines in the dataset. This number can be obtained with '''wc -l dataset.txt'''; in this case, let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses command substitution via backticks, and has the sed command print only line number $SLURM_ARRAY_TASK_ID of the file dataset.txt.&lt;br /&gt;
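&lt;br /&gt;
You can sanity-check this line-selection trick outside of Slurm by setting the task ID by hand (dataset.txt here is any text file):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Simulate one array task locally by setting the task ID yourself&lt;br /&gt;
SLURM_ARRAY_TASK_ID=3&lt;br /&gt;
# Prints only line 3 of dataset.txt, the arguments task 3 would pass to app2&lt;br /&gt;
sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;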
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so sbatch doesn't have to verify as many.&lt;br /&gt;
&lt;br /&gt;
To give you an idea about time saved: submitting 1 job takes 1-2 seconds. By extension, if you are submitting 5000 jobs, that is 5,000-10,000 seconds, or 1.5-3 hours.&lt;br /&gt;
&lt;br /&gt;
== Checkpoint/Restart using DMTCP ==&lt;br /&gt;
&lt;br /&gt;
DMTCP is Distributed Multi-Threaded CheckPoint software that will checkpoint your application without modification, and&lt;br /&gt;
can be set up to automatically restart your job from the last checkpoint if for example the node you are running on fails.  &lt;br /&gt;
This has been tested successfully&lt;br /&gt;
on Beocat for some scalar and OpenMP codes, but has failed on all MPI tests so far.  We would like to encourage users to&lt;br /&gt;
try DMTCP out if their non-MPI jobs run longer than 24 hours.  If you want to try this, please contact us first since we are still&lt;br /&gt;
experimenting with DMTCP.&lt;br /&gt;
&lt;br /&gt;
The sample job submission script below shows how dmtcp_launch is used to start the application, then dmtcp_restart is used to start from a checkpoint if the job has failed and been rescheduled.&lt;br /&gt;
If you are putting this in an array script, then add the Slurm array task ID to the end of the checkpoint directory name&lt;br /&gt;
like &amp;lt;B&amp;gt;ckptdir=ckpt-$SLURM_ARRAY_TASK_ID&amp;lt;/B&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --mem=50G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS/2016.4-foss-2017beocatb-hybrid&lt;br /&gt;
  module load DMTCP&lt;br /&gt;
  module list&lt;br /&gt;
  &lt;br /&gt;
  ckptdir=ckpt&lt;br /&gt;
  mkdir -p $ckptdir&lt;br /&gt;
  export DMTCP_CHECKPOINT_DIR=$ckptdir&lt;br /&gt;
  &lt;br /&gt;
  if ! ls -1 $ckptdir | grep -c dmtcp_restart_script &amp;gt; /dev/null&lt;br /&gt;
  then&lt;br /&gt;
     echo &amp;quot;Using dmtcp_launch to start the app the first time&amp;quot;&lt;br /&gt;
     dmtcp_launch --no-coordinator mpirun -np 1 -x OMP_NUM_THREADS=4 gmx_mpi mdrun -nsteps 50000 -ntomp 4 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  else&lt;br /&gt;
     echo &amp;quot;Using dmtcp_restart from $ckptdir to continue from a checkpoint&amp;quot;&lt;br /&gt;
     dmtcp_restart $ckptdir/*.dmtcp&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
You will need to run several tests to verify that DMTCP is working properly with your application.&lt;br /&gt;
First, run a short test without DMTCP and another with DMTCP with the checkpoint interval set to 5 minutes&lt;br /&gt;
by adding the line &amp;lt;B&amp;gt;export DMTCP_CHECKPOINT_INTERVAL=300&amp;lt;/B&amp;gt; to your script.  Then use &amp;lt;B&amp;gt;kstat -d 1&amp;lt;/B&amp;gt; to&lt;br /&gt;
check that the memory in both runs is close to the same.  Also use this information to calculate the time &lt;br /&gt;
that each checkpoint takes.  In most cases I've seen times less than a minute for checkpointing that will normally&lt;br /&gt;
be done once each hour.  If your application is taking more time, let us know.  Sometimes this can be sped up&lt;br /&gt;
by simply turning off compression by adding the line &amp;lt;B&amp;gt;export DMTCP_GZIP=0&amp;lt;/B&amp;gt;.  Make sure to remove the&lt;br /&gt;
line where you set the checkpoint interval to 300 seconds so that the default time of once per hour will be used.&lt;br /&gt;
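&lt;br /&gt;
During testing, the two environment variables mentioned above would be added to the job script alongside the checkpoint directory setup, for example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Checkpoint every 5 minutes while testing (remove for the once-per-hour default)&lt;br /&gt;
export DMTCP_CHECKPOINT_INTERVAL=300&lt;br /&gt;
# Optionally turn off checkpoint compression if checkpointing is taking too long&lt;br /&gt;
export DMTCP_GZIP=0&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;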
&lt;br /&gt;
After verifying that your code completes using DMTCP and does not take significantly more time or memory, you&lt;br /&gt;
will need to start a run, &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; it after the first checkpoint, then resubmit the same script to make&lt;br /&gt;
sure that it restarts and runs to completion.  If you are working with an array job script, the last step is to try a few&lt;br /&gt;
array tasks at once to make sure there is no conflict between the jobs.&lt;br /&gt;
&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'srun'. srun uses the exact same command-line arguments as sbatch, but you need to add the following arguments at the end: &amp;lt;tt&amp;gt;--pty bash&amp;lt;/tt&amp;gt;. If no node is available with your resource requirements, srun will tell you something like the following:&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
 srun: Force Terminated job 217&lt;br /&gt;
 srun: error: CPU count per node can not be satisfied&lt;br /&gt;
 srun: error: Unable to allocate resources: Requested node configuration is not available&lt;br /&gt;
Note that, like sbatch, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
&lt;br /&gt;
== Connecting to an existing job ==&lt;br /&gt;
You can connect to an existing job using &amp;lt;B&amp;gt;srun&amp;lt;/B&amp;gt; in the same way that the &amp;lt;B&amp;gt;MonitorNode&amp;lt;/B&amp;gt; command&lt;br /&gt;
allowed us to in the old cluster.  This is essentially like using ssh to get into the node where your job is running which&lt;br /&gt;
can be very useful in allowing you to look at files in /tmp/job# or in running &amp;lt;B&amp;gt;htop&amp;lt;/B&amp;gt; to view the &lt;br /&gt;
activity level for your job.&lt;br /&gt;
&lt;br /&gt;
 srun --jobid=# --pty bash                        where '#' is the job ID number&lt;br /&gt;
&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not support modifying job parameters once a job has been submitted. It can be done, but there are numerous catches, and all of the variations can be a bit problematic; it is normally easier to simply delete the job (using '''scancel ''jobid''''') and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
&lt;br /&gt;
As it is unsupported, this is an exercise left to the reader. A starting point is &amp;lt;tt&amp;gt;man scontrol&amp;lt;/tt&amp;gt;.&lt;br /&gt;
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;--gres=killable:1&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
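&lt;br /&gt;
For example, a batch script marked killable at submit time might look like the following (the memory, time, and application name are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --time=24:00:00&lt;br /&gt;
# Allow this job to run on owned nodes, at the risk of being killed and re-queued&lt;br /&gt;
#SBATCH --gres=killable:1&lt;br /&gt;
./my_app&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;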
&lt;br /&gt;
''Note: This is a submit-time only request, it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;scancel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
Some users are members of projects that have contributed nodes to Beocat. Those groups get access to a &amp;quot;partition&amp;quot; giving them priority on those nodes.&lt;br /&gt;
&lt;br /&gt;
In most situations, the scheduler will automatically add those priority partitions to the jobs as submitted. You should not need to include a partition list in your job submission.&lt;br /&gt;
&lt;br /&gt;
There are currently just a few exceptions that we will not automatically add:&lt;br /&gt;
* ksu-chem-mri.q&lt;br /&gt;
* ksu-gen-gpu.q&lt;br /&gt;
* ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
If you have access to any of the non-automatic partitions, and have need of the resources in that partition, you can then alter your &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines to include your new partition:&lt;br /&gt;
 #SBATCH --partition=ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
Otherwise, you shouldn't modify the partition line at all unless you really know what you're doing.&lt;br /&gt;
&lt;br /&gt;
== Graphical Applications ==&lt;br /&gt;
Some applications are graphical and need to have some graphical input/output. We currently accomplish this with X11 forwarding or [[OpenOnDemand]].&lt;br /&gt;
=== OpenOnDemand ===&lt;br /&gt;
[[OpenOnDemand]] is likely the easier and more performant way to run a graphical application on the cluster.&lt;br /&gt;
# Visit [https://ondemand.beocat.ksu.edu/ ondemand] and log in with your cluster credentials.&lt;br /&gt;
# Check the &amp;quot;Interactive Apps&amp;quot; dropdown. We may have a workflow ready for you; if not, choose the desktop.&lt;br /&gt;
# Select the resources you need.&lt;br /&gt;
# Select launch.&lt;br /&gt;
# A job is now submitted to the cluster; once the job has started you'll see a Connect button.&lt;br /&gt;
# Use the app as needed. If using the desktop, start your graphical application.&lt;br /&gt;
&lt;br /&gt;
=== X11 Forwarding ===&lt;br /&gt;
==== Connecting with an X11 client ====&lt;br /&gt;
===== Windows =====&lt;br /&gt;
If you are running Windows, we recommend MobaXTerm as your file/ssh manager because it is one relatively simple tool that does everything. MobaXTerm also automatically connects with X11 forwarding enabled.&lt;br /&gt;
===== Linux/OSX =====&lt;br /&gt;
Both Linux and OSX can connect in an X11 forwarding mode. Linux will have all of the tools you need installed already, OSX will need [https://www.xquartz.org/ XQuartz] installed.&lt;br /&gt;
&lt;br /&gt;
Then you will need to change your 'ssh' command slightly:&lt;br /&gt;
&lt;br /&gt;
 ssh -Y eid@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
The '''-Y''' argument tells ssh to setup X11 forwarding.&lt;br /&gt;
==== Starting a Graphical Job ====&lt;br /&gt;
All graphical jobs, by design, must be interactive, so we'll use the srun command. On a headnode, we run the following:&lt;br /&gt;
 # load an X11 enabled application&lt;br /&gt;
 module load Octave&lt;br /&gt;
 # start an X11 job, sbatch arguments are accepted for srun as well, 1 node, 1 hour, 1 gb of memory&lt;br /&gt;
 srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 octave --gui&lt;br /&gt;
&lt;br /&gt;
Because these jobs are interactive, they may not be able to run at all times, depending on how busy the scheduler is at any point in time. '''--pty --x11''' are required arguments setting up the job, and '''octave --gui''' is the command to run inside the job.&lt;br /&gt;
&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The sacct tool will read Slurm's accounting database and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== sacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
sacct -j 1122334455 -l&lt;br /&gt;
# if you don't know the job id, you can look at your jobs started since some day:&lt;br /&gt;
sacct -S 2017-01-01&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|218||218||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||12||00:00:00||FAILED||2:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=12,mem=1G,node=1||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.batch||218.batch||batch||||137940K||dwarf37||0||137940K||1576K||dwarf37||0||1576K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||1.36G||0||0||0||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.0||218.0||qqqqstat||||204212K||dwarf37||0||204212K||1420K||dwarf37||0||1420K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||196.52M||Unknown||Unknown||Unknown||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the columns showing Elapsed and State, you can see that they show 00:00:00 and FAILED respectively. This means that the job started and then promptly ended. This points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
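&lt;br /&gt;
If the full '-l' output is overwhelming, sacct can also limit its report to just the columns discussed here; the field names below are standard sacct format fields:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# show only the fields needed to spot an immediate failure&lt;br /&gt;
sacct -j 1122334455 --format=JobID,JobName,Elapsed,State,ExitCode&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;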
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|-&lt;br /&gt;
|3&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|220||220||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:01:27||TIMEOUT||0:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=1,mem=1G,node=1||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.batch||220.batch||batch||||370716K||dwarf37||0||370716K||7060K||dwarf37||0||7060K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:28||CANCELLED||0:15||1.23G||0||0||0||1Gn||0||0.16M||dwarf37||0||0.16M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.0||220.0||sleep||||204212K||dwarf37||0||107916K||1000K||dwarf37||0||620K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:27||CANCELLED||0:15||1.54G||Unknown||Unknown||Unknown||1Gn||0||0.05M||dwarf37||0||0.05M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, you can see some pointers to the issue. The job ran out of time (TIMEOUT) and was then killed (CANCELLED).&lt;br /&gt;
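&lt;br /&gt;
The usual fix for a TIMEOUT is to resubmit with a larger time limit. As a sketch (the 4-hour limit is only an illustration; the script name is the one from the table above):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# request a 4-hour time limit for the job&lt;br /&gt;
sbatch --time=4:00:00 slurm_simple.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;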
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;float:left; margin:0; margin-right:-1px; {{{style|}}}&lt;br /&gt;
|-&lt;br /&gt;
| &amp;amp;nbsp;&lt;br /&gt;
|-&lt;br /&gt;
|1&lt;br /&gt;
|-&lt;br /&gt;
|2&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;div style=&amp;quot;overflow-x:auto; white-space:nowrap;&amp;quot;&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot; style=&amp;quot;margin:0; {{{style|}}}&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|221||221||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:00:00||CANCELLED by 0||0:0||||Unknown||Unknown||Unknown||1Mn||||||||||||||||||||||||cpu=1,mem=1M,node=1||cpu=1,mem=1M,node=1&lt;br /&gt;
|-&lt;br /&gt;
|221.batch||221.batch||batch||||137940K||dwarf37||0||137940K||1144K||dwarf37||0||1144K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:00:01||CANCELLED||0:15||2.62G||0||0||0||1Mn||0||0||dwarf37||65534||0||0||dwarf37||65534||0||||||||cpu=1,mem=1M,node=1&lt;br /&gt;
|}&amp;lt;/div&amp;gt;&amp;lt;br style=&amp;quot;clear:both&amp;quot;/&amp;gt;&lt;br /&gt;
If you look at the column showing State, you can see the job was &amp;quot;CANCELLED by 0&amp;quot;. The AllocTRES column shows the resources that were allocated: only 1MB of memory was granted. Comparing that with the &amp;quot;MaxRSS&amp;quot; column shows that the job tried to use more memory than was granted, so the job was &amp;quot;CANCELLED&amp;quot;. The fix is to request more memory with the '''--mem''' option.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Nautilus&amp;diff=1125</id>
		<title>Nautilus</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Nautilus&amp;diff=1125"/>
		<updated>2025-06-23T20:17:09Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Nautilus ==&lt;br /&gt;
To access the Nautilus namespace, login using K-State SSO at https://portal.nrp-nautilus.io/ . Once you have done so, email beocathelp@ksu.edu or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] and request to be added to the Beocat Nautilus namespace (ksu-nrp-cluster). Once you have received notification that you have been added to the namespace, you can continue with the following steps to get set up to use the cluster resources. &lt;br /&gt;
&amp;lt;ol&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;SSH into headnode.beocat.ksu.edu&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;SSH into fiona (fiona hosts the kubectl tool we will use for this)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Once on fiona, use the command ‘cd ~’ to ensure you are in your home directory. If you&lt;br /&gt;
are not, this command will return you to the top level of your home directory.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;From there you will need to create a .kube directory inside of your home directory. Use&lt;br /&gt;
the command ‘mkdir ~/.kube’&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Login to https://portal.nrp-nautilus.io/ using the same login previously used to create your&lt;br /&gt;
account (this will be your K-State EID login)&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;From here it is MANDATORY to read the cluster policy documentation provided by the&lt;br /&gt;
National Research Platform for the Nautilus program. You can find this here.&lt;br /&gt;
https://docs.nationalresearchplatform.org/userdocs/start/policies/ &amp;lt;/li&amp;gt;&lt;br /&gt;
a. This is to ensure we do not break any of the rules put in place by the NRP.&lt;br /&gt;
&amp;lt;br&amp;gt;b. Return to https://portal.nrp-nautilus.io/ and accept the Acceptable Use Policy (AUP)&lt;br /&gt;
&amp;lt;li&amp;gt;Next, return to the website specified in step 5, in the top right corner of the page press&lt;br /&gt;
the “Get Config” option. &amp;lt;/li&amp;gt;&lt;br /&gt;
a. This will download a file called ‘config’&lt;br /&gt;
&amp;lt;li&amp;gt;You will need to move the file to your ~/.kube directory created in step 4.&amp;lt;/li&amp;gt;&lt;br /&gt;
a. To do this you can copy and paste the contents through the command line&lt;br /&gt;
&amp;lt;br&amp;gt;b. You can also utilize the OpenOnDemand tool to upload the file through the web&lt;br /&gt;
interface. Information for this tool can be found here:&lt;br /&gt;
https://support.beocat.ksu.edu/Docs/OpenOnDemand&lt;br /&gt;
&amp;lt;br&amp;gt;c. You can also use other means of moving the contents to the Beocat&lt;br /&gt;
headnodes/your home directory, but these are just a few examples.&lt;br /&gt;
&amp;lt;br&amp;gt;d. NOTE: Because the directory name starts with a period, it is a hidden directory&lt;br /&gt;
and will not appear in the output of a plain ‘ls’. To see it, you will&lt;br /&gt;
need to run “ls -a” or “ls -la”.&lt;br /&gt;
&amp;lt;li&amp;gt;Once you have read the required documentation, created the .kube directory in your&lt;br /&gt;
home directory, and placed the config file in the '~/.kube' directory, you are now ready to continue!&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Below is an example pod that can be used. It does not request much in the way of resources so you will likely need to change some things. Be sure to change the “name:” field&lt;br /&gt;
underneath “metadata:”. Change the text “test-pod” to “{eid}-pod” where ‘{eid}’ is your&lt;br /&gt;
K-State ID. It will look something like this “dan-pod”.&amp;lt;/li&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=yaml&amp;gt;&lt;br /&gt;
apiVersion: v1&lt;br /&gt;
kind: Pod&lt;br /&gt;
metadata:&lt;br /&gt;
  name: test-pod&lt;br /&gt;
spec:&lt;br /&gt;
  containers:&lt;br /&gt;
  - name: mypod&lt;br /&gt;
    image: ubuntu&lt;br /&gt;
    resources:&lt;br /&gt;
      limits:&lt;br /&gt;
        memory: 400Mi&lt;br /&gt;
        cpu: 100m&lt;br /&gt;
      requests:&lt;br /&gt;
        memory: 100Mi&lt;br /&gt;
        cpu: 100m&lt;br /&gt;
    command: [&amp;quot;sh&amp;quot;, &amp;quot;-c&amp;quot;, &amp;quot;echo 'Im a new pod'&amp;quot;]&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Place your .yaml file in the same directory created earlier (~/.kube).&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;If you are not already in the .kube directory enter the command “cd ~/.kube” to change&lt;br /&gt;
your current directory.&amp;lt;/li&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;Now we are going to create our ‘pod’. This will request an Ubuntu container using the&lt;br /&gt;
specifications from above.&amp;lt;/li&amp;gt;&lt;br /&gt;
a. To do this, enter the command “kubectl create -f pod1.yaml”. NOTE: You must be&lt;br /&gt;
in the same directory that you placed the pod1.yaml file in (in this situation, the above pod config was put into a file named pod1.yaml).&lt;br /&gt;
&amp;lt;br&amp;gt;b. If the command is successful you will see an output of “pod/{eid}-pod created”.&lt;br /&gt;
&amp;lt;li&amp;gt;You will need to wait until the container for the pod is finished creating. You can check&lt;br /&gt;
this by running “kubectl get pods”&amp;lt;/li&amp;gt;&lt;br /&gt;
a. Once you run this command, it will output all the pods currently running or being&lt;br /&gt;
created in the namespace. Look for yours among the list; the name will be the one&lt;br /&gt;
specified in step 10.&lt;br /&gt;
&amp;lt;br&amp;gt;b. Once you locate your pod, check its STATUS. If the pod says Running, then you&lt;br /&gt;
are good to proceed. If it says Container Creating, then you will need to wait just a&lt;br /&gt;
bit. It should not take long.&lt;br /&gt;
&amp;lt;li&amp;gt;You can now enter the pod by running “kubectl exec -it {eid}-pod --&lt;br /&gt;
/bin/bash”, where ‘{eid}-pod’ is the pod created in step 13 (the name specified in step 10).&amp;lt;/li&amp;gt;&lt;br /&gt;
a. Executing this command will open the pod you created and run a bash console&lt;br /&gt;
on the pod.&lt;br /&gt;
&amp;lt;br&amp;gt;b. NOTE: If you have trouble logging into the pod and are met with a “You must be&lt;br /&gt;
logged in to the server” error, you can run “kubectl proxy”, wait a moment, and then&lt;br /&gt;
cancel the command with “ctrl+c”. This should remedy the error.&lt;br /&gt;
&amp;lt;/ol&amp;gt;&lt;br /&gt;
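When you are finished, it is good practice to clean up your pod so it does not linger in the shared namespace. A minimal sketch, using the example pod name from the steps above:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# confirm the pod is still present&lt;br /&gt;
kubectl get pods&lt;br /&gt;
# delete the example pod created above&lt;br /&gt;
kubectl delete pod {eid}-pod&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;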
&lt;br /&gt;
Additional documentation for Kubernetes can be found on the Kubernetes website https://kubernetes.io/docs/home&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=CUDA&amp;diff=1124</id>
		<title>CUDA</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=CUDA&amp;diff=1124"/>
		<updated>2025-06-23T20:16:35Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== CUDA Overview ==&lt;br /&gt;
[[wikipedia:CUDA|CUDA]] is a feature set for programming nVidia [[wikipedia:Graphics_processing_unit|GPUs]]. We have many dwarf nodes that are CUDA-enabled with 1-2 GPUs, and most of the Wizard nodes have 4 GPUs each. Most of these are consumer-grade [https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080-ti/ nVidia 1080 Ti graphics cards] that are good for accelerating 32-bit calculations. Dwarf36-38 have two [https://www.nvidia.com/en-us/design-visualization/rtx-a4000/ nVidia RTX A4000 graphics cards] and dwarf39 has two [https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080-ti/ nVidia 1080 Ti graphics cards] that are available for anybody to use, but you'll need to email beocathelp@ksu.edu or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] to request being added to the GPU priority group; then you'll need to submit jobs with &amp;lt;B&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/B&amp;gt;. Wizard20 and wizard21 each have two [https://www.nvidia.com/object/quadro-graphics-with-pascal.html nVidia P100 cards] that are much more costly than the consumer-grade 1080 Ti cards but can accelerate 64-bit calculations much better.&lt;br /&gt;
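&lt;br /&gt;
As a sketch of how a job might target these GPUs once you are in the priority group (the script name is a placeholder; see the scheduler documentation for details):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# request one GPU on the ksu-gen-gpu.q partition&lt;br /&gt;
sbatch --partition=ksu-gen-gpu.q --gres=gpu:1 myjob.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;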
&lt;br /&gt;
== Training videos ==&lt;br /&gt;
CUDA Programming Model Overview: [http://www.youtube.com/watch?v=aveYOlBSe-Y http://www.youtube.com/watch?v=aveYOlBSe-Y]&lt;br /&gt;
&lt;br /&gt;
{{#widget:YouTube|id=aveYOlBSe-Y|width=800|height=600}}&lt;br /&gt;
&lt;br /&gt;
CUDA Programming Basics Part I (Host functions): [http://www.youtube.com/watch?v=79VARRFwQgY http://www.youtube.com/watch?v=79VARRFwQgY]&lt;br /&gt;
&lt;br /&gt;
{{#widget:YouTube|id=79VARRFwQgY|width=800|height=600}}&lt;br /&gt;
&lt;br /&gt;
CUDA Programming Basics Part II (Device functions): [http://www.youtube.com/watch?v=G5-iI1ogDW4 http://www.youtube.com/watch?v=G5-iI1ogDW4]&lt;br /&gt;
&lt;br /&gt;
{{#widget:YouTube|id=G5-iI1ogDW4|width=800|height=600}}&lt;br /&gt;
== Compiling CUDA Applications ==&lt;br /&gt;
nvcc is the compiler for CUDA applications. When compiling your applications manually, you will need to load a CUDA-enabled compiler toolchain (e.g. fosscuda):&lt;br /&gt;
&lt;br /&gt;
* module load fosscuda&lt;br /&gt;
* '''Do not run your CUDA applications on the headnode. We cannot guarantee they will run, and they will give you terrible results if they do run.'''&lt;br /&gt;
&lt;br /&gt;
With those two things in mind, you can compile CUDA applications as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load fosscuda&lt;br /&gt;
nvcc &amp;lt;source&amp;gt;.cu -o &amp;lt;output&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
=== Create your Application ===&lt;br /&gt;
Copy the following Application into Beocat as vecadd.cu&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
//  Kernel definition, see also section 4.2.3 of Nvidia Cuda Programming Guide&lt;br /&gt;
__global__  void vecAdd(float* A, float* B, float* C)&lt;br /&gt;
{&lt;br /&gt;
            // threadIdx.x is a built-in variable  provided by CUDA at runtime&lt;br /&gt;
            int i = threadIdx.x;&lt;br /&gt;
       A[i]=0;&lt;br /&gt;
       B[i]=i;&lt;br /&gt;
       C[i] = A[i] + B[i];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
#include  &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#define  SIZE 10&lt;br /&gt;
int  main()&lt;br /&gt;
{&lt;br /&gt;
   int N=SIZE;&lt;br /&gt;
   float A[SIZE], B[SIZE], C[SIZE];&lt;br /&gt;
   float *devPtrA;&lt;br /&gt;
   float *devPtrB;&lt;br /&gt;
   float *devPtrC;&lt;br /&gt;
   int memsize= SIZE * sizeof(float);&lt;br /&gt;
&lt;br /&gt;
   cudaMalloc((void**)&amp;amp;devPtrA, memsize);&lt;br /&gt;
   cudaMalloc((void**)&amp;amp;devPtrB, memsize);&lt;br /&gt;
   cudaMalloc((void**)&amp;amp;devPtrC, memsize);&lt;br /&gt;
   cudaMemcpy(devPtrA, A, memsize,  cudaMemcpyHostToDevice);&lt;br /&gt;
   cudaMemcpy(devPtrB, B, memsize,  cudaMemcpyHostToDevice);&lt;br /&gt;
   // __global__ functions are called:  Func&amp;lt;&amp;lt;&amp;lt; Dg, Db, Ns  &amp;gt;&amp;gt;&amp;gt;(parameter);&lt;br /&gt;
   vecAdd&amp;lt;&amp;lt;&amp;lt;1, N&amp;gt;&amp;gt;&amp;gt;(devPtrA,  devPtrB, devPtrC);&lt;br /&gt;
   cudaMemcpy(C, devPtrC, memsize,  cudaMemcpyDeviceToHost);&lt;br /&gt;
&lt;br /&gt;
   for (int i=0; i&amp;lt;SIZE; i++)&lt;br /&gt;
        printf(&amp;quot;C[%d]=%f\n&amp;quot;,i,C[i]);&lt;br /&gt;
&lt;br /&gt;
  cudaFree(devPtrA);&lt;br /&gt;
  cudaFree(devPtrB);&lt;br /&gt;
  cudaFree(devPtrC);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== Gain Access to a CUDA-capable Node ===&lt;br /&gt;
See our [[AdvancedSlurm|advanced scheduler documentation]]&lt;br /&gt;
=== Compile Your Application ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load fosscuda&lt;br /&gt;
nvcc vecadd.cu -o vecadd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will create a program with the name 'vecadd' (specified by the '-o' flag).&lt;br /&gt;
&lt;br /&gt;
=== Run Your Application ===&lt;br /&gt;
Run the program as you usually would, namely&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./vecadd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
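&lt;br /&gt;
To run the same program through the scheduler instead, a minimal batch script might look like the following; the resource requests are placeholders to adjust for your own job:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --gres=gpu:1&lt;br /&gt;
#SBATCH --mem=1G&lt;br /&gt;
#SBATCH --time=0:10:00&lt;br /&gt;
module load fosscuda&lt;br /&gt;
./vecadd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;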
&lt;br /&gt;
Assuming you don't want to run the program interactively because this is a large job, you can submit it via sbatch; just be sure to add '&amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt;' to your '''sbatch''' options.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Galaxy_File_Upload&amp;diff=1123</id>
		<title>Galaxy File Upload</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Galaxy_File_Upload&amp;diff=1123"/>
		<updated>2025-06-23T20:16:06Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Large File Uploads to Galaxy =&lt;br /&gt;
The Galaxy web UI is sometimes inconsistent with large uploads. As such, we have made user directory imports available on Galaxy. You can upload files to /bulk/galaxy/user-space/ by locating the user space that was created for you on first login. The folder is usually named &amp;quot;eid@ksu.edu&amp;quot;, or if you are from another college (VetMed for instance), &amp;quot;eid@vet.k-state.edu&amp;quot;. You can use the command &amp;quot;ls /bulk/galaxy/user-space&amp;quot; to find the name of your directory. &lt;br /&gt;
&lt;br /&gt;
From here, we have written some instructions on how to utilize this upload method.&lt;br /&gt;
== Upload Files ==&lt;br /&gt;
First, you need to transfer the data onto Beocat; this can be done in any number of ways. We have some documentation on uploading data into Beocat that can be found here (for large files, we suggest using Globus, scp, or an FTP program): https://support.beocat.ksu.edu/Docs/Main_Page#Transferring_data_to_Beocat&lt;br /&gt;
&lt;br /&gt;
Next, due to how Galaxy handles data exposure to the web UI, we need to move this data to a userspace that was created for you in Galaxy on your first login:&lt;br /&gt;
&lt;br /&gt;
Move the data to the following directory (you should be able to move data here; if you are unable to, please let us know at beocathelp@ksu.edu or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket]). Note that the backslash is necessary to escape the @ symbol in your email address: &lt;br /&gt;
/bulk/galaxy/user-space/eid\@vet.k-state.edu/&lt;br /&gt;
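&lt;br /&gt;
As an illustrative sketch (the eID and filename are placeholders, and Globus is recommended for very large files), a direct scp from your own machine might look like:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# copy a local file into your Galaxy user space on Beocat&lt;br /&gt;
scp large_dataset.fastq eid@headnode.beocat.ksu.edu:/bulk/galaxy/user-space/eid\@ksu.edu/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;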
&lt;br /&gt;
Next, login to galaxy.beocat.ksu.edu with your eID through KeyCloak. &lt;br /&gt;
&lt;br /&gt;
Then, navigate to the &amp;quot;Shared Data&amp;quot; tab at the top of the page; in the drop-down menu, select &amp;quot;Data Libraries&amp;quot;. This should take you to a relatively empty page that allows you to create a library with a &amp;quot;+ Library&amp;quot; button in the top left of the web page. For reference, I have included a screenshot of my Data Libraries: &lt;br /&gt;
&lt;br /&gt;
[[File:Data library creation.png|CreatingDataLibrary]]&lt;br /&gt;
&lt;br /&gt;
Create a new Library with any name, and then open it by clicking on its name. In this case, from the photo above, you could click on &amp;quot;Test2&amp;quot; or &amp;quot;Upload&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
This takes you to a page that will let you manage the data inside of that library. It should look like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Library explanation.png|ExplainingDataLibrary]]&lt;br /&gt;
&lt;br /&gt;
From here you can either add a folder to help manage your data, upload data to the library, or add the current data in the library to your History so that it can be accessed. We are going to upload data to the library. Clicking &amp;quot;+ Datasets&amp;quot; will expand a drop-down menu; from here, select &amp;quot;from User Directory&amp;quot;. This will allow you to upload any data to Galaxy from the /bulk directory we moved your data to earlier on Beocat. That process looks something like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Import files.png|ImportToDataLibrary]]&lt;br /&gt;
&lt;br /&gt;
In this, I just uploaded two empty text files. From here, Galaxy has to do some auto-magic to make the data usable inside Galaxy, reflected in the &amp;quot;state&amp;quot; of the data. I am not exactly sure what this involves or how long it might take, but I would imagine not long. &lt;br /&gt;
&lt;br /&gt;
From here we finally need to publish the data to our history so that we can utilize it. I am going to import this data as a Dataset, though if a collection suits better, use that. In the photo below, I am uploading &amp;quot;iamanothertextfile.txt&amp;quot; to my new history called &amp;quot;IAmANewHistory&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Once that is added a notification should pop up in the bottom right hand corner of the browser. If not, navigate to the homepage and manually open the History your data was added to. From here you should be able to process your data like normal.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=ProposalDescription&amp;diff=1122</id>
		<title>ProposalDescription</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=ProposalDescription&amp;diff=1122"/>
		<updated>2025-06-23T20:15:32Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Description of Beocat for Proposals or Information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description of Beocat for Proposals or Information ==&lt;br /&gt;
&lt;br /&gt;
Below is a current description of Beocat Compute resources and availability that can be used within proposals or to provide information. If you have any questions regarding this, please reach out to us at beocathelp@ksu.edu or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket].&lt;br /&gt;
&lt;br /&gt;
=== Compute Resources Description (Updated Oct. 1 2024) ===&lt;br /&gt;
&lt;br /&gt;
Beocat, the K-State research computing cluster, is currently the largest academic supercomputer in Kansas. Its hardware includes nearly 400 researcher-funded computers, approximately 3.3PB of storage, ~10,000 processor cores on machines ranging from dual-processor Xeon e5 nodes with 128GB RAM and 100GbE to 128-core AMD nodes with 2TB RAM connected by 40-100 Gbps networks, and a total of 170 different GPUs ranging from GTX 1080ti's to NVIDIA L40S's. Beocat and its staff have provided tours demonstrating the value of K-State research and a high-tech look at our research facilities for over 3,000 participants, including USD383 StarBase, current and prospective students, funding agencies, faculty recruitment, and outreach activities. Classes supported include topics such as bioinformatics, business analytics, cybersecurity, data science, deep learning, economics, chemistry, and genetics. Beocat is supported by many NSF and university grants, and it acts as the central computing resource for multiple departments across campus. Beocat staff includes one full-time system administrator, a full-time applications scientist with a PhD in Physics and 35 years’ experience optimizing parallel programs and assisting researchers, and a part-time director. &lt;br /&gt;
&lt;br /&gt;
Beocat is available to any academic researcher in Kansas and their partners under the statewide KanShare MOU. Under current policy, heavy users are expected to buy in through adding computational or personnel resources for the cluster (condo computing). Their jobs, then, are given guaranteed priority on any contributed machines, and they have access to other resources in the cluster on an as-available basis. Thus, projects can preserve a guaranteed base level of computation while utilizing the larger cluster for major computations. Users can also purchase archival data storage as needed. Dr. Daniel Andresen is the K-State XSEDE Campus Champion in the event national-class computational resources are required.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=ProposalDescription&amp;diff=1121</id>
		<title>ProposalDescription</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=ProposalDescription&amp;diff=1121"/>
		<updated>2025-06-23T20:15:08Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Compute Resources Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description of Beocat for Proposals or Information ==&lt;br /&gt;
&lt;br /&gt;
Below is a current description of Beocat Compute resources and availability that can be used within proposals or to provide information. If you have any questions regarding this, please reach out to us at beocat@cs.ksu.edu&lt;br /&gt;
&lt;br /&gt;
=== Compute Resources Description (Updated Oct. 1 2024) ===&lt;br /&gt;
&lt;br /&gt;
Beocat, the K-State research computing cluster, is currently the largest academic supercomputer in Kansas. Its hardware includes nearly 400 researcher-funded computers, approximately 3.3PB of storage, ~10,000 processor cores on machines ranging from dual-processor Xeon e5 nodes with 128GB RAM and 100GbE to 128-core AMD nodes with 2TB RAM connected by 40-100 Gbps networks, and a total of 170 different GPUs ranging from GTX 1080ti's to NVIDIA L40S's. Beocat and its staff have provided tours demonstrating the value of K-State research and a high-tech look at our research facilities for over 3,000 participants, including USD383 StarBase, current and prospective students, funding agencies, faculty recruitment, and outreach activities. Classes supported include topics such as bioinformatics, business analytics, cybersecurity, data science, deep learning, economics, chemistry, and genetics. Beocat is supported by many NSF and university grants, and it acts as the central computing resource for multiple departments across campus. Beocat staff includes one full-time system administrator, a full-time applications scientist with a PhD in Physics and 35 years’ experience optimizing parallel programs and assisting researchers, and a part-time director. &lt;br /&gt;
&lt;br /&gt;
Beocat is available to any academic researcher in Kansas and their partners under the statewide KanShare MOU. Under current policy, heavy users are expected to buy in through adding computational or personnel resources for the cluster (condo computing). Their jobs, then, are given guaranteed priority on any contributed machines, and they have access to other resources in the cluster on an as-available basis. Thus, projects can preserve a guaranteed base level of computation while utilizing the larger cluster for major computations. Users can also purchase archival data storage as needed. Dr. Daniel Andresen is the K-State XSEDE Campus Champion in the event national-class computational resources are required.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1120</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1120"/>
		<updated>2025-06-23T20:14:13Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and their collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of RHEL Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
* We provide a short description of Beocat for the uses of a proposal or teaching here: [[ProposalDescription|Beocat Info]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
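&lt;br /&gt;
For example, connecting from a terminal might look like this, where ‘eid’ is a placeholder for your own K-State eID:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# log in to Beocat with your K-State eID credentials&lt;br /&gt;
ssh eid@headnode.beocat.ksu.edu&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;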
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page that covers fine-tuning your jobs. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
=== Online Documentation ===&lt;br /&gt;
&lt;br /&gt;
* Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/]&lt;br /&gt;
* Read about  [[Installed software]] and languages&lt;br /&gt;
* Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] and download the [[Media:Slurm-quick-reference.pdf|Slurm Quick Reference PDF]]&lt;br /&gt;
* Run interactive jobs with [[OpenOnDemand]]&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]]&lt;br /&gt;
* Big Data course on Beocat! [[BigDataOnBeocat]]&lt;br /&gt;
* Interested in web-based computational biology research? Check out [[GalaxyDocs|Galaxy!]]&lt;br /&gt;
* Looking to utilize the NRP (Nautilus cluster) namespace? Check out [[Nautilus|Nautilus on Beocat]]&lt;br /&gt;
&lt;br /&gt;
=== Training Videos and Slides ===&lt;br /&gt;
&lt;br /&gt;
* [https://www.youtube.com/watch?v=7NOB_HGQE0U Beocat Introduction] and [[Media:Beocat-Beoshock-Intro.pdf|slides]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=b_yawpwFRdk Linux and Bash Introduction] and [[Media:Linux-Intro-cheatsheet.pdf|Linux Quick Reference PDF]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=vcC-DURbH6c Advanced HPC Usage] and [[Media:HPC-Advanced-Usage.pdf|slides]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=inJbYdZacjs HPC Parallel Computing] and [[Media:HPC-Parallel-Computing.pdf|slides]]&lt;br /&gt;
&lt;br /&gt;
== Transferring data to Beocat ==&lt;br /&gt;
Transferring data to Beocat can be done in a variety of ways; we offer documentation on a few of them:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;b&amp;gt;With the recent changes to how K-State handles DUO authentication, we recommend you use Globus to transfer files in and out of Beocat.&amp;lt;/b&amp;gt;&lt;br /&gt;
* [[Globus]] - Instructions on transferring files using [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''.&lt;br /&gt;
* [[LinuxBasics]] - Under the 'Transferring files (SCP or SFTP)' section, we have information regarding SCP and SFTP implementation.&lt;br /&gt;
* [[OpenOnDemand]] - We offer GUI-based file management through OpenOnDemand.&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] - We also offer the ability to transfer data to and from OneDrive.&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which will use the information in that job script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
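&lt;br /&gt;
As a minimal sketch (the job name, resource values, and program name here are illustrative placeholders, not recommendations), a job script looks like this:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name=myjob&lt;br /&gt;
#SBATCH --cpus-per-task=1&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
&lt;br /&gt;
./my_program   # replace with the command you want to run&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Submit it with &amp;lt;B&amp;gt;sbatch myscript.sh&amp;lt;/B&amp;gt; and cancel it with &amp;lt;B&amp;gt;scancel jobid&amp;lt;/B&amp;gt;; see [[SlurmBasics]] for full details.&lt;br /&gt;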
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocathelp@ksu.edu beocathelp@ksu.edu] or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket]. ''Please'' send all email to this address or through TDX and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any details pertinent to your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session as mentioned in our calendar down below. Alternatively, we can often schedule a time to meet with you individually. You just need to send us an e-mail and provide us with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you email us at beocathelp@ksu.edu or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket], please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributions to Beocat will prioritize your access to Beocat. In general, users contribute nodes to Beocat (aka the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority for the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will automatically halt and reschedule those jobs immediately to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources can allow users to run jobs if they are not able to get enough&lt;br /&gt;
access on Beocat, but they are especially useful for when we don't have the needed&lt;br /&gt;
resources on Beocat like access to 4 TB nodes on Bridges2, or more 64-bit&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[ACCESS|here]] to see what resources &lt;br /&gt;
we have access to and to find directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems that runs outside OSG jobs when our users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=GalaxyDocs&amp;diff=1119</id>
		<title>GalaxyDocs</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=GalaxyDocs&amp;diff=1119"/>
		<updated>2025-06-23T20:12:51Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Galaxy? ==&lt;br /&gt;
[https://galaxyproject.org/ Galaxy] is a scientific workflow, data integration, and data and analysis persistence and publishing platform that aims to make computational biology accessible to research scientists that do not have computer programming or systems administration experience.&lt;br /&gt;
&lt;br /&gt;
== How do I access Galaxy? == &lt;br /&gt;
Access to Beocat's local instance of Galaxy is easy. Simply navigate to [https://galaxy.beocat.ksu.edu/ https://galaxy.beocat.ksu.edu/] and, if prompted, sign in using the Keycloak login. This will use your Beocat eID and password to log in; you will also be prompted to authenticate against DUO.&lt;br /&gt;
&lt;br /&gt;
Please note that this utilizes Beocat's /bulk directory. This is a billed directory, and as such, if usage in your respective Galaxy upload directory exceeds the billing threshold (1 TB), we will contact you regarding remediation.&lt;br /&gt;
&lt;br /&gt;
== Upload Larger Files to Galaxy ==&lt;br /&gt;
Have some larger files that need to be uploaded to Galaxy? We provide some documentation on how to upload files directly to Beocat and then import them to Galaxy for use. It can be found here: [[Galaxy_File_Upload| Galaxy File Upload]]&lt;br /&gt;
&lt;br /&gt;
== How do I use Galaxy? ==&lt;br /&gt;
&lt;br /&gt;
After accessing our local instance of Galaxy, you should have access to all installed tools, which appear in the tool panel on the left-hand side of your Galaxy home screen. It should look like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Galaxy Toolbox.png|Tool Panel]]&lt;br /&gt;
&lt;br /&gt;
Each category (Get Data, Send Data, MetaGenomics Tools, etc.) expands to reveal subcategories that organize the many installed tools. Each tool is placed under its respective category once it is installed.&lt;br /&gt;
&lt;br /&gt;
From here you can select a tool to use and submit it to the Slurm cluster. After opening a tool you will be given options (if available) to configure it, including its inputs, its behavior (within the tool's constraints), its outputs, and the compute resources it will be submitted with. This means you can specify the resources given to Slurm jobs by expanding the drop-down menu &amp;quot;Job Resource Parameters&amp;quot; and selecting &amp;quot;Specify Job resource parameters&amp;quot;, which will show the following options to modify.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;Processors&amp;lt;/B&amp;gt;: Number of processing cores, the 'ppn' value (1-128); this is equivalent to Slurm's &amp;quot;--cpus-per-task&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;B&amp;gt;Memory&amp;lt;/B&amp;gt;: Memory size in gigabytes, the 'pmem' value (1-1500). Note that the job scheduler uses --mem-per-cpu to allocate memory for your Slurm job. This means the number given for Memory is multiplied by the Processors count from above; e.g. 2 Processors with 5 Memory gives 10 GB of memory.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;B&amp;gt;Priority Queue&amp;lt;/B&amp;gt;: If you have access to a priority queue and would like to use it, enter the partition name here.&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;B&amp;gt;Runtime&amp;lt;/B&amp;gt;: How long you want your job to run, in hours. &lt;br /&gt;
&lt;br /&gt;
If you do not change the job resource parameters, your job will be submitted with the default resources: 1 CPU, 5 GB of memory, 1 hour of runtime, and no priority queue.&lt;br /&gt;
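&lt;br /&gt;
For reference, those defaults correspond roughly to the following Slurm options (illustrative only; Galaxy constructs the actual submission for you):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--cpus-per-task=1    # Processors&lt;br /&gt;
--mem-per-cpu=5G     # Memory, per CPU (total = Memory x Processors)&lt;br /&gt;
--time=01:00:00      # Runtime&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;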
&lt;br /&gt;
=== Canceling a running job ===&lt;br /&gt;
While Galaxy does submit to Slurm, you will not be able to cancel the job the way you typically would. With Galaxy, to cancel your queued or currently running job, simply press the trash can icon next to the name of your job.&lt;br /&gt;
&lt;br /&gt;
== Tool Requests ==&lt;br /&gt;
If you are missing a specific tool and would like to have it added to Galaxy, please contact &amp;lt;B&amp;gt;beocathelp@ksu.edu&amp;lt;/B&amp;gt; or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket] with a link to the tool. Additionally, you can browse through Galaxy's own [https://toolshed.g2.bx.psu.edu/ toolshed] to make a recommendation.&lt;br /&gt;
&lt;br /&gt;
== Data Management ==&lt;br /&gt;
&lt;br /&gt;
Galaxy follows the typical costs for bulk data storage, as Galaxy utilizes /bulk/galaxy for storage. Bulk data storage may be provided at a cost of $45/TB/year, billed monthly, with billing starting at 1 TB of usage. Users can easily see how much data they are using in Galaxy by checking the top-right corner of Galaxy's home page. This will say &amp;quot;Using ####MB/GB/TB&amp;quot;, above your histories.&lt;br /&gt;
&lt;br /&gt;
[[File:Galaxy_Data_usage_example.png|Usage Data]]&lt;br /&gt;
&lt;br /&gt;
Clicking on this usage will bring you to a storage dashboard where you can easily manage your files and derelict dataset histories.&lt;br /&gt;
&lt;br /&gt;
[[File:Storage_dashboard.png|Storage Dashboard]]&lt;br /&gt;
&lt;br /&gt;
== Requesting Help ==&lt;br /&gt;
To request help with [https://galaxy.beocat.ksu.edu/ https://galaxy.beocat.ksu.edu/], please contact &amp;lt;B&amp;gt;beocathelp@ksu.edu&amp;lt;/B&amp;gt;, or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket].&amp;lt;br&amp;gt;&lt;br /&gt;
When requesting help, it is best to give as much information as possible so that we may solve your issue in a timely manner.&lt;br /&gt;
&lt;br /&gt;
== Acknowledgements ==&lt;br /&gt;
Beocat's installation of UseGalaxy is funded through K-INBRE with an Institutional Development Award (IDeA) from the National Institute of General Medical Sciences of the National Institutes of Health under grant number P20GM103418. &lt;br /&gt;
&lt;br /&gt;
This initiative was started through the Data Science Core group to bring easy-to-use, GUI-based computational biology research to students and researchers at Kansas State University through Beocat.&lt;br /&gt;
&lt;br /&gt;
Additional information on K-INBRE can be found [https://www.k-inbre.org/pages/k-inbre_about_bio-core.html here]&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=RSICC&amp;diff=1118</id>
		<title>RSICC</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=RSICC&amp;diff=1118"/>
		<updated>2025-06-23T20:11:45Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* RSICC Codes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== RSICC Codes ==&lt;br /&gt;
RSICC requires administrators of the system to be licensed for every code (and version) a user wishes to run on the system.&lt;br /&gt;
&lt;br /&gt;
Let us know which RSICC software you'd like to run on Beocat. We will have to become licensed for it before it can be put on the system.&lt;br /&gt;
&lt;br /&gt;
If it is an RSICC software that we've already obtained, you *must* provide proof of a license by providing us with an electronic copy of the email message from RSICC’s Request History tool or a copy of the user’s License Agreement.  The email generated by RSICC’s Request History tool lists all of the software the individual has a license to use.  The Request History link is available on RSICC’s Customer Service webpage: https://rsicc.ornl.gov/CustomerService.aspx. Send those e-mails to beocathelp@ksu.edu or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 TDX Ticket].&lt;/div&gt;
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=RSICC&amp;diff=1117</id>
		<title>RSICC</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=RSICC&amp;diff=1117"/>
		<updated>2025-06-23T20:11:34Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* RSICC Codes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== RSICC Codes ==&lt;br /&gt;
RSICC requires administrators of the system to be licensed for every code (and version) a user wishes to run on the system.&lt;br /&gt;
&lt;br /&gt;
Let us know which RSICC software you'd like to run on Beocat. We will have to become licensed for it before it can be put on the system.&lt;br /&gt;
&lt;br /&gt;
If it is an RSICC software that we've already obtained, you *must* provide proof of a license by providing us with an electronic copy of the email message from RSICC’s Request History tool or a copy of the user’s License Agreement.  The email generated by RSICC’s Request History tool lists all of the software the individual has a license to use.  The Request History link is available on RSICC’s Customer Service webpage: https://rsicc.ornl.gov/CustomerService.aspx. Send those e-mails to beocathelp@ksu.edu or contact Beocat staff through a [https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 | TDX Ticket].&lt;/div&gt;
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=RSICC&amp;diff=1116</id>
		<title>RSICC</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=RSICC&amp;diff=1116"/>
		<updated>2025-06-23T20:11:12Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* RSICC Codes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== RSICC Codes ==&lt;br /&gt;
RSICC requires administrators of the system to be licensed for every code (and version) a user wishes to run on the system.&lt;br /&gt;
&lt;br /&gt;
Let us know which RSICC software you'd like to run on Beocat. We will have to become licensed for it before it can be put on the system.&lt;br /&gt;
&lt;br /&gt;
If it is an RSICC software that we've already obtained, you *must* provide proof of a license by providing us with an electronic copy of the email message from RSICC’s Request History tool or a copy of the user’s License Agreement.  The email generated by RSICC’s Request History tool lists all of the software the individual has a license to use.  The Request History link is available on RSICC’s Customer Service webpage: https://rsicc.ornl.gov/CustomerService.aspx. Send those e-mails to beocathelp@ksu.edu or contact Beocat staff through a TDX Ticket [[https://support.ksu.edu/TDClient/30/Portal/Requests/ServiceDet?ID=44 | TDXSupport]]&lt;/div&gt;
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1115</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1115"/>
		<updated>2025-06-17T15:36:55Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Transferring data to Beocat */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and his or her collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat is actually composed of several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people, is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of RHEL Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
* We provide a short description of Beocat for use in a proposal or in teaching here: [[ProposalDescription|Beocat Info]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering options for fine-tuning your jobs. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
=== Online Documentation ===&lt;br /&gt;
&lt;br /&gt;
* Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/]&lt;br /&gt;
* Read about  [[Installed software]] and languages&lt;br /&gt;
* Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] and download the [[Media:Slurm-quick-reference.pdf|Slurm Quick Reference PDF]]&lt;br /&gt;
* Run interactive jobs with [[OpenOnDemand]]&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]]&lt;br /&gt;
* Big Data course on Beocat! [[BigDataOnBeocat]]&lt;br /&gt;
* Interested in web-based computational biology research? Check out [[GalaxyDocs|Galaxy!]]&lt;br /&gt;
* Looking to utilize the NRP (Nautilus cluster) namespace? Check out [[Nautilus|Nautilus on Beocat]]&lt;br /&gt;
&lt;br /&gt;
=== Training Videos and Slides ===&lt;br /&gt;
&lt;br /&gt;
* [https://www.youtube.com/watch?v=7NOB_HGQE0U Beocat Introduction] and [[Media:Beocat-Beoshock-Intro.pdf|slides]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=b_yawpwFRdk Linux and Bash Introduction] and [[Media:Linux-Intro-cheatsheet.pdf|Linux Quick Reference PDF]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=vcC-DURbH6c Advanced HPC Usage] and [[Media:HPC-Advanced-Usage.pdf|slides]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=inJbYdZacjs HPC Parallel Computing] and [[Media:HPC-Parallel-Computing.pdf|slides]]&lt;br /&gt;
&lt;br /&gt;
== Transferring data to Beocat ==&lt;br /&gt;
Transferring data to Beocat can be done in a variety of ways; we offer documentation on a few of them:&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;b&amp;gt;With the recent changes to how K-State handles DUO authentication, we recommend you use Globus to transfer files in and out of Beocat.&amp;lt;/b&amp;gt;&lt;br /&gt;
* [[Globus]] - Instructions on transferring files using [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''.&lt;br /&gt;
* [[LinuxBasics]] - Under the 'Transferring files (SCP or SFTP)' section, we have information regarding SCP and SFTP implementation.&lt;br /&gt;
* [[OpenOnDemand]] - We offer GUI-based file management through OpenOnDemand.&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] - We also offer the ability to transfer data to and from OneDrive.&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which will use the information in that job script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any details pertinent to your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session as mentioned in our calendar down below. Alternatively, we can often schedule a time to meet with you individually. You just need to send us an e-mail and provide us with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you email us at beocat@cs.ksu.edu please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the future, Beocat will be moving towards the K-State Central IT TDX ticketing system. Please expect a change in how we handle user support in Summer/Fall of 2025.&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributions to Beocat will prioritize your access to Beocat. In general, users contribute nodes to Beocat (aka the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority for the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will automatically halt and reschedule those jobs immediately to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources can allow users to run jobs if they are not able to get enough&lt;br /&gt;
access on Beocat, but they are especially useful for when we don't have the needed&lt;br /&gt;
resources on Beocat like access to 4 TB nodes on Bridges2, or more 64-bit&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[ACCESS|here]] to see what resources &lt;br /&gt;
we have access to and to find directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems that runs outside OSG jobs when our users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1114</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1114"/>
		<updated>2025-06-17T15:36:24Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and their collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster-computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of RHEL Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
* We provide a short description of Beocat for use in a proposal or in teaching here: [[ProposalDescription|Beocat Info]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to log in.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
=== Online Documentation ===&lt;br /&gt;
&lt;br /&gt;
* Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/]&lt;br /&gt;
* Read about  [[Installed software]] and languages&lt;br /&gt;
* Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] and download the [[Media:Slurm-quick-reference.pdf|Slurm Quick Reference PDF]]&lt;br /&gt;
* Run interactive jobs with [[OpenOnDemand]]&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]]&lt;br /&gt;
* Big Data course on Beocat! [[BigDataOnBeocat]]&lt;br /&gt;
* Interested in web-based computational biology research? Check out [[GalaxyDocs|Galaxy!]]&lt;br /&gt;
* Looking to utilize the NRP (Nautilus cluster) namespace? Check out [[Nautilus|Nautilus on Beocat]]&lt;br /&gt;
&lt;br /&gt;
=== Training Videos and Slides ===&lt;br /&gt;
&lt;br /&gt;
* [https://www.youtube.com/watch?v=7NOB_HGQE0U Beocat Introduction] and [[Media:Beocat-Beoshock-Intro.pdf|slides]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=b_yawpwFRdk Linux and Bash Introduction] and [[Media:Linux-Intro-cheatsheet.pdf|Linux Quick Reference PDF]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=vcC-DURbH6c Advanced HPC Usage] and [[Media:HPC-Advanced-Usage.pdf|slides]]&lt;br /&gt;
* [https://www.youtube.com/watch?v=inJbYdZacjs HPC Parallel Computing] and [[Media:HPC-Parallel-Computing.pdf|slides]]&lt;br /&gt;
&lt;br /&gt;
== Transferring data to Beocat ==&lt;br /&gt;
Transferring data to Beocat can be done in a variety of ways; we offer documentation on a few of them:&lt;br /&gt;
&amp;lt;b&amp;gt;With the recent changes to how K-State handles DUO authentication, we recommend you use Globus to transfer files in and out of Beocat&amp;lt;/b&amp;gt;&lt;br /&gt;
* [[Globus]] - Instructions on transferring files using [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''.&lt;br /&gt;
* [[LinuxBasics]] - Under the 'Transferring files (SCP or SFTP)' section, we have information regarding SCP and SFTP implementation.&lt;br /&gt;
* [[OpenOnDemand]] - We offer GUI-based file management through OpenOnDemand&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] - We also offer the ability to transfer data to and from OneDrive&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which uses the information in that script to allocate the resources your job needs and then starts the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
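As a sketch, a minimal job script might look like the following (the script name, resource values, and echo line are illustrative examples, not a Beocat-specific recipe):&lt;br /&gt;

```shell
#!/bin/bash
# hello.sh -- minimal illustrative Slurm job script; all values are examples only
#SBATCH --job-name=hello
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=1G
#SBATCH --time=00:05:00

# The actual work: report which compute node the job ran on
echo "Hello from $(hostname)"
```

You would submit this with &amp;lt;B&amp;gt;sbatch hello.sh&amp;lt;/B&amp;gt; and cancel it with &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; followed by the job ID.&lt;br /&gt;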
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started, our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details of your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session as mentioned in our calendar down below. Alternatively, we can often schedule a time to meet with you individually. You just need to send us an e-mail and provide us with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you email us at beocat@cs.ksu.edu, please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output of 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the future, Beocat will be moving to the K-State Central IT TDX ticketing system. Please expect a change in how we handle user support in Summer/Fall of 2025.&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributions to Beocat will prioritize your access to Beocat. In general, users contribute nodes to Beocat (aka the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority for the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will automatically halt and reschedule those jobs immediately to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, and each user can also apply for their own allocation if needed.&lt;br /&gt;
These resources let users run jobs when they cannot get enough time&lt;br /&gt;
on Beocat, and they are especially useful when Beocat lacks a needed&lt;br /&gt;
resource, such as the 4 TB nodes on Bridges2, additional 64-bit&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[ACCESS|here]] to see which resources &lt;br /&gt;
are available and to find directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid (OSG).&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe, using spare compute cycles donated to the project.  Beocat is&lt;br /&gt;
one of those systems, running outside OSG jobs when our users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=GalaxyDocs&amp;diff=1088</id>
		<title>GalaxyDocs</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=GalaxyDocs&amp;diff=1088"/>
		<updated>2025-04-14T17:22:21Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* How do I access Galaxy? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Galaxy? ==&lt;br /&gt;
[https://galaxyproject.org/ Galaxy] is a scientific workflow, data integration, and data and analysis persistence and publishing platform that aims to make computational biology accessible to research scientists who do not have computer programming or systems administration experience.&lt;br /&gt;
&lt;br /&gt;
== How do I access Galaxy? == &lt;br /&gt;
Access to Beocat's local instance of Galaxy is easy. Simply navigate to [https://galaxy.beocat.ksu.edu/ https://galaxy.beocat.ksu.edu/] and sign in if prompted using the Keycloak login. This uses your Beocat eID and password to log in; you will also be prompted to authenticate with DUO.&lt;br /&gt;
&lt;br /&gt;
Please note that this utilizes Beocat's /bulk directory. This is a billed directory, and as such, if usage in your respective upload directory for Galaxy exceeds the billing threshold (1 TB), we will contact you regarding remediation.&lt;br /&gt;
&lt;br /&gt;
== Upload Larger Files to Galaxy ==&lt;br /&gt;
Have some larger files that need to be uploaded to Galaxy? We provide some documentation on how to upload files directly to Beocat and then import them to Galaxy for use. It can be found here: [[Galaxy_File_Upload| Galaxy File Upload]]&lt;br /&gt;
&lt;br /&gt;
== How do I use Galaxy? ==&lt;br /&gt;
&lt;br /&gt;
After accessing our local instance of Galaxy you should have access to all installed tools which will be in the tool panel on the left-hand side of your home screen on Galaxy. It should look like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Galaxy Toolbox.png|Tool Panel]]&lt;br /&gt;
&lt;br /&gt;
Each category (Get Data, Send Data, MetaGenomics Tools, etc.) can be expanded to reveal subcategories that help organize the many installed tools. Each tool is placed under its respective category once it is installed.&lt;br /&gt;
&lt;br /&gt;
From here you can select a tool to use and submit it to the Slurm cluster. After opening a tool you will be given options (if available) to configure its inputs, its behavior (within its constraints), its outputs, and the compute resources it will be submitted with. This means that you can specify the resources given to Slurm jobs by expanding the drop-down menu &amp;quot;Job Resource Parameters&amp;quot; and selecting &amp;quot;Specify Job resource parameters&amp;quot;, which will show the following options to modify.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;Processors&amp;lt;/B&amp;gt;: Number of processing cores, the 'ppn' value (1-128); this is equivalent to Slurm's &amp;quot;--cpus-per-task&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;B&amp;gt;Memory&amp;lt;/B&amp;gt;: Memory size in gigabytes, the 'pmem' value (1-1500). Note that the job scheduler uses --mem-per-cpu to allocate memory for your Slurm job. This means the number given for Memory will be multiplied by the Processors count from above; e.g., 2 Processors with 5 Memory yields 10 GB of memory.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;B&amp;gt;Priority Queue&amp;lt;/B&amp;gt;: If you have access to a priority queue, and would like to use it, enter the partition name here.&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;B&amp;gt;Runtime&amp;lt;/B&amp;gt;: How long you want your job to run for in hours. &lt;br /&gt;
&lt;br /&gt;
If you do not change your job resource parameters, your job will be submitted with the defaults: 5 GB of memory, 1 CPU, 1 hour of runtime, and no priority queue.&lt;br /&gt;
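The memory arithmetic described above can be sketched in a few lines of shell (the numbers are just the example values from this page):&lt;br /&gt;

```shell
# Galaxy passes Memory as --mem-per-cpu, so total memory = Memory x Processors.
processors=2      # example 'ppn' value chosen under "Job Resource Parameters"
memory_per_cpu=5  # example 'pmem' value, in gigabytes
total=$((processors * memory_per_cpu))
echo "Total job memory: ${total}GB"   # prints "Total job memory: 10GB"
```

So to give a 4-core job 32 GB in total, you would request 4 Processors and 8 Memory.&lt;br /&gt;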
&lt;br /&gt;
=== Canceling a running job ===&lt;br /&gt;
While Galaxy does submit to Slurm, you will not be able to cancel the job the way you typically would. With Galaxy, to cancel your upcoming or currently running job, simply press the trash-can icon next to the name of your job.&lt;br /&gt;
&lt;br /&gt;
== Tool Requests ==&lt;br /&gt;
If you are missing a specific tool and would like to have it added to Galaxy, please contact &amp;lt;B&amp;gt;beocat@cs.ksu.edu&amp;lt;/B&amp;gt; with a link to the tool. Additionally, you can browse through Galaxy's own [https://toolshed.g2.bx.psu.edu/ toolshed] to make a recommendation.&lt;br /&gt;
&lt;br /&gt;
== Data Management ==&lt;br /&gt;
&lt;br /&gt;
Galaxy follows the typical costs for bulk data storage, as Galaxy utilizes /bulk/galaxy for storage. Bulk data storage may be provided at a cost of $45/TB/year, billed monthly; billing starts at 1 TB of usage. Users can easily see how much data they are using in Galaxy by checking the top right corner of the Galaxy 'home' page. This will say &amp;quot;Using ####MB/GB/TB&amp;quot;, above your histories.&lt;br /&gt;
&lt;br /&gt;
[[File:Galaxy_Data_usage_example.png|Usage Data]]&lt;br /&gt;
&lt;br /&gt;
Clicking on this usage will bring you to a storage dashboard where you can easily manage your files and derelict dataset histories.&lt;br /&gt;
&lt;br /&gt;
[[File:Storage_dashboard.png|Storage Dashboard]]&lt;br /&gt;
&lt;br /&gt;
== Requesting Help ==&lt;br /&gt;
To request help with [https://galaxy.beocat.ksu.edu/ https://galaxy.beocat.ksu.edu/], please contact &amp;lt;B&amp;gt;beocat@cs.ksu.edu&amp;lt;/B&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
When requesting help, it is best to give as much information as possible so that we may solve your issue in a timely manner.&lt;br /&gt;
&lt;br /&gt;
== Acknowledgements ==&lt;br /&gt;
Beocat's installation of UseGalaxy is funded through K-INBRE with an Institutional Development Award (IDeA) from the National Institute of General Medical Sciences of the National Institutes of Health under grant number P20GM103418. &lt;br /&gt;
&lt;br /&gt;
This initiative was started through the Data Science Core group to bring easy to use GUI-based computational biology research to students and researchers at Kansas State University through Beocat.&lt;br /&gt;
&lt;br /&gt;
Additional information on K-INBRE can be found [https://www.k-inbre.org/pages/k-inbre_about_bio-core.html here]&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=GalaxyDocs&amp;diff=1087</id>
		<title>GalaxyDocs</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=GalaxyDocs&amp;diff=1087"/>
		<updated>2025-04-14T16:31:56Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Galaxy? ==&lt;br /&gt;
[https://galaxyproject.org/ Galaxy] is a scientific workflow, data integration, and data and analysis persistence and publishing platform that aims to make computational biology accessible to research scientists who do not have computer programming or systems administration experience.&lt;br /&gt;
&lt;br /&gt;
== How do I access Galaxy? == &lt;br /&gt;
Access to Beocat's local instance of Galaxy is easy. Simply navigate to [https://galaxy.beocat.ksu.edu/ https://galaxy.beocat.ksu.edu/] and sign in if prompted using the Keycloak login. This uses your Beocat eID and password to log in; you will also be prompted to authenticate with DUO.&lt;br /&gt;
&lt;br /&gt;
== Upload Larger Files to Galaxy ==&lt;br /&gt;
Have some larger files that need to be uploaded to Galaxy? We provide some documentation on how to upload files directly to Beocat and then import them to Galaxy for use. It can be found here: [[Galaxy_File_Upload| Galaxy File Upload]]&lt;br /&gt;
&lt;br /&gt;
== How do I use Galaxy? ==&lt;br /&gt;
&lt;br /&gt;
After accessing our local instance of Galaxy you should have access to all installed tools which will be in the tool panel on the left-hand side of your home screen on Galaxy. It should look like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Galaxy Toolbox.png|Tool Panel]]&lt;br /&gt;
&lt;br /&gt;
Each category (Get Data, Send Data, MetaGenomics Tools, etc.) can be expanded to reveal subcategories that help organize the many installed tools. Each tool is placed under its respective category once it is installed.&lt;br /&gt;
&lt;br /&gt;
From here you can select a tool to use and submit it to the Slurm cluster. After opening a tool you will be given options (if available) to configure its inputs, its behavior (within its constraints), its outputs, and the compute resources it will be submitted with. This means that you can specify the resources given to Slurm jobs by expanding the drop-down menu &amp;quot;Job Resource Parameters&amp;quot; and selecting &amp;quot;Specify Job resource parameters&amp;quot;, which will show the following options to modify.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;Processors&amp;lt;/B&amp;gt;: Number of processing cores, the 'ppn' value (1-128); this is equivalent to Slurm's &amp;quot;--cpus-per-task&amp;quot;.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;B&amp;gt;Memory&amp;lt;/B&amp;gt;: Memory size in gigabytes, the 'pmem' value (1-1500). Note that the job scheduler uses --mem-per-cpu to allocate memory for your Slurm job. This means the number given for Memory will be multiplied by the Processors count from above; e.g., 2 Processors with 5 Memory yields 10 GB of memory.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;B&amp;gt;Priority Queue&amp;lt;/B&amp;gt;: If you have access to a priority queue, and would like to use it, enter the partition name here.&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;B&amp;gt;Runtime&amp;lt;/B&amp;gt;: How long you want your job to run for in hours. &lt;br /&gt;
&lt;br /&gt;
If you do not change your job resource parameters, your job will be submitted with the defaults: 5 GB of memory, 1 CPU, 1 hour of runtime, and no priority queue.&lt;br /&gt;
&lt;br /&gt;
=== Canceling a running job ===&lt;br /&gt;
While Galaxy does submit to Slurm, you will not be able to cancel the job the way you typically would. With Galaxy, to cancel your upcoming or currently running job, simply press the trash-can icon next to the name of your job.&lt;br /&gt;
&lt;br /&gt;
== Tool Requests ==&lt;br /&gt;
If you are missing a specific tool and would like to have it added to Galaxy, please contact &amp;lt;B&amp;gt;beocat@cs.ksu.edu&amp;lt;/B&amp;gt; with a link to the tool. Additionally, you can browse through Galaxy's own [https://toolshed.g2.bx.psu.edu/ toolshed] to make a recommendation.&lt;br /&gt;
&lt;br /&gt;
== Data Management ==&lt;br /&gt;
&lt;br /&gt;
Galaxy follows the typical costs for bulk data storage, as Galaxy utilizes /bulk/galaxy for storage. Bulk data storage may be provided at a cost of $45/TB/year, billed monthly; billing starts at 1 TB of usage. Users can easily see how much data they are using in Galaxy by checking the top right corner of the Galaxy 'home' page. This will say &amp;quot;Using ####MB/GB/TB&amp;quot;, above your histories.&lt;br /&gt;
&lt;br /&gt;
[[File:Galaxy_Data_usage_example.png|Usage Data]]&lt;br /&gt;
&lt;br /&gt;
Clicking on this usage will bring you to a storage dashboard where you can easily manage your files and derelict dataset histories.&lt;br /&gt;
&lt;br /&gt;
[[File:Storage_dashboard.png|Storage Dashboard]]&lt;br /&gt;
&lt;br /&gt;
== Requesting Help ==&lt;br /&gt;
To request help with [https://galaxy.beocat.ksu.edu/ https://galaxy.beocat.ksu.edu/], please contact &amp;lt;B&amp;gt;beocat@cs.ksu.edu&amp;lt;/B&amp;gt;.&amp;lt;br&amp;gt;&lt;br /&gt;
When requesting help, it is best to give as much information as possible so that we may solve your issue in a timely manner.&lt;br /&gt;
&lt;br /&gt;
== Acknowledgements ==&lt;br /&gt;
Beocat's installation of UseGalaxy is funded through K-INBRE with an Institutional Development Award (IDeA) from the National Institute of General Medical Sciences of the National Institutes of Health under grant number P20GM103418. &lt;br /&gt;
&lt;br /&gt;
This initiative was started through the Data Science Core group to bring easy to use GUI-based computational biology research to students and researchers at Kansas State University through Beocat.&lt;br /&gt;
&lt;br /&gt;
Additional information on K-INBRE can be found [https://www.k-inbre.org/pages/k-inbre_about_bio-core.html here]&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Galaxy_File_Upload&amp;diff=1086</id>
		<title>Galaxy File Upload</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Galaxy_File_Upload&amp;diff=1086"/>
		<updated>2025-04-14T16:30:26Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Upload Files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Large File Uploads to Galaxy =&lt;br /&gt;
The Galaxy web UI is sometimes unreliable with large uploads. As such, we have made user-directory imports available on Galaxy. You can upload files to /bulk/galaxy/user-space/ by locating the user space that was created for you on first login. The folder is usually named &amp;quot;eid@ksu.edu&amp;quot; or, if you are from another college (VetMed, for instance), &amp;quot;eid@vet.k-state.edu&amp;quot;. You can use the command &amp;quot;ls /bulk/galaxy/user-space&amp;quot; to find the name of your directory. &lt;br /&gt;
&lt;br /&gt;
From here, we have written some instructions on how to utilize this upload method.&lt;br /&gt;
== Upload Files ==&lt;br /&gt;
First, you need to transfer the data onto Beocat; this can be done in any number of ways. We have some documentation on uploading data into Beocat (for large files, we suggest using Globus, scp, or an FTP program), which can be found here: https://support.beocat.ksu.edu/Docs/Main_Page#Transferring_data_to_Beocat&lt;br /&gt;
&lt;br /&gt;
Next, due to how Galaxy handles data exposure to the web UI, we need to move this data to a userspace that was created for you in Galaxy on your first login:&lt;br /&gt;
&lt;br /&gt;
Move the data to the following directory (you should be able to move data here; if you are unable to, please let us know at beocat@cs.ksu.edu). Note that the backslash is necessary to escape the @ symbol in your email address: &lt;br /&gt;
/bulk/galaxy/user-space/eid\@vet.k-state.edu/&lt;br /&gt;
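For example, the escaped path expands to the plain directory name, which you can check from the shell ('myeid' here is a placeholder for your actual eID):&lt;br /&gt;

```shell
# "myeid" is a placeholder -- substitute your own eID and domain.
# The backslash escapes the @ symbol; the result is the literal directory path.
dest=/bulk/galaxy/user-space/myeid\@vet.k-state.edu/
echo "$dest"   # prints "/bulk/galaxy/user-space/myeid@vet.k-state.edu/"
```

From there, a plain 'mv' or 'cp' of your data file into that directory is all that is needed.&lt;br /&gt;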
&lt;br /&gt;
Next, login to galaxy.beocat.ksu.edu with your eID through KeyCloak. &lt;br /&gt;
&lt;br /&gt;
Then, navigate to the &amp;quot;Shared Data&amp;quot; tab at the top of the page; in its drop-down menu, select &amp;quot;Data Libraries&amp;quot;. This should take you to a relatively empty page that lets you create a library with the &amp;quot;+ Library&amp;quot; button in the top left of the web page. For reference, I have included a screenshot of my Data Libraries: &lt;br /&gt;
&lt;br /&gt;
[[File:Data library creation.png|CreatingDataLibrary]]&lt;br /&gt;
&lt;br /&gt;
Create a new Library with any name, and then open it by clicking on its name. In this case, from the photo above, you could click on &amp;quot;Test2&amp;quot; or &amp;quot;Upload&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
This takes you to a page that will let you manage the data inside of that library. It should look like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Library explanation.png|ExplainingDataLibrary]]&lt;br /&gt;
&lt;br /&gt;
From here you can either add a folder to help manage your data, upload data to the library, or add the current data in the library to your History so that it can be accessed. We are going to upload data to the library. Clicking &amp;quot;+ Datasets&amp;quot; will expand a drop-down menu; from here, select &amp;quot;from User Directory&amp;quot;. This will allow you to import into Galaxy any data from the /bulk directory we moved your data to earlier on Beocat. That process looks something like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Import files.png|ImportToDataLibrary]]&lt;br /&gt;
&lt;br /&gt;
In this, I just uploaded two empty text files. From here, Galaxy has to do some auto-magic to get the data to work, updating the &amp;quot;state&amp;quot; of the data. I am not exactly sure what this involves or how long it might take, but I would imagine not long. &lt;br /&gt;
&lt;br /&gt;
From here we finally need to publish the data to our history so that we can utilize it. I am going to import this data as a Dataset, though if a Collection suits better, use that. In the photo below, I am uploading &amp;quot;iamanothertextfile.txt&amp;quot; to my new history called &amp;quot;IAmANewHistory&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Once that is added, a notification should pop up in the bottom right-hand corner of the browser. If not, navigate to the homepage and manually open the History your data was added to. From here you should be able to process your data like normal.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Galaxy_File_Upload&amp;diff=1085</id>
		<title>Galaxy File Upload</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Galaxy_File_Upload&amp;diff=1085"/>
		<updated>2025-04-14T16:28:35Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Large File Uploads to Galaxy =&lt;br /&gt;
The Galaxy web UI is sometimes unreliable with large uploads. As such, we have made user-directory imports available on Galaxy. You can upload files to /bulk/galaxy/user-space/ by locating the user space that was created for you on first login. The folder is usually named &amp;quot;eid@ksu.edu&amp;quot; or, if you are from another college (VetMed, for instance), &amp;quot;eid@vet.k-state.edu&amp;quot;. You can use the command &amp;quot;ls /bulk/galaxy/user-space&amp;quot; to find the name of your directory. &lt;br /&gt;
&lt;br /&gt;
From here, we have written some instructions on how to utilize this upload method.&lt;br /&gt;
== Upload Files ==&lt;br /&gt;
First, you need to transfer the data onto Beocat; this can be done in any number of ways. We have some documentation on uploading data into Beocat (for large files, we suggest using Globus, scp, or an FTP program), which can be found here: https://support.beocat.ksu.edu/Docs/Main_Page#Transferring_data_to_Beocat&lt;br /&gt;
&lt;br /&gt;
Next, due to how Galaxy handles data exposure to the web UI, we need to move this data to a userspace that was created for you in Galaxy on your first login:&lt;br /&gt;
&lt;br /&gt;
Move the data to the following directory (you should be able to move data here; if you are unable to, please let me know). Note that the backslash is necessary to escape the @ symbol in your email address: &lt;br /&gt;
/bulk/galaxy/user-space/sghimire1\@vet.k-state.edu/&lt;br /&gt;
&lt;br /&gt;
Next, login to galaxy.beocat.ksu.edu with your eID through KeyCloak. &lt;br /&gt;
&lt;br /&gt;
Then, navigate to the &amp;quot;Shared Data&amp;quot; tab at the top of the page; in its drop-down menu, select &amp;quot;Data Libraries&amp;quot;. This should take you to a relatively empty page that lets you create a library with the &amp;quot;+ Library&amp;quot; button in the top left of the web page. For reference, I have included a screenshot of my Data Libraries: &lt;br /&gt;
&lt;br /&gt;
[[File:Data library creation.png|CreatingDataLibrary]]&lt;br /&gt;
&lt;br /&gt;
Create a new Library with any name, and then open it by clicking on its name. In this case, from the photo above, you could click on &amp;quot;Test2&amp;quot; or &amp;quot;Upload&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
This takes you to a page that will let you manage the data inside of that library. It should look like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Library explanation.png|ExplainingDataLibrary]]&lt;br /&gt;
&lt;br /&gt;
From here you can either add a folder to help manage your data, upload data to the library, or add the current data in the library to your History so that it can be accessed. We are going to upload data to the library. Clicking &amp;quot;+ Datasets&amp;quot; expands a drop-down menu; from there, select &amp;quot;from User Directory&amp;quot;. This will allow you to upload any data to Galaxy from the /bulk directory we moved your data to earlier on Beocat. That process looks something like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Import files.png|ImportToDataLibrary]]&lt;br /&gt;
&lt;br /&gt;
In this example, I just uploaded two empty text files. From here, Galaxy does some automatic processing to set the &amp;quot;state&amp;quot; of the data so it can be used inside Galaxy. I am not exactly sure what this involves or how long it might take, but I would imagine not long. &lt;br /&gt;
&lt;br /&gt;
From here we finally need to publish the data to our History so that we can utilize it. I am going to import this data as a Dataset, though if a collection suits your data better, use that. In the photo below, I am uploading &amp;quot;iamanothertextfile.txt&amp;quot; to my new History called &amp;quot;IAmANewHistory&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Once that is added, a little notification should pop up in the bottom right-hand corner of the browser. If not, navigate to the homepage and manually open the History your data was added to. From here you should be able to process your data as normal.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Galaxy_File_Upload&amp;diff=1084</id>
		<title>Galaxy File Upload</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Galaxy_File_Upload&amp;diff=1084"/>
		<updated>2025-04-14T16:27:39Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Upload Files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Large File Uploads to Galaxy =&lt;br /&gt;
The Galaxy web UI is sometimes inconsistent with large uploads. As such, we have made user directory imports available on Galaxy. You can upload files to /bulk/galaxy/user-space/ by locating the user space that was created for you on first login. The folder is usually named &amp;quot;eid@ksu.edu&amp;quot; or, if you are from another college (VetMed, for instance), &amp;quot;eid@vet.k-state.edu&amp;quot;. You can use the command &amp;quot;ls /bulk/galaxy/user-space&amp;quot; to find the name of your directory. &lt;br /&gt;
&lt;br /&gt;
From here, we have written some instructions on how to utilize this upload method.&lt;br /&gt;
== Upload Files ==&lt;br /&gt;
First, you need to transfer the data onto Beocat; this can be done in any number of ways. We have documentation on uploading data into Beocat that can be found here (for large files, we suggest using Globus, scp, or an FTP program): https://support.beocat.ksu.edu/Docs/Main_Page#Transferring_data_to_Beocat&lt;br /&gt;
&lt;br /&gt;
Next, due to how Galaxy handles data exposure to the web UI, we need to move this data to a userspace that was created for you in Galaxy on your first login:&lt;br /&gt;
&lt;br /&gt;
Move the data to the following directory (you should be able to move data here; if you are unable to, please let us know). Note that the backslash is needed to escape the @ symbol from your email address: &lt;br /&gt;
/bulk/galaxy/user-space/eid\@ksu.edu/&lt;br /&gt;
&lt;br /&gt;
Next, login to galaxy.beocat.ksu.edu with your eID through KeyCloak. &lt;br /&gt;
&lt;br /&gt;
Then navigate to the &amp;quot;Shared Data&amp;quot; tab at the top of the page and, in the drop-down menu, select &amp;quot;Data Libraries&amp;quot;. This should take you to a relatively empty page that lets you create a library with the &amp;quot;+ Library&amp;quot; button in the top left of the web page. For reference, I have included a screenshot of my Data Libraries: &lt;br /&gt;
&lt;br /&gt;
[[File:Data library creation.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
Create a new Library with any name, and then open it by clicking on its name. In this case, from the photo above, you could click on &amp;quot;Test2&amp;quot; or &amp;quot;Upload&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
This takes you to a page that will let you manage the data inside of that library. It should look like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Library explanation.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
From here you can either add a folder to help manage your data, upload data to the library, or add the current data in the library to your History so that it can be accessed. We are going to upload data to the library. Clicking &amp;quot;+ Datasets&amp;quot; expands a drop-down menu; from there, select &amp;quot;from User Directory&amp;quot;. This will allow you to upload any data to Galaxy from the /bulk directory we moved your data to earlier on Beocat. That process looks something like this: &lt;br /&gt;
&lt;br /&gt;
[[File:Import files.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
In this example, I just uploaded two empty text files. From here, Galaxy does some automatic processing to set the &amp;quot;state&amp;quot; of the data so it can be used inside Galaxy. I am not exactly sure what this involves or how long it might take, but I would imagine not long. &lt;br /&gt;
&lt;br /&gt;
From here we finally need to publish the data to our History so that we can utilize it. I am going to import this data as a Dataset, though if a collection suits your data better, use that. In the photo below, I am uploading &amp;quot;iamanothertextfile.txt&amp;quot; to my new History called &amp;quot;IAmANewHistory&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Once that is added, a little notification should pop up in the bottom right-hand corner of the browser. If not, navigate to the homepage and manually open the History your data was added to. From here you should be able to process your data as normal.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Galaxy_File_Upload&amp;diff=1083</id>
		<title>Galaxy File Upload</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Galaxy_File_Upload&amp;diff=1083"/>
		<updated>2025-04-14T16:27:00Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: Created page with &amp;quot;= Large File Uploads to Galaxy = The Galaxy web UI is sometimes inconsistent with large downloads. As such, we have made user directory imports available on Galaxy. You can upload files to /bulk/galaxy/user-space/, and locating the user space that was created for you on first login. The folder is usually named &amp;quot;eid@ksu.edu&amp;quot; or if you are from another college (VetMed for instance) &amp;quot;eid@vet.k-state.edu&amp;quot;. You can use the command &amp;quot;ls /bulk/galaxy/user-space&amp;quot; to find the name...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Large File Uploads to Galaxy =&lt;br /&gt;
The Galaxy web UI is sometimes inconsistent with large uploads. As such, we have made user directory imports available on Galaxy. You can upload files to /bulk/galaxy/user-space/ by locating the user space that was created for you on first login. The folder is usually named &amp;quot;eid@ksu.edu&amp;quot; or, if you are from another college (VetMed, for instance), &amp;quot;eid@vet.k-state.edu&amp;quot;. You can use the command &amp;quot;ls /bulk/galaxy/user-space&amp;quot; to find the name of your directory. &lt;br /&gt;
&lt;br /&gt;
From here, we have written some instructions on how to utilize this upload method.&lt;br /&gt;
== Upload Files ==&lt;br /&gt;
First, you need to transfer the data onto Beocat; this can be done in any number of ways. We have documentation on uploading data into Beocat that can be found here (for large files, we suggest using Globus, scp, or an FTP program): https://support.beocat.ksu.edu/Docs/Main_Page#Transferring_data_to_Beocat&lt;br /&gt;
&lt;br /&gt;
Next, due to how Galaxy handles data exposure to the web UI, we need to move this data to a userspace that was created for you in Galaxy on your first login:&lt;br /&gt;
&lt;br /&gt;
Move the data to the following directory (you should be able to move data here; if you are unable to, please let us know). Note that the backslash is needed to escape the @ symbol from your email address: &lt;br /&gt;
/bulk/galaxy/user-space/eid\@ksu.edu/&lt;br /&gt;
&lt;br /&gt;
Next, login to galaxy.beocat.ksu.edu with your eID through KeyCloak. &lt;br /&gt;
&lt;br /&gt;
Then navigate to the &amp;quot;Shared Data&amp;quot; tab at the top of the page and, in the drop-down menu, select &amp;quot;Data Libraries&amp;quot;. This should take you to a relatively empty page that lets you create a library with the &amp;quot;+ Library&amp;quot; button in the top left of the web page. For reference, I have included a screenshot of my Data Libraries: &lt;br /&gt;
[[File:Data library creation.png|thumb]]&lt;br /&gt;
Create a new Library with any name, and then open it by clicking on its name. In this case, from the photo above, you could click on &amp;quot;Test2&amp;quot; or &amp;quot;Upload&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
This takes you to a page that will let you manage the data inside of that library. It should look like this: &lt;br /&gt;
[[File:Library explanation.png|thumb]]&lt;br /&gt;
From here you can either add a folder to help manage your data, upload data to the library, or add the current data in the library to your History so that it can be accessed. We are going to upload data to the library. Clicking &amp;quot;+ Datasets&amp;quot; expands a drop-down menu; from there, select &amp;quot;from User Directory&amp;quot;. This will allow you to upload any data to Galaxy from the /bulk directory we moved your data to earlier on Beocat. That process looks something like this: &lt;br /&gt;
[[File:Import files.png|thumb]]&lt;br /&gt;
In this example, I just uploaded two empty text files. From here, Galaxy does some automatic processing to set the &amp;quot;state&amp;quot; of the data so it can be used inside Galaxy. I am not exactly sure what this involves or how long it might take, but I would imagine not long. &lt;br /&gt;
&lt;br /&gt;
From here we finally need to publish the data to our History so that we can utilize it. I am going to import this data as a Dataset, though if a collection suits your data better, use that. In the photo below, I am uploading &amp;quot;iamanothertextfile.txt&amp;quot; to my new History called &amp;quot;IAmANewHistory&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Once that is added, a little notification should pop up in the bottom right-hand corner of the browser. If not, navigate to the homepage and manually open the History your data was added to. From here you should be able to process your data as normal.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=File:Import_files.png&amp;diff=1082</id>
		<title>File:Import files.png</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=File:Import_files.png&amp;diff=1082"/>
		<updated>2025-04-14T16:26:53Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;import files from Beocat via /bulk&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=File:Library_explanation.png&amp;diff=1081</id>
		<title>File:Library explanation.png</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=File:Library_explanation.png&amp;diff=1081"/>
		<updated>2025-04-14T16:26:25Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Explanation of what to do when inside of Data Library.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=File:Data_library_creation.png&amp;diff=1080</id>
		<title>File:Data library creation.png</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=File:Data_library_creation.png&amp;diff=1080"/>
		<updated>2025-04-14T16:25:54Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;creation of Data Library on Galaxy&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1079</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1079"/>
		<updated>2025-04-10T18:46:56Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* How do I get help? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and their collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of RHEL Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
* We provide a short description of Beocat for use in a proposal or in teaching here: [[ProposalDescription|Beocat Info]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to log in.&lt;br /&gt;
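The access described above boils down to two commands, shown here as a sketch. "eid" is a placeholder for your K-State eID and results.tar.gz is a hypothetical file name; the sketch only prints the commands rather than opening a real connection.

```shell
# Connection commands for Beocat (sketch; run from your own machine).
# "eid" and "results.tar.gz" are placeholders.
ssh_cmd='ssh eid@headnode.beocat.ksu.edu'
scp_cmd='scp results.tar.gz eid@headnode.beocat.ksu.edu:~/'
printf '%s\n%s\n' "$ssh_cmd" "$scp_cmd"
```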
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page that covers fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
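As a concrete starting point, a minimal Slurm job script looks like the sketch below. The job name and resource values are placeholder assumptions; see [[SlurmBasics]] for the options Beocat expects.

```shell
# Write a minimal Slurm batch script; the resource values are examples only.
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello        # a name to identify the job
#SBATCH --time=00:05:00         # wall-clock limit (HH:MM:SS)
#SBATCH --mem=1G                # memory request
#SBATCH --cpus-per-task=1       # CPU cores for the task
echo "Hello from $(hostname)"
EOF

# On Beocat, you would then submit and monitor it with:
#   sbatch myjob.sh
#   squeue --me
```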
&lt;br /&gt;
==== Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] ====&lt;br /&gt;
==== Read about  [[Installed software]] and languages ====&lt;br /&gt;
==== Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] ====&lt;br /&gt;
==== Run Interactive Jobs! [[OpenOnDemand]] ====&lt;br /&gt;
==== [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] ====&lt;br /&gt;
&lt;br /&gt;
==== Big Data course on Beocat! [[BigDataOnBeocat]] ====&lt;br /&gt;
==== Interested in Web-Based computational biology research? Check out [[GalaxyDocs|Galaxy!]] ====&lt;br /&gt;
==== Looking to utilize the NRP (Nautilus cluster) namespace? Check out [[Nautilus|Nautilus on Beocat]] ====&lt;br /&gt;
&lt;br /&gt;
== Transferring data to Beocat ==&lt;br /&gt;
Transferring data to Beocat can be done in a variety of ways; we offer documentation on a few of them:&lt;br /&gt;
* [[LinuxBasics]] - Under the 'Transferring files (SCP or SFTP)' section, we have information regarding SCP and SFTP implementation.&lt;br /&gt;
* [[Globus]] - Instructions on transferring files using [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''.&lt;br /&gt;
* [[OpenOnDemand]] - We offer GUI-based file management through OpenOnDemand&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] - We also offer the ability to transfer data to and from OneDrive&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which will use the information in that job script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details of your problem (e.g., job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session as mentioned in our calendar down below. Alternatively, we can often schedule a time to meet with you individually. You just need to send us an e-mail and provide us with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you email us at beocat@cs.ksu.edu please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the future, Beocat will be moving to the K-State Central IT TDX ticketing system. Please expect a change in how we handle user support in Summer/Fall 2025.&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributions to Beocat will prioritize your access to Beocat. In general, users contribute nodes to Beocat (aka the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority for the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will automatically halt and reschedule those jobs immediately to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources can allow users to run jobs if they are not able to get enough&lt;br /&gt;
access on Beocat, but they are especially useful for when we don't have the needed&lt;br /&gt;
resources on Beocat like access to 4 TB nodes on Bridges2, or more 64-bit&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[ACCESS|here]] to see what resources &lt;br /&gt;
we have access to and to get access to some directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems that runs outside OSG jobs when our users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]]&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]]&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1078</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1078"/>
		<updated>2025-04-10T18:46:41Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* How do I get help? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and their collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of RHEL Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
* We provide a short description of Beocat for use in a proposal or in teaching here: [[ProposalDescription|Beocat Info]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to log in.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page that covers fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
==== Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] ====&lt;br /&gt;
==== Read about  [[Installed software]] and languages ====&lt;br /&gt;
==== Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] ====&lt;br /&gt;
==== Run Interactive Jobs! [[OpenOnDemand]] ====&lt;br /&gt;
==== [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] ====&lt;br /&gt;
&lt;br /&gt;
==== Big Data course on Beocat! [[BigDataOnBeocat]] ====&lt;br /&gt;
==== Interested in Web-Based computational biology research? Check out [[GalaxyDocs|Galaxy!]] ====&lt;br /&gt;
==== Looking to utilize the NRP (Nautilus cluster) namespace? Check out [[Nautilus|Nautilus on Beocat]] ====&lt;br /&gt;
&lt;br /&gt;
== Transferring data to Beocat ==&lt;br /&gt;
Transferring data to Beocat can be done in a variety of ways; we offer documentation on a few of them:&lt;br /&gt;
* [[LinuxBasics]] - Under the 'Transferring files (SCP or SFTP)' section, we have information regarding SCP and SFTP implementation.&lt;br /&gt;
* [[Globus]] - Instructions on transferring files using [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''.&lt;br /&gt;
* [[OpenOnDemand]] - We offer GUI-based file management through OpenOnDemand&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] - We also offer the ability to transfer data to and from OneDrive&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which will use the information in that job script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details of your problem (e.g., job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session as mentioned in our calendar down below. Alternatively, we can often schedule a time to meet with you individually. You just need to send us an e-mail and provide us with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you email us at beocat@cs.ksu.edu please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output of 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the future, Beocat will be moving to the K-State Central IT TDX ticketing system. Please expect a change in how we handle user support in summer/fall of 2025.&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access? ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing resources to Beocat earns you priority access. In general, users contribute nodes to Beocat (the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority on the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will immediately halt and reschedule those jobs to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources let users run jobs when they cannot get enough time on Beocat,&lt;br /&gt;
but they are especially useful when Beocat lacks the needed resources, such as&lt;br /&gt;
the 4 TB nodes on Bridges2, additional 64-bit GPUs, or Matlab licenses.&lt;br /&gt;
Click [[ACCESS|here]] to see which resources are available and for directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems, running jobs from outside OSG users when our own users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1077</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1077"/>
		<updated>2025-04-10T18:45:53Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* How do I get help? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and his or her collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people, is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of RHEL Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|OpenStack]] cloud-computing infrastructure&lt;br /&gt;
* We also provide a short description of Beocat for use in a proposal or in teaching: [[ProposalDescription|Beocat Info]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to log in.&lt;br /&gt;
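For example, connecting and transferring a file from a Linux or macOS terminal looks like the following sketch. The username ''eid'' and the file name are placeholders; substitute your own K-State eID and files. (These commands require network access to the cluster, so they are shown for illustration only.)

```shell
# Log in to Beocat (replace 'eid' with your K-State eID)
ssh eid@headnode.beocat.ksu.edu

# Copy a local file into your Beocat home directory
scp results.csv eid@headnode.beocat.ksu.edu:~/

# Or start an interactive SFTP session for transfers in both directions
sftp eid@headnode.beocat.ksu.edu
```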
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
==== Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] ====&lt;br /&gt;
==== Read about  [[Installed software]] and languages ====&lt;br /&gt;
==== Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] ====&lt;br /&gt;
==== Run Interactive Jobs! [[OpenOnDemand]] ====&lt;br /&gt;
==== [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] ====&lt;br /&gt;
&lt;br /&gt;
==== Big Data course on Beocat! [[BigDataOnBeocat]] ====&lt;br /&gt;
==== Interested in Web-Based computational biology research? Check out [[GalaxyDocs|Galaxy!]] ====&lt;br /&gt;
==== Looking to utilize the NRP (Nautilus cluster) namespace? Check out [[Nautilus|Nautilus on Beocat]] ====&lt;br /&gt;
&lt;br /&gt;
== Transferring data to Beocat ==&lt;br /&gt;
Transferring data to Beocat can be done in a variety of ways; we offer documentation on a few of them:&lt;br /&gt;
* [[LinuxBasics]] - Under the 'Transferring files (SCP or SFTP)' section, we have information regarding SCP and SFTP implementation.&lt;br /&gt;
* [[Globus]] - Instructions on transferring files using [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''.&lt;br /&gt;
* [[OpenOnDemand]] - We offer GUI-based file management through OpenOnDemand&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] - We also offer the ability to transfer data to and from OneDrive&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which uses the information in that script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link, for OpenMPI, also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
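The submission workflow described above can be sketched as a minimal job script. The script name, resource values, and the commented-out module name are placeholders for illustration; see [[SlurmBasics]] for the full set of options.

```shell
#!/bin/bash
# myjob.sh -- minimal example Slurm job script (placeholder values)
#SBATCH --job-name=myjob       # name shown in the queue
#SBATCH --time=01:00:00        # wall-clock limit (HH:MM:SS)
#SBATCH --mem=4G               # memory requested for the job
#SBATCH --cpus-per-task=1      # number of CPU cores

# Load any modules your program needs, e.g.:
# module load R

# Run your program; output goes to slurm-<jobid>.out by default
hostname
```

Submit the script with `sbatch myjob.sh`, check its status with `squeue --me`, and cancel it with `scancel <jobid>`.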
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started, our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This ensures your support request gets entered into our tracker and your questions get answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details about your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please include the name of the headnode, which you can find by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session, as shown on the calendar below. Alternatively, we can often schedule a time to meet with you individually; just send us an e-mail with the details described above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you email us at beocat@cs.ksu.edu please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output of 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the future, Beocat will be moving to the K-State Central IT TDX ticketing system. Please expect a change in how we handle user support in summer/fall of 2025.&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access? ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing resources to Beocat earns you priority access. In general, users contribute nodes to Beocat (the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority on the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will immediately halt and reschedule those jobs to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources let users run jobs when they cannot get enough time on Beocat,&lt;br /&gt;
but they are especially useful when Beocat lacks the needed resources, such as&lt;br /&gt;
the 4 TB nodes on Bridges2, additional 64-bit GPUs, or Matlab licenses.&lt;br /&gt;
Click [[ACCESS|here]] to see which resources are available and for directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems, running jobs from outside OSG users when our own users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1076</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1076"/>
		<updated>2025-04-10T18:45:27Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* How do I get help? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and his or her collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people, is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of RHEL Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|OpenStack]] cloud-computing infrastructure&lt;br /&gt;
* We also provide a short description of Beocat for use in a proposal or in teaching: [[ProposalDescription|Beocat Info]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to log in.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
==== Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] ====&lt;br /&gt;
==== Read about  [[Installed software]] and languages ====&lt;br /&gt;
==== Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] ====&lt;br /&gt;
==== Run Interactive Jobs! [[OpenOnDemand]] ====&lt;br /&gt;
==== [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] ====&lt;br /&gt;
&lt;br /&gt;
==== Big Data course on Beocat! [[BigDataOnBeocat]] ====&lt;br /&gt;
==== Interested in Web-Based computational biology research? Check out [[GalaxyDocs|Galaxy!]] ====&lt;br /&gt;
==== Looking to utilize the NRP (Nautilus cluster) namespace? Check out [[Nautilus|Nautilus on Beocat]] ====&lt;br /&gt;
&lt;br /&gt;
== Transferring data to Beocat ==&lt;br /&gt;
Transferring data to Beocat can be done in a variety of ways; we offer documentation on a few of them:&lt;br /&gt;
* [[LinuxBasics]] - Under the 'Transferring files (SCP or SFTP)' section, we have information regarding SCP and SFTP implementation.&lt;br /&gt;
* [[Globus]] - Instructions on transferring files using [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''.&lt;br /&gt;
* [[OpenOnDemand]] - We offer GUI-based file management through OpenOnDemand&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] - We also offer the ability to transfer data to and from OneDrive&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which uses the information in that script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link, for OpenMPI, also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started, our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This ensures your support request gets entered into our tracker and your questions get answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details about your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please include the name of the headnode, which you can find by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session, as shown on the calendar below. Alternatively, we can often schedule a time to meet with you individually; just send us an e-mail with the details described above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you email us at beocat@cs.ksu.edu please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output of 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In the future, Beocat will be moving to the K-State Central IT TDX ticketing system. Please expect a change in how we handle user support in summer/fall of 2025.&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access? ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing resources to Beocat earns you priority access. In general, users contribute nodes to Beocat (the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority on the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will immediately halt and reschedule those jobs to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources let users run jobs when they cannot get enough time on Beocat,&lt;br /&gt;
but they are especially useful when Beocat lacks the needed resources, such as&lt;br /&gt;
the 4 TB nodes on Bridges2, additional 64-bit GPUs, or Matlab licenses.&lt;br /&gt;
Click [[ACCESS|here]] to see which resources are available and for directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems, running jobs from outside OSG users when our own users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1075</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=1075"/>
		<updated>2025-04-10T18:41:08Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* How do I get help? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and his or her collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people, is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of RHEL Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|OpenStack]] cloud-computing infrastructure&lt;br /&gt;
* We also provide a short description of Beocat for use in a proposal or in teaching: [[ProposalDescription|Beocat Info]]&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to log in.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
==== Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] ====&lt;br /&gt;
==== Read about  [[Installed software]] and languages ====&lt;br /&gt;
==== Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] ====&lt;br /&gt;
==== Run Interactive Jobs! [[OpenOnDemand]] ====&lt;br /&gt;
==== [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] ====&lt;br /&gt;
&lt;br /&gt;
==== Big Data course on Beocat! [[BigDataOnBeocat]] ====&lt;br /&gt;
==== Interested in Web-Based computational biology research? Check out [[GalaxyDocs|Galaxy!]] ====&lt;br /&gt;
==== Looking to utilize the NRP (Nautilus cluster) namespace? Check out [[Nautilus|Nautilus on Beocat]] ====&lt;br /&gt;
&lt;br /&gt;
== Transferring data to Beocat ==&lt;br /&gt;
Transferring data to Beocat can be done in a variety of ways; we offer documentation on a few of them:&lt;br /&gt;
* [[LinuxBasics]] - Under the 'Transferring files (SCP or SFTP)' section, we have information regarding SCP and SFTP implementation.&lt;br /&gt;
* [[Globus]] - Instructions on transferring files using [https://www.globus.org/ Globus Connect] using the endpoint ''Beocat filesystem (new)''.&lt;br /&gt;
* [[OpenOnDemand]] - We offer GUI-based file management through OpenOnDemand&lt;br /&gt;
* [[Onedrive Data Transfer|Transfer Data to and from your OneDrive]] - We also offer the ability to transfer data to and from OneDrive&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which uses the information in that script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link, for OpenMPI, also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This ensures your support request gets entered into our tracker and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any details pertinent to your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session as mentioned in our calendar down below. Alternatively, we can often schedule a time to meet with you individually. You just need to send us an e-mail and provide us with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre style=&amp;quot;font-weight: bold;&amp;quot;&amp;gt;&lt;br /&gt;
Again, when you email us at beocat@cs.ksu.edu please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us for the latest from Beocat, or tweet at us with quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing to Beocat will give you priority access. In general, users contribute nodes to Beocat (aka the &amp;quot;Condo&amp;quot; model), to which their research group has priority access, in addition to elevated general priority for the rest of Beocat. If jobs from other researchers are occupying the node, Slurm will automatically halt and reschedule those jobs immediately to allow contributor access. Unused CPU time on the node is available for other Beocat users.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the ACCESS program.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources allow users to run jobs when they cannot get enough&lt;br /&gt;
access on Beocat, but they are especially useful when we don't have the needed&lt;br /&gt;
resources on Beocat, like access to 4 TB nodes on Bridges2, more 64-bit&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[ACCESS|here]] to see what resources &lt;br /&gt;
we have access to and for directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our remote resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems that runs outside OSG jobs when our users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]]&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]]&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=1074</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=1074"/>
		<updated>2025-04-09T18:51:57Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Java */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains and toolchain versions to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Recently made free by Intel, we have less experience with Intel MPI than OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with 'module list':&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore, which depends on GCC. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
With software we provide, the toolchain used to compile is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, you may see inconsistent behavior such as link errors or runtime failures.&lt;br /&gt;
&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide many versions; you are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'.&lt;br /&gt;
&lt;br /&gt;
The first step to running an MPI application is to load one of the compiler toolchains that include OpenMPI.  You normally will just need to load the default version as below.  If your code needs access to NVIDIA GPUs, you'll need a CUDA-enabled toolchain version.  Otherwise, some codes are picky about which versions of the underlying GNU or Intel compilers they need.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but those depend on which compiler toolchain you choose.  Some are based on the Intel compilers, so you'd use optimizations for the underlying icc or ifort compilers they call, and some are GNU based, so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running Gromacs which is an MPI based code to simulate large numbers of atoms classically.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the Gromacs modules and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete Gromacs simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all cores requested, which in this case is 4 cores on a single node.  You will often just need to guess at the memory size for your code at first, then check the memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts and then manually load the modules needed, to ensure each run uses exactly the modules it needs.  If you don't do this, the job will simply use the modules you have loaded when you submit it, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use significant amounts of time.  This way I can track the amount of time used in each section of the job script.  This can prove very useful if your job script copies large data files around at the start, for example, allowing you to see how much time was used for each stage of the job if it runs longer than expected.&lt;br /&gt;
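The pattern looks like this in practice (a sketch; 'sleep' stands in for a real stage such as a data copy or a compute run, and the timing text that &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; writes to stderr is captured to a file only so it can be shown):&lt;br /&gt;

```shell
# Each 'time' prefix reports real/user/sys for that stage separately.
# 'sleep' is a placeholder for a real stage, e.g. copying input data or running code.
{ time sleep 0.1 ; } 2> stage1_timing.txt
{ time sleep 0.2 ; } 2> stage2_timing.txt
echo "stage 1 took:" ; cat stage1_timing.txt
echo "stage 2 took:" ; cat stage2_timing.txt
```

In a real job script you would not redirect the timing at all; just prefix the command ('time cp ...', 'time mpirun ...') and the timings appear in the slurm-#######.out file.&lt;br /&gt;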
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; where the 7 # signs are the 7 digit job ID number.  If you need to cancel your job, use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the 7 digit job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, login to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a slurm-#.out file where '#' is the 7 digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java'&lt;br /&gt;
&lt;br /&gt;
You can load the default version of Java that we offer with &amp;quot;module load Java&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Once you have loaded a Java module, you can interact with Java as you normally would on your own machine. &lt;br /&gt;
&lt;br /&gt;
Below is a quick example of how to load the Java module, then compile and run a small Java program that prints its first command-line argument (in this case, the current working directory). &lt;br /&gt;
&lt;br /&gt;
For reference, here is our Java &amp;quot;Main.java&amp;quot; file. Your Java filename must match the class name (your file does not have to be called &amp;quot;Main&amp;quot;; just make sure the two names match).  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
public class Main {&lt;br /&gt;
  public static void main (String[] args) {&lt;br /&gt;
    System.out.println(args[0]);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
First we need to load the Java module. At the time of writing, the default Java module is &amp;quot;Java/11.0.20&amp;quot;. So we can load that like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load Java&lt;br /&gt;
#or we can load it like this:&lt;br /&gt;
module load Java/11.0.20&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Now we need to compile our &amp;quot;Main.java&amp;quot; file into a class file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
javac Main.java&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will produce a file called &amp;quot;Main.class&amp;quot;. Note that &amp;quot;Main&amp;quot; will be whatever you named your class.&lt;br /&gt;
Now, we can execute the file and give it something to print. In this case, I am going to print the working directory.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
java Main $PWD&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The output is the directory I ran this from in the terminal.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/homes/nathanrwells&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From here you can put these commands inside a Slurm submit script to send off to the compute cluster, just like any other bash commands. Note that you will need to recompile your &amp;quot;$filename.java&amp;quot; each time you make changes to it; otherwise, nothing will change when you execute the program. &lt;br /&gt;
&lt;br /&gt;
Optionally, you can do all of this compilation and execution inside of your Slurm submit script. &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
#SBATCH --time=0:10:00&lt;br /&gt;
&lt;br /&gt;
module load Java&lt;br /&gt;
javac $filename.java&lt;br /&gt;
java $filename $PWD&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Make sure to replace $filename with the name of your Java file (which is also the class name).&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command Python is loaded.  Once you log off and log on again, Python will no longer be loaded, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that their [https://docs.python.org/3/library/venv.html documentation] has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# or you could use 'python -m venv test'&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# if you don't use '--system-site-packages' then the virtual environment is completely isolated from our other provided packages and everything it needs it will have to build and install within itself.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command your virtual environment is activated.  Once you log off and log on again, your virtual environment will no longer be active, so you must rerun this command every time you log on.)&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment test&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
Check the available versions and load one that uses the Python version you would like.&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow provided by pip is often completely broken on any system that is not running a recent version of Ubuntu. Beocat (like most HPC systems) does not use Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other python libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the Python module, as loading TensorFlow will load the correct Python module behind the scenes. The only change you need to make is to use '--system-site-packages' when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default. Normally, it is meant to be used in an interactive manner. Interactive codes can be limiting and/or problematic when used in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
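As a rough sketch of the non-interactive pattern (the notebook name here is hypothetical; 'jupyter nbconvert --to notebook --execute' runs every cell and writes an executed copy without needing a browser):&lt;br /&gt;

```shell
#!/bin/bash -l
#SBATCH --mem=8G
#SBATCH --time=1:00:00

# Hypothetical notebook name.
notebook="analysis.ipynb"
cmd="jupyter nbconvert --to notebook --execute $notebook --output executed-$notebook"
# Only run if jupyter is actually available and the notebook exists.
if command -v jupyter >/dev/null; then
    if [ -f "$notebook" ]; then
        $cmd
    fi
else
    echo "would run: $cmd"
fi
```

In a real job you would also load the appropriate modules and activate your virtual environment first, as in the Python examples above.&lt;br /&gt;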
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large-scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core for 24 hours and if available will drop&lt;br /&gt;
you into the bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample python based Spark code you can try out that came from the &lt;br /&gt;
exercises and homework from the PSC Spark workshop.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and load the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package &lt;br /&gt;
before you run the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
python -m venv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Python and Spark modules,&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark, &lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark and Python (version 3 here)&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt;.&lt;br /&gt;
When you submit jobs through the Slurm queue, you will need to do this&lt;br /&gt;
manually, as in the sample python code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl tracks the stable releases of perl. Unfortunately, there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or to use a newer version, you can load it with the module command. To see what versions of perl we provide, you can use 'module avail Perl/'&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some perl modules fail to realize they shouldn't be installed globally. Usually, you'll notice this when they try to run something with 'sudo'. Unfortunately, we do not grant sudo access to anyone other than Beocat system administrators. This can usually be worked around by putting the following at the bottom of your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file. Once this is in place, you should log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
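&lt;br /&gt;
The ${VAR:+:${VAR}} expansions above append the previous value only when the variable is already set and non-empty, which avoids a stray colon in the result. The same guard, sketched in Python purely for illustration (the helper function is not part of the setup itself):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
import os&lt;br /&gt;
&lt;br /&gt;
def prepend_dir(new_dir, old_value):&lt;br /&gt;
    # Join with a colon only when there is an existing value,&lt;br /&gt;
    # mirroring the ${VAR:+:${VAR}} shell expansion&lt;br /&gt;
    return new_dir + (':' + old_value if old_value else '')&lt;br /&gt;
&lt;br /&gt;
# Variable unset: no stray colon is produced&lt;br /&gt;
print(prepend_dir('/homes/user/perl5/bin', os.environ.get('NO_SUCH_VAR', '')))&lt;br /&gt;
# Variable set: the old value is appended after a colon&lt;br /&gt;
print(prepend_dir('/homes/user/perl5/bin', '/usr/bin'))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;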
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now lets do some actual work. &lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
'module avail Octave/'&lt;br /&gt;
&lt;br /&gt;
The 64-bit version of Octave can be found with the command above and loaded with 'module load'.  Octave can then be used&lt;br /&gt;
to work with MatLab codes on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler.  Octave is made to run MatLab code, but it has limitations and does not support&lt;br /&gt;
everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, you will be expected to develop your MatLab code&lt;br /&gt;
with Octave or elsewhere on a laptop or departmental server.  Once you're ready to do large runs, you&lt;br /&gt;
move your code to Beocat, compile the MatLab code into an executable, and submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MatLab compiler, load the MATLAB module to compile code and&lt;br /&gt;
the mcr module to run the resulting MatLab executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.  (The MatLab function isdeployed returns true when running as a compiled executable.)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks out the mcc compiler for a minimum of 30 minutes, so if another user is compiling code&lt;br /&gt;
you may unfortunately need to wait up to 30 minutes to compile your own.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
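&lt;br /&gt;
If your project pulls code from many directories, it can help to generate the compile command rather than type it by hand. Here is a small Python sketch (the helper function and directory names are hypothetical, shown only to illustrate one -I flag per directory):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# Build an mcc command line with one -I flag per source directory&lt;br /&gt;
def mcc_command(main_file, include_dirs, output_name):&lt;br /&gt;
    parts = ['mcc', '-m', main_file]&lt;br /&gt;
    for d in include_dirs:&lt;br /&gt;
        parts += ['-I', d]&lt;br /&gt;
    parts += ['-o', output_name]&lt;br /&gt;
    return ' '.join(parts)&lt;br /&gt;
&lt;br /&gt;
print(mcc_command('matlab_main_code.m', ['./a/', './b/'], 'my_binary'))&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;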
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files (compiled C and C++ code with MatLab bindings), you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the Matlab path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and help document the steps to compile your Matlab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab, so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere or using Octave on Beocat; then you can compile the MatLab code into an executable and run it without&lt;br /&gt;
limits on Beocat.&lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, once loaded, stay loaded for the duration of your session until they are unloaded.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'', you have likely tried to load two pieces of software that have not been tested together. There may be serious issues with using either piece of software while you're in this state: missing libraries, non-functional applications. If you encounter issues, unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Often associated with Docker or Kubernetes, containers allow you to package an entire software runtime platform and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.2/index.html Apptainer] is a container runtime designed for HPC environments. It can convert Docker containers to its own format and can be used within a job on Beocat. Containers are a very broad topic, so we point you to the upstream documentation, which is much more likely to have up-to-date and functional instructions to help you utilize containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=1073</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=1073"/>
		<updated>2025-04-03T20:45:12Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Java */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains and toolchain versions to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Recently made free by Intel, we have less experience with Intel MPI than OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with 'module list':&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore, which depends on GCC. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
With software we provide, the toolchain used to compile is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, inconsistent things may happen.&lt;br /&gt;
&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide many versions. You are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'&lt;br /&gt;
&lt;br /&gt;
The first step to running an MPI application is to load one of the compiler toolchains that include OpenMPI.  You will normally just need to load the default version as below.  If your code needs access to nVidia GPUs, you'll need one of the CUDA-enabled versions instead.  Some codes are also picky about which versions of the underlying GNU or Intel compilers they need.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but those may depend on which compiler toolchain you choose.  Some toolchains are based on the Intel compilers, so you'd use optimizations for the underlying icc or ifort compilers they call, and some are GNU based, so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running Gromacs which is an MPI based code to simulate large numbers of atoms classically.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the Gromacs modules and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete Gromacs simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all cores requested, which in this case is 4 cores on a single node.  You will often just need to guess at the memory size for your code, then check on the memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts and then manually load the modules needed, to ensure each run is using the modules it needs.  If you don't do this, when you submit a job script it will simply use the modules you currently have loaded, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use significant amounts of time.  This way I can track the amount of time used in each section of the job script.  This can prove very useful if your job script copies large data files around at the start, for example, allowing you to see how much time was used for each stage of the job if it runs longer than expected.&lt;br /&gt;
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt;, where the 7 # signs are the 7-digit job ID number.  If you need to cancel your job, use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the 7-digit job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
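&lt;br /&gt;
The job ID needed for scancel can be read straight out of the output filename. A small Python illustration of the naming scheme described above (the filename is a made-up example):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
import re&lt;br /&gt;
&lt;br /&gt;
# slurm-#######.out encodes the numeric job ID in the filename&lt;br /&gt;
filename = 'slurm-1234567.out'&lt;br /&gt;
match = re.fullmatch(r'slurm-(\d+)\.out', filename)&lt;br /&gt;
job_id = match.group(1)&lt;br /&gt;
print(job_id)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;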
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a slurm-#.out file where '#' is the 7 digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java'&lt;br /&gt;
&lt;br /&gt;
You can load the default version of Java that we offer with &amp;quot;module load Java&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Once you have loaded a Java module, you can interact with Java as you normally would on your own machine. &lt;br /&gt;
&lt;br /&gt;
Below is a quick example of how to load the Java module, then compile and run a short Java program that prints its first command-line argument (in this case, the current working directory). &lt;br /&gt;
&lt;br /&gt;
For reference, here is our Java &amp;quot;Main.java&amp;quot; file. Your Java filename must match the class name (your file does not have to be called &amp;quot;Main&amp;quot;; just make sure the filename and class name match).  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
public class Main {&lt;br /&gt;
  public static void main(String[] args) {&lt;br /&gt;
    System.out.println(args[0]);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
First we need to load the Java module. At the time of writing, the default Java module is &amp;quot;Java/11.0.20&amp;quot;. So we can load that like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load Java&lt;br /&gt;
#or we can load it like this:&lt;br /&gt;
module load Java/11.0.20&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Now we need to compile our &amp;quot;Main.java&amp;quot; file into a class file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
javac Main.java&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will produce a file called  &amp;quot;Main.class&amp;quot;. Note that &amp;quot;Main&amp;quot; will be whatever you named your file.&lt;br /&gt;
Now, we can execute the file and give it something to print. In this case, I am going to print the working directory.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
java Main $PWD&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The output is the directory the command was run from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/homes/nathanrwells&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From here you can put this command inside of a slurm submit script to send off to the compute cluster just like you would with any other bash command. Note that you will need to recompile &amp;quot;$filename.java&amp;quot; each time you change it; otherwise, executing the program will still run the old class file. &lt;br /&gt;
&lt;br /&gt;
Optionally, you can do both the compilation and the execution inside your slurm submit script. &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
#SBATCH --time=0:10:00&lt;br /&gt;
&lt;br /&gt;
module load Java&lt;br /&gt;
javac $filename.java&lt;br /&gt;
java $filename $PWD&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Making sure to replace $filename with the name of your file.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command, Python is loaded. The module does not persist between sessions, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that their [https://docs.python.org/3/library/venv.html documentation] has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# or you could use 'python -m venv test'&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# if you don't use '--system-site-packages' then the virtual environment is completely isolated from our other provided packages and everything it needs it will have to build and install within itself.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command, your virtual environment is activated. The environment does not stay active between sessions, so you must rerun this command every time you log on.)&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
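After installing, you can quickly verify that the activated environment sees the new packages. Here is a minimal check, assuming numpy was installed as above:&lt;br /&gt;

```python
# check_env.py - confirm a package installed into the active virtual
# environment imports correctly (numpy here, installed above)
import numpy

print("numpy version:", numpy.__version__)
print("smoke test, sum of 0..4:", numpy.arange(5).sum())
```

If the import fails, make sure the environment is activated and that the same Python module you used to create it is loaded.&lt;br /&gt;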
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment test&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
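For completeness, here is a small stand-in for ~/path/to/your/python/script.py. It is purely illustrative and uses only the standard library; in practice it would use whatever you pip-installed, such as numpy or biopython.&lt;br /&gt;

```python
# script.py - illustrative payload for the job script above
# (standard library only; replace with your real analysis code)
import statistics

samples = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print("mean:", statistics.mean(samples))
print("population stdev:", statistics.pstdev(samples))
```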
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
Check the available versions and load one built with the Python version you would like:&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow provided by pip is often completely broken on any system that is not running a recent version of Ubuntu. Beocat (and most HPC systems) does not use Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other python libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the Python module, as loading TensorFlow will load the correct Python module behind the scenes. The only change you need to make is to use '--system-site-packages' when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default. Normally, it is meant to be used in an interactive manner. Interactive codes can be limiting and/or problematic when used in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large-scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core for 24 hours and, once resources are available, will drop&lt;br /&gt;
you into a bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample python-based Spark code you can try out, taken from the &lt;br /&gt;
exercises and homework of the PSC Spark workshop.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and install the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package &lt;br /&gt;
before you run the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
python -m venv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Spark module,&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark, &lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark and Python (version 3 here)&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt;.&lt;br /&gt;
You will need to do this manually as in the sample python code when you want&lt;br /&gt;
to submit jobs through the Slurm queue.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl tracks the stable releases of perl. Unfortunately, there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or a newer version, you can load one with the module command. To see what versions of perl we provide, use 'module avail Perl/'.&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some perl modules fail to realize they should not be installed globally. Usually, you'll notice this when they try to run 'sudo'. Unfortunately, we do not grant sudo access to anyone other than Beocat system administrators. This can usually be worked around by putting the following at the bottom of your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file. Once this is in place, log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now lets do some actual work. &lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
You can see what versions of Octave we support with 'module avail Octave/'.&lt;br /&gt;
&lt;br /&gt;
The 64-bit version of Octave can be loaded with 'module load Octave' and can then be used&lt;br /&gt;
to work with MatLab codes on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler.  Octave is made to run MatLab code, but it does have limitations and does not support&lt;br /&gt;
everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, you will be expected to develop your MatLab code&lt;br /&gt;
with Octave or elsewhere on a laptop or departmental server.  Once you're ready to do large runs, then you&lt;br /&gt;
move your code to Beocat, compile the MatLab code into an executable, and you can submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MatLab compiler, you need to load the MATLAB module to compile code and&lt;br /&gt;
load the mcr module to run the resulting MatLab executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks out the mcc compiler for a minimum of 30 minutes, so if another user is compiling code,&lt;br /&gt;
you may unfortunately need to wait up to 30 minutes to compile your own code.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files - compiled C and C++ code with matlab bindings - you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the Matlab path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and help document the steps to compile your Matlab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab, so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere or with Octave on Beocat; then you can compile the MatLab code into an executable and run it without&lt;br /&gt;
limits on Beocat.  &lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directory.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, once loaded, stay loaded for the duration of your session until they are unloaded.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'', you have likely tried to load two pieces of software that have not been tested together. Using either piece of software in this state can cause serious issues, such as missing libraries or non-functional applications. If you encounter issues, unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Often associated with Docker or Kubernetes, containers allow you to package an entire software runtime platform and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.2/index.html Apptainer] is a container runtime that is designed for HPC environments. It can convert docker containers to its own format, and can be used within a job on Beocat. It is a very broad topic and we've made the decision to point you to the upstream documentation, as it is much more likely that they'll have up to date and functional instructions to help you utilize containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=1072</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=1072"/>
		<updated>2025-04-03T20:43:00Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Java */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple of software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains and versions of toolchains to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Intel recently made this suite free; note that we have less experience with Intel MPI than with OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with the 'module list' command:&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
With software we provide, the toolchain used to compile is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, inconsistent things may happen.&lt;br /&gt;
&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide many versions. You are most likely better off loading a toolchain or application directly to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'.&lt;br /&gt;
&lt;br /&gt;
The first step to run an MPI application is to load one of the compiler toolchains that include OpenMPI.  You normally will just need to load the default version as below.  If your code needs access to NVIDIA GPUs you'll need a CUDA-enabled version instead.  Otherwise, some codes are picky about what versions of the underlying GNU or Intel compilers are needed.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but those may depend on which compiler toolchain you choose.  Some are based on the Intel compilers so you'd need to use optimizations for the underlying icc or ifort compilers they call, and some are GNU based so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running Gromacs which is an MPI based code to simulate large numbers of atoms classically.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the Gromacs modules and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete Gromacs simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all cores requested, which in this case is 4 cores on a single node.  You will often just need to guess at the memory size for your code, then check on the memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts and then manually load the modules needed, to ensure each run is using the modules it needs.  If you don't do this, the job will simply use the modules you have loaded when you submit the job script, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use significant amounts of time.  This way I can track the amount of time used in each section of the job script.  This can prove very useful if your job script copies large data files around at the start, for example, allowing you to see how much time was used for each stage of the job if it runs longer than expected.&lt;br /&gt;
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; where the 7 # signs are the 7 digit job ID number.  If you need to cancel your job use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the 7 digit job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a slurm-#.out file where '#' is the 7 digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java'&lt;br /&gt;
&lt;br /&gt;
You can load the default version of Java that we offer with &amp;quot;module load Java&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Once you have loaded a Java module, you can use it to interact with Java as you would normally on your own machine. &lt;br /&gt;
&lt;br /&gt;
Below is a quick example of how to load the Java module, then compile and run a small Java program that prints its first command-line argument (which in this case is the current working directory). &lt;br /&gt;
&lt;br /&gt;
For reference, here is our java &amp;quot;Main.java&amp;quot; file. Your java filename must match the class name (meaning your file does not have to be called &amp;quot;Main&amp;quot;, just make sure these names match).  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
public class Main {&lt;br /&gt;
  public static void main(String[] args) {&lt;br /&gt;
    System.out.println(args[0]);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
First we need to load the Java module. At the time of writing, the default Java module is &amp;quot;Java/11.0.20&amp;quot;. So we can load that like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load Java&lt;br /&gt;
#or we can load it like this:&lt;br /&gt;
module load Java/11.0.20&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Now we need to compile our &amp;quot;Main.java&amp;quot; file into a class file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
javac Main.java&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will produce a file called  &amp;quot;Main.class&amp;quot;. Note that &amp;quot;Main&amp;quot; will be whatever you named your file.&lt;br /&gt;
Now, we can execute the file and give it something to print. In this case, I am going to print the working directory.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
java Main $PWD&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The output is the directory the command was run from:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/homes/nathanrwells&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From here you can put this command inside of a Slurm submit script to send off to the compute cluster just like you would with any other bash command. Note that you will need to recompile your &amp;quot;.java&amp;quot; file each time you make changes to it; otherwise, when you execute the program, nothing will change. &lt;br /&gt;
&lt;br /&gt;
Optionally, you can do all of this compilation and execution inside of your Slurm submit script. &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
#SBATCH --time=0:10:00&lt;br /&gt;
&lt;br /&gt;
module load Java&lt;br /&gt;
javac $filename.java&lt;br /&gt;
java $filename $PWD&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Make sure to replace $filename with the name of your file (and class). &lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module; it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to set up a virtual Python environment in your home directory. This will let you install Python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command Python is loaded.  Loaded modules do not persist between sessions, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that their [https://docs.python.org/3/library/venv.html documentation] has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# or you could use 'python -m venv test'&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# if you don't use '--system-site-packages' then the virtual environment is completely isolated from our other provided packages, and it will have to build and install everything it needs within itself.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
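If you ever need to script environment creation, 'python -m venv' is a thin wrapper around Python's standard-library venv module; below is a minimal sketch (the temporary location and the name 'test' are illustrative only):&lt;br /&gt;

```python
# Programmatic equivalent of 'python -m venv --system-site-packages test'.
# Uses only the standard library; the temporary location is just for the demo.
import os
import tempfile
import venv

base = tempfile.mkdtemp()
env_dir = os.path.join(base, "test")
venv.create(env_dir, system_site_packages=True, with_pip=False)

# The new environment contains its own interpreter plus a pyvenv.cfg
# marker file recording the --system-site-packages choice.
print(sorted(os.listdir(env_dir)))
```

An environment created this way is activated exactly like one created on the command line.&lt;br /&gt;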
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command your virtual environment is activated.  Activation does not persist between sessions, so you must rerun this command every time you log on.)&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
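After installing, you can confirm from Python that a package is visible inside the activated environment; here is a small sketch using the example packages above ('Bio' is biopython's import name):&lt;br /&gt;

```python
# Check whether packages are importable without actually importing them.
import importlib.util

for pkg in ("numpy", "Bio"):  # biopython installs the 'Bio' package
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'found' if found else 'missing'}")
```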
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment test&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
Check the available versions and load one that uses the Python version you would like.&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
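As a hypothetical example of the script such a job could run, here is a minimal mpi4py &amp;quot;hello&amp;quot;; the fallback lets the same file run on a machine without mpi4py installed:&lt;br /&gt;

```python
# Minimal MPI hello in Python. Under 'mpirun python script.py' each MPI
# task (rank) prints its own line; without mpi4py we pretend to be rank 0.
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
except ImportError:
    rank, size = 0, 1

print(f"hello from rank {rank} of {size}")
```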
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow provided by pip is often completely broken on any system that is not running a recent version of Ubuntu. Beocat (and most HPC systems) does not use Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module; it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other Python libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to set up a virtual Python environment in your home directory. This will let you install Python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the Python module, as loading TensorFlow will load the correct Python module behind the scenes. The one change you need to make is to use the '--system-site-packages' flag when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default. Normally, it is meant to be used in an interactive manner. Interactive codes can be limiting and/or problematic when used in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core for 24 hours and, if available, will drop&lt;br /&gt;
you into the bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample Python-based Spark code you can try out, taken from the &lt;br /&gt;
exercises and homework of the PSC Spark workshop.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and install the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package &lt;br /&gt;
before you run the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
python -m venv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Spark module,&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark, &lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark and activate the Python virtual environment&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt;.&lt;br /&gt;
You will need to do this manually as in the sample python code when you want&lt;br /&gt;
to submit jobs through the Slurm queue.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl is tracking the stable releases of perl. Unfortunately there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or a newer version, you can load it with the module command. To see what versions of perl we provide, you can use 'module avail Perl/'&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some Perl modules fail to realize they shouldn't be installed globally. Usually, you'll notice this when they try to run something with 'sudo'. Unfortunately, we do not grant sudo access to anyone other than Beocat system administrators. This can usually be worked around by putting the following in your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file (at the bottom). Once this is in place, you should log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now lets do some actual work. &lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
'module avail Octave/'&lt;br /&gt;
&lt;br /&gt;
The 64-bit version of Octave can be loaded using the command above.  Octave can then be used&lt;br /&gt;
to work with MatLab codes on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler.  Octave is made to run MatLab code, but it does have limitations and does not support&lt;br /&gt;
everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, you will be expected to develop your MatLab code&lt;br /&gt;
with Octave, or elsewhere on a laptop or departmental server.  Once you're ready to do large runs, you&lt;br /&gt;
move your code to Beocat, compile the MatLab code into an executable, and submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MatLab compiler, load the MATLAB module to compile code and&lt;br /&gt;
load the mcr module to run the resulting MatLab executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~deployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~deployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks out the mcc compiler for a minimum of 30 minutes, so if another user compiles a code&lt;br /&gt;
you may unfortunately need to wait up to 30 minutes to compile your own code.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files - compiled C and C++ code with matlab bindings - you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the Matlab path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and help document the steps to compile your Matlab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere or using Octave on Beocat, then you can compile the MatLab code into an executable and run it without&lt;br /&gt;
limits on Beocat.  &lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
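As a generic sketch of the home-directory pattern (the package name here is hypothetical, not software we ship), a typical autotools-style build installs under your home using the --prefix flag instead of the system default, so no sudo is needed:&lt;br /&gt;

```shell
# Hypothetical tarball name, for illustration only
tar xzf mytool-1.0.tar.gz
cd mytool-1.0

# Install under $HOME/local instead of /usr/local (no sudo required)
./configure --prefix="$HOME/local"
make
make install

# Make the freshly installed binaries visible to your shell
export PATH="$HOME/local/bin:$PATH"
```

Putting the export line at the bottom of your ~/.bashrc makes it persist across logins.&lt;br /&gt;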
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, when loaded, stay loaded for the duration of your session or until they are unloaded.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'', you have likely tried to load two pieces of software that have not been tested together. There may be serious issues with using either piece of software while you're in this state: missing libraries, non-functional applications. If you encounter issues, unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Often referred to by the names Docker or Kubernetes, containers allow you to package an entire software runtime platform and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.2/index.html Apptainer] is a container runtime that is designed for HPC environments. It can convert docker containers to its own format, and can be used within a job on Beocat. It is a very broad topic and we've made the decision to point you to the upstream documentation, as it is much more likely that they'll have up to date and functional instructions to help you utilize containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=1071</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=1071"/>
		<updated>2025-04-03T20:42:02Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Java */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains and toolchain versions to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Recently made free by Intel, we have less experience with Intel MPI than OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with 'module list':&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore, which depends on GCC. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
With software we provide, the toolchain used to compile is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, inconsistent things may happen.&lt;br /&gt;
&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide many versions. You are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'.&lt;br /&gt;
&lt;br /&gt;
The first step to running an MPI application is to load one of the compiler toolchains that include OpenMPI.  You will normally just need to load the default version as below.  If your code needs access to nVidia GPUs, you'll need a CUDA-enabled version.  Otherwise, some codes are picky about which versions of the underlying GNU or Intel compilers they need.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but those may depend on which compiler toolchain you choose.  Some are based on the Intel compilers, so you'd need to use optimizations for the underlying icc or ifort compilers they call, and some are GNU based, so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running Gromacs which is an MPI based code to simulate large numbers of atoms classically.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the Gromacs modules and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete Gromacs simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all cores requested which in this case is 4 cores on a single node.  You will often just need to guess at the memory size for your code, then check on the memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts and then manually load the modules needed, to ensure each run is using the modules it needs.  If you don't do this, when you submit a job script it will simply use the modules you currently have loaded, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use significant amounts of time.  This way I can track the amount of time used in each section of the job script.  This can prove very useful if your job script copies large data files around at the start, for example, allowing you to see how much time was used for each stage of the job if it runs longer than expected.&lt;br /&gt;
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt;, where the 7 # signs are the 7-digit job ID number.  If you need to cancel your job, use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the 7-digit job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a slurm-#.out file where '#' is the 7 digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java'&lt;br /&gt;
&lt;br /&gt;
You can load the default version of Java that we offer with &amp;quot;module load Java&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Once you have loaded a Java module, you can interact with Java as you normally would on your own machine. &lt;br /&gt;
&lt;br /&gt;
Below is a quick example of how to load the Java module, then compile and run a small Java program that prints its first command-line argument (which in this case is the current working directory). &lt;br /&gt;
&lt;br /&gt;
For reference, here is our Java &amp;quot;Main.java&amp;quot; file. Your Java filename must match the class name (your file does not have to be called &amp;quot;Main&amp;quot;; just make sure the two names match).  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
public class Main {&lt;br /&gt;
  public static void main (String[] args) {&lt;br /&gt;
    System.out.println(args[0]);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
First we need to load the Java module. At the time of writing, the default Java module is &amp;quot;Java/11.0.20&amp;quot;. So we can load that like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load Java&lt;br /&gt;
#or we can load it like this:&lt;br /&gt;
module load Java/11.0.20&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Now we need to compile our &amp;quot;Main.java&amp;quot; file into a class file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
javac Main.java&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will produce a file called  &amp;quot;Main.class&amp;quot;. Note that &amp;quot;Main&amp;quot; will be whatever you named your file.&lt;br /&gt;
Now, we can execute the file and give it something to print. In this case, I am going to print the working directory.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
java Main $PWD&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This prints the directory from which I ran the command:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/homes/nathanrwells&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From here, you can put this command inside a Slurm submit script to send off to the compute cluster just like any other bash command. Note that you will need to recompile your .java file each time you make changes to it; otherwise, nothing will change when you execute the program. &lt;br /&gt;
&lt;br /&gt;
Optionally, you can do both the compilation and the execution inside your Slurm submit script. &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
#SBATCH --time=0:10:00&lt;br /&gt;
&lt;br /&gt;
module load Java&lt;br /&gt;
javac Main.java&lt;br /&gt;
java Main $PWD&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command, Python is loaded for your current session only. Modules do not persist across logins, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that their [https://docs.python.org/3/library/venv.html documentation] has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# or you could use 'python -m venv test'&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# if you don't use '--system-site-packages' then the virtual environment is completely isolated from our other provided packages and everything it needs it will have to build and install within itself.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's list our virtual environments (the new environment's name should appear in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command, your virtual environment is active for your current session only. Like modules, it does not persist across logins, so you must rerun this command every time you log on.)&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment 'test':&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
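&lt;br /&gt;
Once you have saved a submit script like the one above (the filename &amp;quot;myjob.sh&amp;quot; here is just an example name), you can send it to the queue and check on its status:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# submit the script to the Slurm queue&lt;br /&gt;
sbatch myjob.sh&lt;br /&gt;
# check on your queued and running jobs&lt;br /&gt;
squeue -u $USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;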
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
Check the available versions and load one that uses the Python version you would like:&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow provided by pip is often completely broken on any system that is not running a recent version of Ubuntu. Beocat (and most HPC systems) does not use Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other python libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to setup a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the Python module, as loading TensorFlow will load the correct version of Python behind the scenes. The only change you need to make is to use the '--system-site-packages' flag when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
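&lt;br /&gt;
After loading a TensorFlow module, a quick sanity check is to confirm the library imports and report its version (the exact version string will depend on which module you loaded):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load TensorFlow&lt;br /&gt;
# print the TensorFlow version to confirm the module loaded correctly&lt;br /&gt;
python -c &amp;quot;import tensorflow as tf; print(tf.__version__)&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;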
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default. Normally, it is meant to be used in an interactive manner. Interactive codes can be limiting and/or problematic when used in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
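&lt;br /&gt;
If you want to run a notebook non-interactively yourself, Jupyter's own nbconvert tool can execute a notebook from the command line. This is a minimal sketch; &amp;quot;analysis.ipynb&amp;quot; is a hypothetical notebook name:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# execute the notebook top to bottom and write the executed copy alongside it&lt;br /&gt;
jupyter nbconvert --to notebook --execute --output analysis-run.ipynb analysis.ipynb&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;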
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large-scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core for 24 hours and if available will drop&lt;br /&gt;
you into the bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample python based Spark code you can try out that came from the &lt;br /&gt;
exercises and homework from the PSC Spark workshop.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and load the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package &lt;br /&gt;
before you run the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
python -m venv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Spark module,&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark,&lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark and activate the Python virtual environment&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt;.&lt;br /&gt;
You will need to do this manually as in the sample python code when you want&lt;br /&gt;
to submit jobs through the Slurm queue.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl is tracking the stable releases of perl. Unfortunately there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or to try out a newer version, you can load one with the module command. To see what versions of perl we provide, you can use 'module avail Perl/'.&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some perl modules fail to realize they shouldn't be installed globally. Usually you'll notice this when they try to run something with 'sudo'. Unfortunately, we do not grant sudo access to anyone other than Beocat system administrators. Usually this can be worked around by putting the following at the bottom of your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file. Once this is in place, you should log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
PERL_MM_OPT=&amp;quot;INSTALL_BASE=/homes/${USER}/perl5&amp;quot;; export PERL_MM_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now lets do some actual work. &lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
'module avail Octave/'&lt;br /&gt;
&lt;br /&gt;
The 64-bit version of Octave can be loaded using the command above.  Octave can then be used&lt;br /&gt;
to work with MatLab codes on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler.  Octave is made to run MatLab code, but it does have limitations and does not support&lt;br /&gt;
everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, this means that you will be expected to develop your MatLab code&lt;br /&gt;
with Octave or elsewhere on a laptop or departmental server.  Once you're ready to do large runs, then you&lt;br /&gt;
move your code to Beocat, compile the MatLab code into an executable, and you can submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MatLab compiler, you need to load the MATLAB module to compile code and&lt;br /&gt;
load the mcr module to run the resulting MatLab executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks out the mcc compiler for a minimum of 30 minutes, so if another user compiles a code,&lt;br /&gt;
you may unfortunately need to wait up to 30 minutes to compile your own code.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files - compiled C and C++ code with matlab bindings - you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the Matlab path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and help document the steps to compile your Matlab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere or using Octave on Beocat, then you can compile the MatLab code into an executable and run it without&lt;br /&gt;
limits on Beocat.  &lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
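&lt;br /&gt;
If you have a license, COMSOL can also be run non-interactively through the Slurm queue using its batch mode. The following is only a sketch: &amp;quot;model.mph&amp;quot; is a hypothetical model file, and the batch flags should be checked against the documentation for your COMSOL version.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem=8G&lt;br /&gt;
#SBATCH --time=4:00:00&lt;br /&gt;
&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# point at your own license, as described above&lt;br /&gt;
export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
# solve the model non-interactively&lt;br /&gt;
comsol batch -inputfile model.mph -outputfile model_solved.mph&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;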
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
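As a sketch of the general pattern, many classic autotools-based packages can be installed into your home directory by passing a --prefix at configure time (the package name here is hypothetical):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# unpack, then build and install under your home directory instead of system-wide&lt;br /&gt;
tar xzf somepackage-1.0.tar.gz&lt;br /&gt;
cd somepackage-1.0&lt;br /&gt;
./configure --prefix=$HOME/software/somepackage&lt;br /&gt;
make&lt;br /&gt;
make install&lt;br /&gt;
# add the install location to your PATH (put this in ~/.bashrc to make it permanent)&lt;br /&gt;
export PATH=$HOME/software/somepackage/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;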
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, once loaded, stay loaded for the duration of your session until they are unloaded.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'', you have likely tried to load two pieces of software that have not been tested together. There may be serious issues with using either piece of software while you're in this state: libraries may be missing and applications may not function. If you encounter issues, unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
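&lt;br /&gt;
The safe pattern for switching between untested module combinations can be sketched as:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# start from a clean slate, then load only what you need together&lt;br /&gt;
module reset&lt;br /&gt;
module load foss&lt;br /&gt;
# confirm exactly what is loaded&lt;br /&gt;
module list&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;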
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Often associated with Docker or Kubernetes, containers allow you to package an entire software runtime platform and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.2/index.html Apptainer] is a container runtime that is designed for HPC environments. It can convert docker containers to its own format, and can be used within a job on Beocat. It is a very broad topic and we've made the decision to point you to the upstream documentation, as it is much more likely that they'll have up to date and functional instructions to help you utilize containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=1070</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=1070"/>
		<updated>2025-04-03T20:32:57Z</updated>

		<summary type="html">&lt;p&gt;Nathanrwells: /* Java */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Module Availability ==&lt;br /&gt;
Most people will be just fine running 'module avail' to see a list of modules available on Beocat. There are a couple software packages that are only available on particular node types. For those cases, check [https://modules.beocat.ksu.edu/ our modules website.] If you are used to OpenScienceGrid computing, you may wish to take a look at how to use [[OpenScienceGrid#Using_OpenScienceGrid_modules_on_Beocat|their modules.]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains, and versions of those toolchains, to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
; intel:    Intel Compiler Suite, providing Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; Intel MPI. Intel recently made this suite free; note that we have less experience with Intel MPI than with OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain/' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl/&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with the 'module list':&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore and GCC. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
With software we provide, the toolchain used to compile is always specified in the &amp;quot;version&amp;quot; of the software that you want to load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, inconsistent things may happen.&lt;br /&gt;
&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
Check our [https://modules.beocat.ksu.edu/ modules website] for the most up to date software availability.&lt;br /&gt;
&lt;br /&gt;
The versions mentioned below are representations of what was available at the time of writing, not necessarily what is currently available.&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide lots of versions.  You are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module avail OpenMPI/'&lt;br /&gt;
&lt;br /&gt;
The first step to running an MPI application is to load one of the compiler toolchains that include OpenMPI.  You will normally just need to load the default version as below.  If your code needs access to nVidia GPUs, you'll need a CUDA-enabled version.  Some codes are also picky about which versions of the underlying GNU or Intel compilers are needed.&lt;br /&gt;
&lt;br /&gt;
  module load foss&lt;br /&gt;
&lt;br /&gt;
If you are working with your own MPI code you will need to start by compiling it.  MPI offers &amp;lt;B&amp;gt;mpicc&amp;lt;/B&amp;gt; for compiling codes written in C, &amp;lt;B&amp;gt;mpic++&amp;lt;/B&amp;gt; for compiling C++ code, and &amp;lt;B&amp;gt;mpifort&amp;lt;/B&amp;gt; for compiling Fortran code.  You can get a complete listing of parameters to use by running them with the &amp;lt;B&amp;gt;--help&amp;lt;/B&amp;gt; parameter.  Below are some examples of compiling with each.&lt;br /&gt;
&lt;br /&gt;
  mpicc --help&lt;br /&gt;
  mpicc -o my_code.x my_code.c&lt;br /&gt;
  mpic++ -o my_code.x my_code.cc&lt;br /&gt;
  mpifort -o my_code.x my_code.f&lt;br /&gt;
&lt;br /&gt;
In each case above, you can name the executable file whatever you want (I chose &amp;lt;I&amp;gt;my_code.x&amp;lt;/I&amp;gt;).  It is common to use different optimization levels, for example, but those may depend on which compiler toolchain you choose.  Some are based on the Intel compilers, so you'd need to use optimizations for the underlying icc or ifort compilers they call, and some are GNU based, so you'd use compiler optimizations for gcc or gfortran.&lt;br /&gt;
&lt;br /&gt;
We have many MPI codes in our modules that you simply need to load before using.  Below is an example of loading and running Gromacs, an MPI-based code for classically simulating large numbers of atoms.&lt;br /&gt;
&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
&lt;br /&gt;
This loads the Gromacs modules and sets all the paths so you can run the scalar version &amp;lt;B&amp;gt;gmx&amp;lt;/B&amp;gt; or the MPI version &amp;lt;B&amp;gt;gmx_mpi&amp;lt;/B&amp;gt;.  Below is a sample job script for running a complete Gromacs simulation.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --mem=120G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module reset&lt;br /&gt;
  module load GROMACS&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Running Gromacs on $HOSTNAME&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  export OMP_NUM_THREADS=1&lt;br /&gt;
  time mpirun -x OMP_NUM_THREADS=1 gmx_mpi mdrun -nsteps 500000 -ntomp 1 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  &lt;br /&gt;
  echo &amp;quot;Finished run on $SLURM_NTASKS $HOSTNAME cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;mpirun&amp;lt;/B&amp;gt; will run your job on all cores requested, which in this case is 4 cores on a single node.  You will often just need to guess at the memory size for your code, then check on the memory usage with &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt; and adjust the memory in future jobs.&lt;br /&gt;
&lt;br /&gt;
I prefer to put a &amp;lt;B&amp;gt;module reset&amp;lt;/B&amp;gt; in my scripts and then manually load the modules needed, to ensure each run is using the modules it needs.  If you don't do this, a submitted job script will simply use the modules you currently have loaded, which is fine too.&lt;br /&gt;
&lt;br /&gt;
I also like to put a &amp;lt;B&amp;gt;time&amp;lt;/B&amp;gt; command in front of each part of the script that can use a significant amount of time.  This way I can track the amount of time used in each section of the job script.  This can prove very useful if, for example, your job script copies large data files around at the start, allowing you to see how much time was used for each stage of the job if it runs longer than expected.&lt;br /&gt;
&lt;br /&gt;
The OMP_NUM_THREADS environment variable is set to 1 and passed to the MPI system to ensure that each MPI task only uses 1 thread.  Some MPI codes are also multi-threaded, so this ensures that this particular code uses the cores allocated to it in the manner we want.&lt;br /&gt;
&lt;br /&gt;
Once you have your job script ready, submit it using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command as below where the job script is in the file &amp;lt;I&amp;gt;sb.gromacs&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  sbatch sb.gromacs&lt;br /&gt;
&lt;br /&gt;
You should then monitor your job as it goes through the queue and starts running using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  Your code will also generate an output file, usually of the form &amp;lt;I&amp;gt;slurm-#######.out&amp;lt;/I&amp;gt; where the 7 # signs are the 7-digit job ID number.  If you need to cancel your job, use &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; with the 7-digit job ID number.&lt;br /&gt;
&lt;br /&gt;
   scancel #######&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
You can see what versions of R we provide with 'module avail R/'&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We provide a small number of R packages installed by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --mem-per-cpu=4G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-HH:MM:SS)&lt;br /&gt;
#SBATCH --time=0-00:15:00&lt;br /&gt;
&lt;br /&gt;
# Now let's do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
module reset&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can monitor your jobs using &amp;lt;B&amp;gt;kstat --me&amp;lt;/B&amp;gt;.  The output of your job will be in a slurm-#.out file where '#' is the 7 digit job ID number for your job.&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
You can see what versions of Java we support with 'module avail Java'&lt;br /&gt;
&lt;br /&gt;
You can load the default version of Java that we offer with &amp;quot;module load Java&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Once you have loaded a Java module, you can interact with Java as you normally would on your own machine. &lt;br /&gt;
&lt;br /&gt;
Below is a quick example of how to load the Java module, then compile and run a small Java program that prints an argument supplied at execution. &lt;br /&gt;
&lt;br /&gt;
For reference, here is our Java &amp;quot;Main.java&amp;quot; file. Your Java filename must match the class name (meaning your file does not have to be called &amp;quot;Main&amp;quot;; just make sure these names match).  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
public class Main {&lt;br /&gt;
  public static void main(String[] args) {&lt;br /&gt;
    System.out.println(args[0]);&lt;br /&gt;
  }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
First we need to load the Java module. At the time of writing, the default Java module is &amp;quot;Java/11.0.20&amp;quot;. So we can load that like this:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load Java&lt;br /&gt;
#or we can load it like this:&lt;br /&gt;
module load Java/11.0.20&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Now we need to compile our &amp;quot;Main.java&amp;quot; file into a class file.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
javac Main.java&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will produce a file called  &amp;quot;Main.class&amp;quot;. Note that &amp;quot;Main&amp;quot; will be whatever you named your file.&lt;br /&gt;
Now, we can execute the file and give it something to print. In this case, I am going to print the working directory.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
java Main $PWD&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The output is wherever I ran this from in the terminal:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
/homes/nathanrwells&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
You can see what versions of Python we support with 'module avail Python/'. Note: Running this does not load a Python module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to set up a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python (pick a version from the 'module avail Python/' list)&lt;br /&gt;
module load Python/SOME_VERSION_THAT_YOU_PICKED_FROM_THE_LIST&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command, Python is loaded.  After you log off and log on again, Python will not be loaded, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that the [https://docs.python.org/3/library/venv.html venv documentation] has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# or you could use 'python -m venv test'&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, TensorFlow, or Jupyter&lt;br /&gt;
# if you don't use '--system-site-packages' then the virtual environment is completely isolated from our other provided packages and everything it needs it will have to build and install within itself.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(After running this command, your virtual environment is activated.  After you log off and log on again, your virtual environment will not be active, so you must rerun this command every time you log on.)&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
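As a quick sanity check after activating an environment and installing packages, a short Python snippet (a hypothetical sketch, standard library only) can confirm that you are inside a virtual environment and that the new packages are importable:&lt;br /&gt;

```python
# Sanity check: are we inside a venv, and did our packages install?
import importlib.util
import sys

# Inside a venv, sys.prefix points at the environment rather than the base install.
print('In a virtual environment:', sys.prefix != sys.base_prefix)

# Note: biopython installs under the import name 'Bio'.
for pkg in ('numpy', 'Bio'):
    print(pkg, 'importable:', importlib.util.find_spec(pkg) is not None)
```

If either package shows False, double-check which environment is currently active.&lt;br /&gt;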
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment test&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/THE_SAME_VERSION_YOU_USED_TO_CREATE_YOUR_ENVIRONMENT_ABOVE&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
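For reference, here is a minimal stand-in for script.py (a hypothetical example, not something we provide) that simply reports which interpreter the job ran under, which is handy for confirming the right module and environment were loaded:&lt;br /&gt;

```python
# Report which Python interpreter this job is using.
import sys

print('Python version:', sys.version.split()[0])
print('Interpreter path:', sys.executable)
```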
&lt;br /&gt;
==== Using MPI with Python within a job ====&lt;br /&gt;
&lt;br /&gt;
We're going to load the SciPy-bundle module, as that has mpi4py available within it.&lt;br /&gt;
&lt;br /&gt;
Check the available versions and load one that uses the python version you would like.&lt;br /&gt;
 module avail SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
Here is a simple job script using MPI with Python&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load SciPy-bundle&lt;br /&gt;
&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
mpirun python ~/path/to/your/mpi/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
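Before running your real code, it can help to verify mpi4py works with a tiny test script. The sketch below is hypothetical; it falls back to a single rank when mpi4py is absent, so it also runs under a plain python call:&lt;br /&gt;

```python
# Each MPI task reports its rank; without mpi4py we act as rank 0 of 1.
try:
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
except ImportError:
    rank, size = 0, 1

print(f'Hello from rank {rank} of {size}')
```

Run under mpirun as in the job script above; each task should print a distinct rank.&lt;br /&gt;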
&lt;br /&gt;
=== [https://www.tensorflow.org/ TensorFlow] ===&lt;br /&gt;
TensorFlow provided by pip is often completely broken on any system that is not running a recent version of Ubuntu. Beocat (and most HPC systems) does not use Ubuntu. As such, we provide TensorFlow modules for you to load.&lt;br /&gt;
&lt;br /&gt;
You can see what versions of TensorFlow we support with 'module avail TensorFlow/'. Note: Running this does not load a TensorFlow module, it just shows you a list of the ones that are available.&lt;br /&gt;
&lt;br /&gt;
If you need other python libraries that we do not have installed, you should use [https://docs.python.org/3/library/venv.html python -m venv] to set up a virtual python environment in your home directory. This will let you install python libraries as you please.&lt;br /&gt;
&lt;br /&gt;
We document creating a virtual environment [[#Setting up your virtual environment|above]]. You can skip loading the Python module, as loading TensorFlow will load the correct version of the Python module behind the scenes. The singular change you need to make is to use the '--system-site-packages' flag when creating the virtual environment.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
python -m venv --system-site-packages test&lt;br /&gt;
# using the '--system-site-packages' allows the virtual environment to make use of python libraries we have already installed&lt;br /&gt;
# particularly useful if you're going to use our SciPy-Bundle, or TensorFlow&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Jupyter ===&lt;br /&gt;
[https://jupyter.org/ Jupyter] is a framework for creating and running reusable &amp;quot;notebooks&amp;quot; for scientific computing. It runs Python code by default. Normally, it is meant to be used in an interactive manner. Interactive codes can be limiting and/or problematic when used in a cluster environment. We have an example submit script available [https://gitlab.beocat.ksu.edu/Admin-Public/ondemand/job_templates/-/tree/master/Jupyter_Notebook here] to help you transition from an OpenOnDemand interactive job using Jupyter to a non-interactive job.&lt;br /&gt;
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large-scale data processing.&lt;br /&gt;
It can be used in conjunction with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core for 24 hours and, if available, will drop&lt;br /&gt;
you into the bash shell on that node.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
We have some sample python based Spark code you can try out that came from the &lt;br /&gt;
exercises and homework from the PSC Spark workshop.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir spark-test&lt;br /&gt;
cd spark-test&lt;br /&gt;
cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare/* .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You will need to set up a python virtual environment and install the &amp;lt;B&amp;gt;nltk&amp;lt;/B&amp;gt; package before you run the first time.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
mkdir -p ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
python -m venv --system-site-packages spark-test&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
pip install nltk&lt;br /&gt;
deactivate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To run the sample code interactively, load the Spark module,&lt;br /&gt;
source your python virtual environment, change to the sample directory, fire up pyspark, &lt;br /&gt;
then execute the sample code.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
cd ~/spark-test&lt;br /&gt;
pyspark&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
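To get a feel for the kind of task Spark parallelizes, here is a toy word count in plain Python. This is NOT the workshop's shakespeare.py, just an illustrative sketch of the classic map/reduce example that Spark distributes across tasks:&lt;br /&gt;

```python
# Toy word count: tally word frequencies in a small text.
from collections import Counter

text = 'to be or not to be'
counts = Counter(text.split())
print(counts.most_common(2))  # prints [('to', 2), ('be', 2)]
```

Spark performs the same split-and-tally, but on files far too large for one machine's memory.&lt;br /&gt;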
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=shakespeare&lt;br /&gt;
#SBATCH --mem=10G&lt;br /&gt;
#SBATCH --time=01:00:00&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
# Load Spark and activate the python virtual environment&lt;br /&gt;
module load Spark&lt;br /&gt;
source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
&lt;br /&gt;
spark-submit shakespeare.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When you run interactively, pyspark initializes your spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt;.&lt;br /&gt;
You will need to do this manually as in the sample python code when you want&lt;br /&gt;
to submit jobs through the Slurm queue.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=python&amp;gt;&lt;br /&gt;
# If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
try:&lt;br /&gt;
   sc&lt;br /&gt;
except NameError:&lt;br /&gt;
   from pyspark import SparkConf, SparkContext&lt;br /&gt;
   conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
   sc = SparkContext(conf = conf)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl is tracking the stable releases of perl. Unfortunately there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
To use perl with threads, or a newer version of perl, you can load one with the module command. To see what versions of perl we provide, you can use 'module avail Perl/'&lt;br /&gt;
&lt;br /&gt;
==== Installing Perl Modules ====&lt;br /&gt;
&lt;br /&gt;
The easiest way to install Perl modules is by using &amp;lt;B&amp;gt;cpanm&amp;lt;/B&amp;gt;.&lt;br /&gt;
Below is an example of installing the Perl module &amp;lt;I&amp;gt;Term::ANSIColor&amp;lt;/I&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
module load Perl&lt;br /&gt;
cpanm -i Term::ANSIColor&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
The output will look something like this:&lt;br /&gt;
 CPAN: LWP::UserAgent loaded ok (v6.39)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/01mailrc.txt.gz&lt;br /&gt;
 CPAN: YAML loaded ok (v1.29)&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/authors/01mailrc.txt.gz'&lt;br /&gt;
 CPAN: Compress::Zlib loaded ok (v2.084)&lt;br /&gt;
 ............................................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/02packages.details.txt.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/02packages.details.txt.gz'&lt;br /&gt;
   Database was generated on Mon, 09 Mar 2020 20:41:03 GMT&lt;br /&gt;
 .............&lt;br /&gt;
   New CPAN.pm version (v2.27) available.&lt;br /&gt;
   [Currently running version is v2.22]&lt;br /&gt;
   You might want to try&lt;br /&gt;
     install CPAN&lt;br /&gt;
     reload cpan&lt;br /&gt;
   to both upgrade CPAN.pm and run the new version without leaving&lt;br /&gt;
   the current session.&lt;br /&gt;
 ...............................................................DONE&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/modules/03modlist.data.gz&lt;br /&gt;
 Reading '/homes/mozes/.cpan/sources/modules/03modlist.data.gz'&lt;br /&gt;
 DONE&lt;br /&gt;
 Writing /homes/mozes/.cpan/Metadata&lt;br /&gt;
 Running install for module 'Term::ANSIColor'&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 CPAN: Digest::SHA loaded ok (v6.02)&lt;br /&gt;
 Fetching with LWP:&lt;br /&gt;
 http://www.cpan.org/authors/id/R/RR/RRA/CHECKSUMS&lt;br /&gt;
 Checksum for /homes/mozes/.cpan/sources/authors/id/R/RR/RRA/Term-ANSIColor-5.01.tar.gz ok&lt;br /&gt;
 CPAN: CPAN::Meta::Requirements loaded ok (v2.140)&lt;br /&gt;
 CPAN: Parse::CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: CPAN::Meta loaded ok (v2.150010)&lt;br /&gt;
 CPAN: Module::CoreList loaded ok (v5.20190522)&lt;br /&gt;
 Configuring R/RR/RRA/Term-ANSIColor-5.01.tar.gz with Makefile.PL&lt;br /&gt;
 Checking if your kit is complete...&lt;br /&gt;
 Looks good&lt;br /&gt;
 Generating a Unix-style Makefile&lt;br /&gt;
 Writing Makefile for Term::ANSIColor&lt;br /&gt;
 Writing MYMETA.yml and MYMETA.json&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl Makefile.PL -- OK&lt;br /&gt;
 Running make for R/RR/RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 cp lib/Term/ANSIColor.pm blib/lib/Term/ANSIColor.pm&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make -- OK&lt;br /&gt;
 Running make test for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 PERL_DL_NONLAZY=1 &amp;quot;/opt/software/software/Perl/5.30.0-GCCcore-8.3.0/bin/perl&amp;quot; &amp;quot;-MExtUtils::Command::MM&amp;quot; &amp;quot;-MTest::Harness&amp;quot; &amp;quot;-e&amp;quot; &amp;quot;undef *Test::Harness::Switches; test_harness(0, 'blib/lib', 'blib/arch')&amp;quot; t/*/*.t&lt;br /&gt;
 t/docs/pod-coverage.t ....... skipped: POD coverage tests normally skipped&lt;br /&gt;
 t/docs/pod-spelling.t ....... skipped: Spelling tests only run for author&lt;br /&gt;
 t/docs/pod.t ................ skipped: POD syntax tests normally skipped&lt;br /&gt;
 t/docs/spdx-license.t ....... skipped: SPDX identifier tests normally skipped&lt;br /&gt;
 t/docs/synopsis.t ........... skipped: Synopsis syntax tests normally skipped&lt;br /&gt;
 t/module/aliases-env.t ...... ok&lt;br /&gt;
 t/module/aliases-func.t ..... ok&lt;br /&gt;
 t/module/basic.t ............ ok&lt;br /&gt;
 t/module/basic256.t ......... ok&lt;br /&gt;
 t/module/eval.t ............. ok&lt;br /&gt;
 t/module/stringify.t ........ ok&lt;br /&gt;
 t/module/true-color.t ....... ok&lt;br /&gt;
 t/style/coverage.t .......... skipped: Coverage tests only run for author&lt;br /&gt;
 t/style/critic.t ............ skipped: Coding style tests only run for author&lt;br /&gt;
 t/style/minimum-version.t ... skipped: Minimum version tests normally skipped&lt;br /&gt;
 t/style/obsolete-strings.t .. skipped: Obsolete strings tests normally skipped&lt;br /&gt;
 t/style/strict.t ............ skipped: Strictness tests normally skipped&lt;br /&gt;
 t/taint/basic.t ............. ok&lt;br /&gt;
 All tests successful.&lt;br /&gt;
 Files=18, Tests=430,  7 wallclock secs ( 0.21 usr  0.08 sys +  3.41 cusr  1.15 csys =  4.85 CPU)&lt;br /&gt;
 Result: PASS&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make test -- OK&lt;br /&gt;
 Running make install for RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
 Manifying 1 pod document&lt;br /&gt;
 Installing /homes/mozes/perl5/lib/perl5/Term/ANSIColor.pm&lt;br /&gt;
 Installing /homes/mozes/perl5/man/man3/Term::ANSIColor.3&lt;br /&gt;
 Appending installation info to /homes/mozes/perl5/lib/perl5/x86_64-linux-thread-multi/perllocal.pod&lt;br /&gt;
   RRA/Term-ANSIColor-5.01.tar.gz&lt;br /&gt;
   /usr/bin/make install  -- OK&lt;br /&gt;
&lt;br /&gt;
===== When things go wrong =====&lt;br /&gt;
Some perl modules fail to realize they shouldn't be installed globally. Usually, you'll notice this when they try to run something with 'sudo'. Unfortunately, we do not grant sudo access to anyone other than Beocat system administrators. This can usually be worked around by putting the following at the bottom of your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file. Once this is in place, you should log out and log back in.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
PATH=&amp;quot;/homes/${USER}/perl5/bin${PATH:+:${PATH}}&amp;quot;; export PATH;&lt;br /&gt;
PERL5LIB=&amp;quot;/homes/${USER}/perl5/lib/perl5${PERL5LIB:+:${PERL5LIB}}&amp;quot;;&lt;br /&gt;
export PERL5LIB;&lt;br /&gt;
PERL_LOCAL_LIB_ROOT=&amp;quot;/homes/${USER}/perl5${PERL_LOCAL_LIB_ROOT:+:${PERL_LOCAL_LIB_ROOT}}&amp;quot;;&lt;br /&gt;
export PERL_LOCAL_LIB_ROOT;&lt;br /&gt;
PERL_MB_OPT=&amp;quot;--install_base \&amp;quot;/homes/${USER}/perl5\&amp;quot;&amp;quot;; export PERL_MB_OPT;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell sbatch how long we expect our work to take: 15 minutes (D-H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now let's do some actual work. &lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
'module avail Octave/'&lt;br /&gt;
&lt;br /&gt;
The 64-bit version of Octave can be loaded using the command above.  Octave can then be used&lt;br /&gt;
to work with MatLab codes on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler.  Octave is made to run MatLab code, but it does have limitations and does not support&lt;br /&gt;
everything that MatLab itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Because we have only a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt;, you will be expected to develop your MatLab code&lt;br /&gt;
with Octave or elsewhere, such as on a laptop or departmental server.  Once you're ready to do large runs, move&lt;br /&gt;
your code to Beocat, compile the MatLab code into an executable, and submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MatLab compiler, load the MATLAB module to compile code and&lt;br /&gt;
load the mcr module to run the resulting MatLab executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks out the mcc compiler for a minimum of 30 minutes, so if another user compiles code,&lt;br /&gt;
you may unfortunately need to wait up to 30 minutes to compile your own.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module reset&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files (compiled C or C++ code with MatLab bindings), you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the Matlab path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and help document the steps to compile your Matlab code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single floating user license&amp;lt;/B&amp;gt; for MatLab, so the model is to develop and debug your MatLab code&lt;br /&gt;
elsewhere or with Octave on Beocat; you can then compile the MatLab code into an executable and run it without&lt;br /&gt;
limits on Beocat.&lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL/&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
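The usual workaround is to install into a prefix inside your home directory and put its bin directory on your PATH. A sketch of the pattern (the tool name and prefix are made up for illustration; real packages typically accept a --prefix or similar option at configure time):&lt;br /&gt;

```shell
# Install into a per-user prefix instead of system directories.
PREFIX="$HOME/local"
mkdir -p "$PREFIX/bin"

# Stand-in for a real 'make install': drop a tiny script named "mytool"
# (a hypothetical program) into the prefix and mark it executable.
printf '#!/bin/sh\necho hello from mytool\n' > "$PREFIX/bin/mytool"
chmod +x "$PREFIX/bin/mytool"

# Make the prefix visible to your shell; add this line to ~/.bashrc
# so it persists across logins.
export PATH="$PREFIX/bin:$PATH"
mytool    # prints: hello from mytool
```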
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;br /&gt;
&lt;br /&gt;
== Loading multiple modules ==&lt;br /&gt;
Modules, once loaded, stay loaded for the duration of your session or until you unload them.&lt;br /&gt;
&lt;br /&gt;
; You can load multiple pieces of software with one module load command. : module load iompi iomkl&lt;br /&gt;
&lt;br /&gt;
; You can unload all software : module reset&lt;br /&gt;
&lt;br /&gt;
; If you see output from a module load command that looks like ''&amp;quot;The following have been reloaded with a version change&amp;quot;'', you have likely tried to load two pieces of software that have not been tested together. There may be serious issues with using either piece of software in this state: missing libraries, non-functional applications. If you encounter issues, unload all software before switching modules. : 'module reset' and then 'module load'&lt;br /&gt;
&lt;br /&gt;
== Containers ==&lt;br /&gt;
More and more science is being done within containers these days. Sometimes referred to as Docker or Kubernetes, containers allow you to package an entire software runtime platform and run that software on another computer or site with minimal fuss.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, Docker and Kubernetes are not particularly well suited to multi-user HPC environments, but that's not to say that you can't make use of these containers on Beocat.&lt;br /&gt;
&lt;br /&gt;
=== Apptainer ===&lt;br /&gt;
[https://apptainer.org/docs/user/1.2/index.html Apptainer] is a container runtime designed for HPC environments. It can convert Docker containers to its own format and can be used within a job on Beocat. Containers are a very broad topic, so we point you to the upstream documentation, which is much more likely to have up-to-date, functional instructions for utilizing containers. If you need additional assistance, please don't hesitate to reach out to us.&lt;/div&gt;</summary>
		<author><name>Nathanrwells</name></author>
	</entry>
</feed>