<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://support.beocat.ksu.edu/BeocatDocs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Kylehutson</id>
	<title>Beocat - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://support.beocat.ksu.edu/BeocatDocs/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Kylehutson"/>
	<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/Docs/Special:Contributions/Kylehutson"/>
	<updated>2026-04-17T19:27:09Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.8</generator>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=823</id>
		<title>Globus</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=823"/>
		<updated>2022-08-25T02:24:46Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: /* Transferring Data using Globus */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Transferring Data using Globus ==&lt;br /&gt;
&lt;br /&gt;
[https://www.globus.org/ Globus] is a high-speed data transfer service. It is primarily used to transfer data between research institutions, but can also be used to transfer data between Beocat and a laptop or desktop. We suggest using Globus over other file transfer options if you are transferring large data sets. Globus also allows you to share data with those who do not have Beocat accounts.&lt;br /&gt;
&lt;br /&gt;
'''Update:''' The on-campus DTN has been shut down due to security issues; please use the off-campus (FIONA) instructions. Also, Globus has updated its web interface since the video was recorded, so the video is out of date, but the basic process is unchanged.&lt;br /&gt;
&lt;br /&gt;
Beocat has two Globus servers - one on the main campus network, and one directly connected to [https://www.kanren.net/ KanREN] (essentially, for our purposes, the university's Internet Service Provider). To understand which one you should be using, an overview of how Beocat connects to the Internet is useful:&lt;br /&gt;
[[File:CampusNetworkOverview.png|thumb|left|Campus Network Overview - Click for a larger view]]&lt;br /&gt;
As you can see, if you are ON campus it is faster to use the &amp;quot;DTN&amp;quot; endpoint, but if you are OFF campus it is faster to use the &amp;quot;FIONA&amp;quot; endpoint. That being said, due to software differences the two endpoints behave differently, and either can be used both on- and off-campus.&lt;br /&gt;
&lt;br /&gt;
== Video Demonstration ==&lt;br /&gt;
Rather than give dozens of screenshots, here is a video demonstrating how to use Globus to transfer files to and from Beocat:&lt;br /&gt;
{{#widget:YouTube|id=D0X7x7B_wQs|width=800|height=600}}&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=819</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=819"/>
		<updated>2022-08-10T19:20:10Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added PuTTY instructions for automating Duo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat? ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html PuTTY]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from OpenSSH&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ FileZilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from OpenSSH&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If your account is Duo-enabled, you will, by default, be asked to approve ''each'' connection through Duo's push system on your smart device for any non-interactive protocols. If you don't have a smart device, or your smart device currently cannot be reached by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
Configure your connection client to send an ''environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value can be a currently valid Duo passcode, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will send the prompt to your smart device. &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have Duo call your phone number for approval.&lt;br /&gt;
&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In MobaXTerm, to automatically set the Duo method to &amp;quot;push&amp;quot;, edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In PuTTY, to automatically set the Duo method to &amp;quot;push&amp;quot;, expand &amp;quot;Connection&amp;quot; (if it isn't already), then click &amp;quot;Data&amp;quot;. Under &amp;quot;Environment variables&amp;quot;, enter &amp;quot;DUO_PASSCODE&amp;quot; beside &amp;quot;Variable&amp;quot; and &amp;quot;push&amp;quot; beside &amp;quot;Value&amp;quot;, then click the &amp;quot;Add&amp;quot; button and it will show up underneath. Be sure to go back to &amp;quot;Session&amp;quot; and save the session so PuTTY remembers this change.&lt;br /&gt;
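&lt;br /&gt;
If you use the OpenSSH command line regularly, the same effect can be made persistent in your client configuration. The following is only a sketch, assuming a reasonably recent OpenSSH client (version 7.8 or newer, which supports the &amp;quot;SetEnv&amp;quot; keyword); the &amp;quot;beocat&amp;quot; host alias is just an example name:&lt;br /&gt;

```shell
# Append a host entry to your SSH client config (the alias "beocat" is arbitrary).
mkdir -p "$HOME/.ssh"
printf '%s\n' \
  'Host beocat' \
  '    HostName headnode.beocat.ksu.edu' \
  '    SetEnv DUO_PASSCODE=push' >> "$HOME/.ssh/config"
# Afterwards, connecting is simply:  ssh beocat
```

With this in place, &amp;lt;tt&amp;gt;ssh beocat&amp;lt;/tt&amp;gt; should trigger the Duo push automatically, just like the SendEnv command above.&lt;br /&gt;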
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
== Do Beocat jobs have a maximum time limit? ==&lt;br /&gt;
Yes. The scheduler will reject jobs longer than 28 days. The other side of that coin is that we reserve the right to schedule a maintenance period every 14 days. Unless it is an emergency, we will give at least 2 weeks' notice before a maintenance period actually occurs. Jobs of 14 days or less that have already started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
With that being said, there is no guarantee that any physical piece of hardware, or the software that runs on it, will behave for any significant length of time. Memory, processors, and disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and being forcibly rebooted. If you can, we always recommend that you write your jobs so that they can be resumed if they are interrupted.&lt;br /&gt;
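&lt;br /&gt;
One common way to make a job resumable, sketched generically here (the step names and commands are placeholders, not Beocat-specific), is to record each finished step in a marker file and skip it on restart:&lt;br /&gt;

```shell
# Run each step only if its marker file does not exist yet; write the
# marker only after the step succeeds, so a restarted job resumes
# where it left off instead of redoing finished work.
step() {
  name=$1; shift
  if [ -f "done.$name" ]; then
    echo "skipping $name (already finished)"
  elif "$@"; then
    touch "done.$name"
  fi
}
step prepare true   # placeholder commands; substitute your real programs
step analyze true
```

If the job is killed and resubmitted, steps that already left a marker file are skipped rather than redone.&lt;br /&gt;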
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes and /scratch || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk and /scratch || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /scratch || Shared || 3.1PB shared with /bulk and /homes || cephfs || Fast shared tmp space; files not used in 30 days are automatically culled&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || NFS on top of ZFS || Faster than /scratch, built entirely with NVMe disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || ext4 || Good for I/O intensive jobs&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory is your home directory, and it is fast enough for most tasks. I/O-intensive work should use /tmp, but you will need to remember to copy your files to and from this partition as part of your job script. This is made easier by the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You must remember to copy your data back from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job, because that directory and its contents are deleted when the job completes.&lt;br /&gt;
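&lt;br /&gt;
A more defensive variant (a generic bash sketch with hypothetical file names, not part of the example above) is to copy results back in an EXIT trap, so they survive even if a later command in the script fails:&lt;br /&gt;

```shell
# Work in a scratch directory and register a trap so results are
# copied home no matter how the script exits.
workdir=$(mktemp -d "${TMPDIR:-/tmp}/job.XXXXXX")
copy_back() { cp -r "$workdir" "$HOME/results.saved"; }
trap copy_back EXIT

echo "demo result" > "$workdir/output.data"   # stand-in for the real work
```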
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot;? ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of those computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable jobs (--gres=killable:1) are jobs that can be scheduled onto these &amp;quot;owned&amp;quot; machines by users outside the group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of that machine then submits a job, the &amp;quot;killable&amp;quot; job is returned to the queue (killed off, as it were) and restarted at some future point. The job will still complete eventually, and if it uses a checkpointing algorithm it may even complete faster. The trade-off is that some applications need a significant amount of runtime and cannot resume from partial output, meaning a job may be restarted over and over and never reach the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still find this a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes it can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking it killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint itself (save its state so it can restart from a previous session), marking it killable could cause it to take longer to complete.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; when that happens the job fails and we have to clean it up manually, so we enforce this rule. The warning message is there to inform you that the job script should have a line, in most cases #!/bin/tcsh or #!/bin/bash, indicating what program should be used to run the script. When the line is missing from a script, your default shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it does not point to an executable file, so the script will not be able to run. Most likely you wanted #!/bin/bash instead of whatever is there.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SlurmBasics|here]] to see how to request more.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
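&lt;br /&gt;
The same requests can also live inside the job script itself as #SBATCH directives, which is less error-prone than retyping them on every submission. A sketch with illustrative values (see [[SlurmBasics|the Slurm basics page]] for the full option list):&lt;br /&gt;

```shell
# Generate a job script whose header asks for 10 hours of runtime
# and 4 GB of memory per core (values are illustrative).
printf '%s\n' \
  '#!/bin/bash' \
  '#SBATCH --time=0-10:00:00' \
  '#SBATCH --mem-per-cpu=4G' \
  'srun hostname' > myjob.sh
# sbatch myjob.sh   # submit it (only works on a Beocat head node)
```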
&lt;br /&gt;
== Help my error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to manually intervene. This process prevents other users from using the node(s) you are currently using, so are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]].&lt;br /&gt;
&lt;br /&gt;
== Help! When using MPI I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would like to avoid this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed [https://account.beocat.ksu.edu/project here].&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the shared directory. If you also want them to be able to write or modify files in that directory, change ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'.&lt;br /&gt;
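&lt;br /&gt;
The steps above, gathered into one sketch. The group name and directory are placeholders you must replace, and the setfacl lines are shown commented out because your project group only exists on Beocat:&lt;br /&gt;

```shell
group_name=my_project            # placeholder: your project's lower-case name
directory="$HOME/shared_project" # placeholder: a directory in a home directory
mkdir -p "$directory"
echo "granting group $group_name read access to $directory"
# On Beocat, set the default ACL first, then apply it to existing contents:
# setfacl -d -m g:${group_name}:rX -R "$directory"
# setfacl -m g:${group_name}:rX -R "$directory"
```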
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to call. For example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what are called info pages. For example, if you need information on finding a file, you can use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=818</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=818"/>
		<updated>2022-08-09T21:03:44Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Tips on automating Duo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
===Duo===&lt;br /&gt;
If you're account is Duo Enabled, you will be asked to approve ''each'' connection through Duo's push system to your smart device by default for any non-interactive protocols. If you don't have a smart device, or your smart device is not currently able to be contacted by Duo, there are options.&lt;br /&gt;
&lt;br /&gt;
====Automating Duo Method====&lt;br /&gt;
You would need to configure your connection client to send an ''Environment'' variable called &amp;lt;tt&amp;gt;DUO_PASSCODE&amp;lt;/tt&amp;gt;. Its value could be the currently valid passcode from Duo, &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; or it could be set to &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;push&amp;lt;/tt&amp;gt; will push the prompt to your smart device. &amp;lt;tt&amp;gt;phone&amp;lt;/tt&amp;gt; will have duo call your phone number to approve.&lt;br /&gt;
&lt;br /&gt;
With OpenSSH (Linux or Mac command-line), to automatically set the Duo method to &amp;quot;push&amp;quot;, use the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push ssh -o SendEnv=DUO_PASSCODE headnode.beocat.ksu.edu&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
In MobaXTerm, to automatically set the Duo method to &amp;quot;push&amp;quot;, edit your SSH session and on the &amp;quot;Advanced SSH Settings&amp;quot; tab, change the &amp;quot;Execute command&amp;quot; to &amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;DUO_PASSCODE=push bash&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
== Do Beocat jobs have a maximum Time Limit ==&lt;br /&gt;
Yes, there is a time limit, the scheduler will reject jobs longer than 28 days. The other side of that is that we reserve the right to a maintenance period every 14 days. Unless it is an emergency, we will give at least 2 weeks notice before these maintenance periods actually occur. Jobs 14 days or less that have started when we announce a maintenance period should be able to complete before it begins.&lt;br /&gt;
&lt;br /&gt;
With that being said, there is no guarantee that any physical piece of hardware and the software that runs on it will behave for any significant length of time. Memory, processors, disk drives can all fail with little to no warning. Software may have bugs. We have had issues with the shared filesystem that resulted in several nodes losing connectivity and forced reboots. If you can, we always recommend that you write your jobs so that they can be resumed if they get interrupted.&lt;br /&gt;
&lt;br /&gt;
{{Note|The 28 day limit can be overridden on a temporary and per-user basis provided there is enough justification|reminder|inline=1}}&lt;br /&gt;
&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 3.1PB shared with /homes and /scratch || cephfs || Slower than /homes; costs $45/TB/year&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 3.1PB shared with /bulk and /scratch || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /scratch || Shared || 3.1PB shared with /bulk and /homes || cephfs || Fast shared tmp space; files not used in 30 days are automatically culled&lt;br /&gt;
|-&lt;br /&gt;
| /fastscratch || Shared || 280TB || nfs on top of ZFS || Faster than /scratch, built with all NVME disks; files not used in 30 days are automatically culled.&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || ext4 || Good for I/O intensive jobs&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry, your default working directory&lt;br /&gt;
is your homedir and it will be fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines are sitting idle because the owners have no need for it at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
If you're wondering why a job may have the exit status of &amp;lt;tt&amp;gt;PREEMPTED&amp;lt;/tt&amp;gt; from kstat or sacct, this is the reason.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled to these &amp;quot;owned&amp;quot; machines by users outside of the true group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of said machine comes along and submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue, (killed off as it were), and restarted at some future point in time. The job will still complete at some future point, and if the job makes use of a checkpointing algorithm it may complete even faster. The trade off between marking a job &amp;quot;killable&amp;quot; and not, is that sometimes applications need a significant amount of runtime, and cannot resume running from a partial output, meaning that it may get restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=168:00:00). Some users still feel this is a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there are a non-trivial amount of additional nodes that the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking the job killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state to restart a previous session) itself, it could cause your job to take longer to complete.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to have a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;' in them to start. We have had problems with people submitting jobs with invalid #! lines, so we enforce that rule. When this happens the job fails and we have to manually clean it up. The warning message is there just to inform you that the job script should have a line in it, in most cases #!/bin/tcsh or #!/bin/bash, to indicate what program should be used to run the script. When the line is missing from a script, by default your default shell is used to execute the script (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you are wanting.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, error says you need a #!/bin/bash or similar line in your job script. This error says that while the line exists, the #! line isn't mentioning an executable file, thus the script will not be able to run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, see the documentation [[SlurmBasics|here]] for how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
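Equivalently, the time request can live in the job script itself as an #SBATCH directive, so you don't have to retype it on every submission (a sketch; the program name is a placeholder):&lt;br /&gt;

```shell
#!/bin/bash
# Time format is days-hours:minutes:seconds; this requests 10 hours
#SBATCH --time=0-10:00:00
./my_program    # hypothetical program name
```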
&lt;br /&gt;
== Help my error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to manually intervene. This process prevents other users from using the node(s) you are currently on, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. Whether this matters depends on how message-heavy your job is. If you would prefer not to see this warning, you can request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Replace the &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below with the lower-case name of your project. Project membership can be managed [https://account.beocat.ksu.edu/project here]&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the default permissions for new files and directories created in the directory:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -d -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the permissions for the existing files and directories:&lt;br /&gt;
** &amp;lt;tt&amp;gt;setfacl -m g:$group_name:rX -R $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the shared directory. If you also want them to be able to write or modify files there, change ':rX' to ':rwX' in both setfacl commands, e.g. 'setfacl -d -m g:$group_name:rwX -R $directory'.&lt;br /&gt;
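Putting the steps above together as a sketch (the group name 'users' and directory name 'shared' are placeholders; substitute your project's group and your chosen directory):&lt;br /&gt;

```shell
# Hypothetical names: replace with your project group and directory
group_name=users
directory=shared
mkdir -p "$directory"
if command -v setfacl >/dev/null; then
    # Default ACL: files and directories created here later are group-readable
    setfacl -d -m "g:${group_name}:rX" -R "$directory" || echo "ACLs not supported on this filesystem"
    # The same ACL applied to anything that already exists
    setfacl -m "g:${group_name}:rX" -R "$directory" || echo "ACLs not supported on this filesystem"
fi
```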
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). They are simple to call: for example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what are called info pages. For example, if you needed information on finding a file, you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=733</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=733"/>
		<updated>2021-05-26T15:09:52Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Updated Freenode to Libera.chat&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and his or her collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of CentOS Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''beocat#beocat''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
==== Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] ====&lt;br /&gt;
==== Read about  [[Installed software]] and languages ====&lt;br /&gt;
==== Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]] ====&lt;br /&gt;
==== Run Interactive Jobs! [[OpenOnDemand]] ====&lt;br /&gt;
&lt;br /&gt;
==== Big Data course on Beocat! [[BigDataOnBeocat]] ====&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which uses the information in that job script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
* Running [[RSICC|RSICC codes]]&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any details pertinent to your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [https://libera.chat/guides/connect Libera chat servers] in the channel #beocat. This is ''especially'' useful if you have a quick question; you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' and/or '''kylehutson''' in your message, and it should grab our attention. [[Special:WebChat|Available from a web browser here.]]&lt;br /&gt;
&lt;br /&gt;
For interactive assistance, we offer a weekly open support session as mentioned in our calendar down below. Alternatively, we can often schedule a time to meet with you individually. You just need to send us an e-mail and provide us with the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H4&amp;gt;&lt;br /&gt;
Again, when you email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu] please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/H4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We are now on [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
{{#widget:Twitter timeline|id=KSUBeocat|count=6}}&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing resources to Beocat earns you priority access.&lt;br /&gt;
&lt;br /&gt;
== External Computing Resources ==&lt;br /&gt;
&lt;br /&gt;
We have access to supercomputing resources at other sites in the country through&lt;br /&gt;
the XSEDE (eXtreme Science and Engineering Discovery Environment) portal.&lt;br /&gt;
We have a large allocation of core-hours that can be used for testing and running&lt;br /&gt;
software, plus each user can apply for their own allocation if needed.&lt;br /&gt;
These resources can allow users to run jobs when they cannot get enough&lt;br /&gt;
access on Beocat, but they are especially useful when Beocat lacks the needed&lt;br /&gt;
resources, such as access to 4 TB nodes on Bridges2, more 64-bit&lt;br /&gt;
GPUs, or Matlab licenses.  Click [[XSEDE|here]] to see what resources &lt;br /&gt;
we have access to and for directions on how to use them.&lt;br /&gt;
Then contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how to access our XSEDE resources.&lt;br /&gt;
&lt;br /&gt;
We also have free unlimited access to the Open Science Grid.&lt;br /&gt;
This is a high-throughput computing environment designed to efficiently&lt;br /&gt;
run lots of small jobs by spreading them across supercomputing systems in the&lt;br /&gt;
U.S. and Europe to use spare compute cycles donated to this project.  Beocat is&lt;br /&gt;
one of those systems that runs outside OSG jobs when our users are not fully&lt;br /&gt;
utilizing all our compute nodes.  For more information on how to get an OSG&lt;br /&gt;
account and take advantage of this resource, click [[OSG|here]].&lt;br /&gt;
For help in getting access to OSG, email [mailto:daveturner@ksu.edu Dr. Dave Turner].&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]]&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]]&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar &lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com &lt;br /&gt;
|color=711616 &lt;br /&gt;
|view=AGENDA &lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=550</id>
		<title>LinuxBasics</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=550"/>
		<updated>2020-01-31T01:01:04Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Disclaimer:''' This is a ''very'' large topic, and much too broad to be covered on a single support page. There are many other sites (yes, entire sites) which cover the topic in more detail. We'll link to some of them below. This page is meant to be just the essentials.&lt;br /&gt;
&lt;br /&gt;
== Logging in for the first time ==&lt;br /&gt;
To login to Beocat, you first need an &amp;quot;SSH Client&amp;quot;. [[wikipedia:Secure_Shell|SSH]] (short for &amp;quot;secure shell&amp;quot;) is a protocol that allows secure communication between two computers. We recommend the following.&lt;br /&gt;
* Windows&lt;br /&gt;
** [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY] is by far the most common SSH client, both for Beocat and in the world.&lt;br /&gt;
** [http://mobaxterm.mobatek.net/ MobaXterm] is a fairly new client with some nice features, such as being able to SCP/SFTP (see below), and running X (which isn't terribly useful on Beocat, but might be if you connect to other Linux hosts).&lt;br /&gt;
** [http://www.cygwin.com/ Cygwin] is for those that would rather be running Linux but are stuck on Windows. It's purely a text interface.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** OS-X has SSH built in, accessible through an application called &amp;quot;Terminal&amp;quot;. It's not great, but it will work for most Beocat users.&lt;br /&gt;
** [http://www.iterm2.com/#/section/home iTerm2] is the terminal application we prefer.&lt;br /&gt;
* Others&lt;br /&gt;
** There are [[wikipedia:Comparison_of_SSH_clients|many SSH clients]] for many different platforms available. While we don't have experience with many of these, any should be sufficient for access to Beocat.&lt;br /&gt;
&lt;br /&gt;
You'll need to connect your client (via the SSH protocol, if your client allows multiple protocols) to headnode.beocat.ksu.edu.&lt;br /&gt;
&lt;br /&gt;
For command-line tools, the command to connect is&lt;br /&gt;
 ssh ''username''@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
Your username is your [http://eid.ksu.edu K-State eID] name and the password is your eID password.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' When you type your password, nothing shows up on the screen, not even asterisks.&lt;br /&gt;
&lt;br /&gt;
You'll know you are successfully logged in when you see a prompt that says&lt;br /&gt;
 (''machinename'':~) ''username''%&lt;br /&gt;
where ''machinename'' is the name of the machine you've logged into (currently either 'athena' or 'minerva') and ''username'' is your eID username&lt;br /&gt;
&lt;br /&gt;
== Transferring files (SCP or SFTP) ==&lt;br /&gt;
Usually, one of the first things people want to do is to transfer files into or out of Beocat. To do so, you need to use [[wikipedia:Secure_copy|SCP]] (secure copy) or [[wikipedia:SSH_File_Transfer_Protocol|SFTP]] (SSH FTP or Secure FTP). Again, there are multiple programs that do this.&lt;br /&gt;
* Windows&lt;br /&gt;
** Putty (see above) has PSCP and PSFTP programs (both are included if you run the installer). It is a command-line interface (CLI) rather than a graphical user interface (GUI).&lt;br /&gt;
** MobaXterm (see above) has a built-in GUI SFTP client that automatically changes the directories as you change them in your SSH session.&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] (client) has an easy-to-use GUI. Be sure to use 'SFTP' mode rather than 'FTP' mode.&lt;br /&gt;
** [http://winscp.net/eng/index.php WinSCP] is another easy-to-use GUI.&lt;br /&gt;
** Cygwin (see above) has CLI scp and sftp programs.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] is also available for OS-X.&lt;br /&gt;
** Within terminal or iTerm, you can use the 'scp' or 'sftp' programs.&lt;br /&gt;
* Linux&lt;br /&gt;
** FileZilla also has a GUI linux version, in addition to the CLI tools.&lt;br /&gt;
&lt;br /&gt;
=== Using a Command-Line Interface (CLI) ===&lt;br /&gt;
You can safely ignore this section if you're using a graphical interface (GUI). We highly recommend using a GUI when first learning how to use Beocat.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;First test case&amp;lt;/u&amp;gt;: transfer a file called myfile.txt in your current folder to your home directory on Beocat. For these examples, I use bold text to show what you type and plain text to show Beocat's response&lt;br /&gt;
&lt;br /&gt;
Using SCP:&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Note the colon at the end of the 'scp' line.&lt;br /&gt;
&lt;br /&gt;
Using SFTP&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/kylehutson/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
SFTP is interactive, so this is a two-step process. First, you connect to Beocat, then you transfer the file. As long as the system gives the &amp;lt;code&amp;gt;sftp&amp;gt; &amp;lt;/code&amp;gt; prompt, you are in the sftp program, and you will remain there until you type 'exit'.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Second test case:&amp;lt;/u&amp;gt; transfer a file called myfile.txt in your current folder to a directory named 'mydirectory' under your home directory on Beocat.&lt;br /&gt;
&lt;br /&gt;
Here we run into one of the problems with scp: there is no easy way to create 'mydirectory' if it doesn't already exist. In that case, you must log in via ssh (as shown above) and create the directory using the 'mkdir' command (see Basic Linux Commands below).&lt;br /&gt;
&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 &lt;br /&gt;
An alternative version. If the colon is immediately followed by a slash, the directory name is taken from the root, rather than your home directory. So, given that your home directory on Beocat is /homes/''username'', we could instead type&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:/homes/''username''/mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 sftp ''username''@headnode.beocat.ksu.edu&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''mkdir mydirectory'''&lt;br /&gt;
 [Note, if this directory already exists, you will get the response &amp;quot;Couldn't create directory: Failure&amp;quot;]&lt;br /&gt;
 sftp&amp;gt; '''cd mydirectory'''&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/''username''/mydirectory/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''quit'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Third test case:&amp;lt;/u&amp;gt; copy myfile.txt from your home directory on Beocat to your current folder.&lt;br /&gt;
&lt;br /&gt;
Using scp:&lt;br /&gt;
 scp ''username''@headnode.beocat.ksu.edu:myfile.txt .&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''get myfile.txt'''&lt;br /&gt;
 Fetching /homes/''username''/myfile.txt to myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
== Basic Linux Commands ==&lt;br /&gt;
Again, this guide is very limited, covering mostly directory navigation and basic file commands. [http://www.ee.surrey.ac.uk/Teaching/Unix/ Here] is a pretty decent tutorial if you want to dig deeper. If you want more, entire books have been written on the subject.&lt;br /&gt;
&lt;br /&gt;
=== The Lingo ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!''Term''&lt;br /&gt;
!''Definition''&lt;br /&gt;
|-&lt;br /&gt;
|Directory&lt;br /&gt;
|A &amp;quot;Folder&amp;quot; in Windows or OS-X terms. A location where files or other directories are stored. The current directory is sometimes represented as `.` and the parent directory can be referenced as `..`&lt;br /&gt;
|-&lt;br /&gt;
|Shell&lt;br /&gt;
|The interface or environment under which you can run commands. There is a section below on shells&lt;br /&gt;
|-&lt;br /&gt;
|SSH&lt;br /&gt;
|Secure Shell. A protocol that encrypts data and can give access to another system, usually by a username and password&lt;br /&gt;
|-&lt;br /&gt;
|SCP&lt;br /&gt;
|Secure Copy. Copying to or from a remote system using part of SSH&lt;br /&gt;
|-&lt;br /&gt;
|path&lt;br /&gt;
|The list of directories which are searched when you type the name of a program. There is a section below on this&lt;br /&gt;
|-&lt;br /&gt;
|ownership&lt;br /&gt;
|Every file and directory has an user and a group attached to it, called its owners. These affect permissions.&lt;br /&gt;
|-&lt;br /&gt;
|permissions&lt;br /&gt;
|The ability to read, write, and/or execute a file. Permissions are based on ownership&lt;br /&gt;
|-&lt;br /&gt;
|switches&lt;br /&gt;
|Modifiers to a command-line program, usually in the form of -(letter) or --(word). Several examples are given below, such as the '-a' on the 'ls' command&lt;br /&gt;
|-&lt;br /&gt;
|pipes and redirects&lt;br /&gt;
|Changes the input (often called 'stdin') and/or output (often called stdout) to a program or a file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Linux Command Line Cheat Sheet ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+File System Navigation&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|pwd&lt;br /&gt;
|&amp;quot;Print working directory&amp;quot;, Where am I now?&lt;br /&gt;
|&amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;/homes/mozes&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls&lt;br /&gt;
|Lists files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -lh&lt;br /&gt;
|Lists files and folders with perms size and ownership&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -lh ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;-rw-r--r--  1 mozes    mozes_users   1    Jul 13  2011 NewFile&lt;br /&gt;
drwxr-xr-x  9 mozes    mozes_users   9.0K Apr 12  2010 NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -a&lt;br /&gt;
|Lists all files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -a ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;. .. .bashrc .bash_profile .tcshrc NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cd&lt;br /&gt;
|Changes directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ..&lt;br /&gt;
|Changes to parent directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ..&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd -&lt;br /&gt;
|Changes to the previous directory you were in&lt;br /&gt;
|&amp;lt;code&amp;gt;cd -&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ~&lt;br /&gt;
|Changes to your home directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ~&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Working with files&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|file&lt;br /&gt;
|Identifies the type of object a file is&lt;br /&gt;
|&amp;lt;code&amp;gt;file NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile: a /usr/bin/python script, ASCII text executable&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cat&lt;br /&gt;
|Prints the contents of one or more files&lt;br /&gt;
|&amp;lt;code&amp;gt;cat NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;This is line one&lt;br /&gt;
This is line two&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp&lt;br /&gt;
|copy a file&lt;br /&gt;
|&amp;lt;code&amp;gt;cp OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp -i&lt;br /&gt;
|copy a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp -r&lt;br /&gt;
|copy a directory, including contents&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -r OldFolder NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv&lt;br /&gt;
|move, or rename, a file&lt;br /&gt;
|&amp;lt;code&amp;gt;mv OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv -i&lt;br /&gt;
|move, or rename, a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;mv -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm&lt;br /&gt;
|remove a file&lt;br /&gt;
|&amp;lt;code&amp;gt;rm NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rm -i&lt;br /&gt;
|remove a file, ask to be sure (useful with -r)&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -i NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;remove NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm -r&lt;br /&gt;
|remove a directory and its contents&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -r NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mkdir&lt;br /&gt;
|creates a directory&lt;br /&gt;
|&amp;lt;code&amp;gt;mkdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rmdir&lt;br /&gt;
|removes an empty directory&lt;br /&gt;
|&amp;lt;code&amp;gt;rmdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|touch&lt;br /&gt;
|creates an empty file&lt;br /&gt;
|&amp;lt;code&amp;gt;touch TempFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Finding files and directories with [http://linux.die.net/man/1/find find]&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt;&lt;br /&gt;
| finds all files and folders within &amp;lt;directory&amp;gt;&lt;br /&gt;
| find ~/&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '&amp;lt;filename&amp;gt;'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that match &amp;lt;filename&amp;gt;&lt;br /&gt;
| find ~/ -iname 'hello.qsub'&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '*&amp;lt;partialmatch&amp;gt;*'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that partially match &amp;lt;partialmatch&amp;gt;&lt;br /&gt;
| find ~/ -iname '*.qsub*'&lt;br /&gt;
|}&lt;br /&gt;
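The options above can be combined; for example, adding -type f restricts matches to regular files. A short sketch using a throwaway directory (the names are illustrative):

```shell
mkdir -p demo
touch demo/hello.qsub demo/HELLO.QSUB demo/notes.txt
# case-insensitive name match, regular files only (directories excluded)
find demo -type f -iname '*.qsub'
```

This prints the two .qsub files and skips notes.txt.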
&lt;br /&gt;
Other useful commands include &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt; followed by any command name above will show the manual page for that command, including many other useful options. &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt; gives an overview of the processes currently running on the host you are connected to. &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt; lets you page through a file's contents using &amp;lt;PgUp&amp;gt; and &amp;lt;PgDn&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Editing Text Files ===&lt;br /&gt;
If you're new to Linux, the editor you will probably want to use is 'nano'. It works much like 'Notepad' on Windows or 'TextEdit' on macOS. Note that you cannot use your mouse to move around within the document as you can on your local computer; you must use the arrow keys instead.&lt;br /&gt;
&lt;br /&gt;
So, if I wanted to edit my .bashrc (as shown below), and I was already in my home directory (see above), I would type&lt;br /&gt;
 nano .bashrc&lt;br /&gt;
&lt;br /&gt;
While in nano, a list of available actions appears at the bottom of the screen. &amp;lt;Ctrl&amp;gt; is represented by a caret (^), so to exit (labeled ^X at the bottom of the screen), type &amp;lt;Ctrl&amp;gt;-x. nano then asks whether you want to save and exit (Y), discard changes and exit (N), or cancel and go back to editing (&amp;lt;Ctrl&amp;gt;-c).&lt;br /&gt;
&lt;br /&gt;
If you do a significant amount of text editing in Linux, you'll probably want to switch to a more powerful editor, such as vim. The usage of vim is beyond the scope of this document. It is not at all intuitive to the beginning user, but with a little practice it becomes a much faster way of editing text files. If you're interested in using vim, [http://www.openvim.com/tutorial.html there is a nice tutorial here].&lt;br /&gt;
&lt;br /&gt;
=== Shells ===&lt;br /&gt;
==== What is a Shell? ====&lt;br /&gt;
In this case, I don't believe I can explain shells better than [[wikipedia:Shell_(computing)|this Wikipedia article]] does.&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
Elsewhere at Kansas State University, the default shell is set to tcsh. tcsh stands for &amp;quot;TENEX C Shell.&amp;quot; It is considered a replacement for csh and shares many of its features. If you have experience with either csh or tcsh, you'll probably feel right at home. This was Beocat's default shell until July 2013; if your account predates then, your shell is probably still tcsh.&lt;br /&gt;
&lt;br /&gt;
But what if you don't want or like tcsh? We have other shells available on Beocat as well.&lt;br /&gt;
==== bash ====&lt;br /&gt;
[http://www.gnu.org/software/bash/ Bash] is the de facto standard shell in most Linux distributions today. Bash is common and probably what most of you are used to. As of July 2013, bash is our default shell, and all new users are set to bash initially. [https://software-carpentry.org/ Software Carpentry] teaches classes on several subjects specifically targeting researchers, including the bash shell, and their documentation is all freely available. [http://swcarpentry.github.io/shell-novice/ Here is a link to their excellent tutorial on using BASH.] Most of our documentation assumes you are using BASH.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;bash configuration files:&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This section gets into some minutiae with the way our job scheduler interacts with bash. If you're trying to solve a problem, read on, otherwise you can probably skip this section.&lt;br /&gt;
&lt;br /&gt;
Bash has three user-editable configuration files: &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;~/.bash_logout&amp;lt;/code&amp;gt;. We'll look at the two more relevant ones: &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Bash classifies each session in one of three ways: '''login''', '''interactive''', or '''none''' (a session can be both login and interactive).&lt;br /&gt;
&lt;br /&gt;
Normally, shells that are '''login''' read &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, shells that are '''interactive''' (but not login) read &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;, and '''none''' shells read neither. (By common convention, &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt; also sources &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;, so settings in &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; usually apply to login shells too.)&lt;br /&gt;
&lt;br /&gt;
sbatch jobs are '''login''', srun jobs are '''login+interactive''', and logging into Beocat so you can type commands is '''login+interactive'''. You will rarely get '''none'''. For any session that isn't '''interactive''', your sourced files must not output anything to the screen, or they can break scp and sftp file transfers.&lt;br /&gt;
&lt;br /&gt;
If your startup commands are ''quiet'' (they print nothing) and you want them in all shells, you can put them in your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. If they output ''anything'' to the screen, you must put them in your &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
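As an illustrative sketch (these file contents are examples, not Beocat defaults), quiet settings such as PATH changes belong in ~/.bashrc, while anything that prints belongs in ~/.bash_profile:

```shell
# Illustrative ~/.bashrc fragment: quiet, so it is safe for every
# kind of session, including scp and sftp
export PATH="$HOME/bin:$PATH"

# Illustrative ~/.bash_profile fragment: it prints to the screen,
# so it must stay out of ~/.bashrc
echo "Welcome back, $USER"
```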
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
[http://zsh.sourceforge.net/ zsh] is an alternative to bash and tcsh. It tends to support more complex features than either of the other two while using a syntax remarkably similar to bash. Unless specifically noted, wherever we say '''change your shell to bash''', &amp;lt;tt&amp;gt;zsh&amp;lt;/tt&amp;gt; should work as well.&lt;br /&gt;
&lt;br /&gt;
==== Changing Shells ====&lt;br /&gt;
Previously, we gave you the option of using a &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; file to modify your shell. This is no longer supported; if you have issues with your shell, paths, or environment variables, we will ask you to delete your &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; file and change your shell via the method below.&lt;br /&gt;
&lt;br /&gt;
You can change your shell via &amp;lt;code&amp;gt;chsh&amp;lt;/code&amp;gt; on either of the headnodes (athena/minerva). This does not need to be re-done if you've already changed it to your preferred shell in the past.&lt;br /&gt;
&lt;br /&gt;
Use whichever of the following three lines applies:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
/usr/local/bin/chsh -s bash &amp;amp;&amp;amp; bash -l&lt;br /&gt;
/usr/local/bin/chsh -s tcsh &amp;amp;&amp;amp; tcsh -l&lt;br /&gt;
/usr/local/bin/chsh -s zsh &amp;amp;&amp;amp; zsh -l&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Changing your PATH ===&lt;br /&gt;
Typically, you don't have to change your PATH, but it is useful to know what your PATH is and what it does. The PATH is the list of directories searched when you type the name of a program. Note that by default the current directory is NOT included in the path, so to run a program called MyProgram in the current directory you could NOT simply type 'MyProgram'; you would instead type &amp;lt;code&amp;gt;./MyProgram&amp;lt;/code&amp;gt; (where the '.' represents the current directory).&lt;br /&gt;
&lt;br /&gt;
To change your PATH, you first need to identify which shell you are using. If you do not know, run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
ps | awk '/sh/ {print $4}'&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
Using a text editor (as shown above), edit the file called .tcshrc in your home directory and add the line below, replacing /usr/local/bin with the directory you want added to your PATH.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
setenv PATH /usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== bash ====&lt;br /&gt;
Using a text editor (as shown above), edit the file called .bashrc in your home directory and add the line below, replacing /usr/local/bin with the directory you want added to your PATH.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=/usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
Using a text editor (as shown above), edit the file called .zshrc in your home directory and add the line below, replacing /usr/local/bin with the directory you want added to your PATH.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=&amp;quot;/usr/local/bin:$PATH&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Ownership and Permissions ===&lt;br /&gt;
Every file and directory has a user and a group associated with it. You can view ownership information by using the '-l' switch on ls. By default on Beocat, files you create have a user ownership of your username (i.e., your eID) and a group ownership of username_users. So, if I were logged in as 'myusername' and had a single file in my home directory called MyProgram, the result of typing 'ls -l' would be something like this:&lt;br /&gt;
 total 0&lt;br /&gt;
 -rwxr-x--- 1 myusername myusername_users 79 May 31  2011 MyProgram&lt;br /&gt;
This tells us several things.&lt;br /&gt;
* The first column ('-rwxr-x---') is permissions (covered below)&lt;br /&gt;
* The second column ('1') is the number of links to this file. You can safely ignore this (unless you're both masochistic and interested in filesystem details)&lt;br /&gt;
* The third column ('myusername') shows the user ownership&lt;br /&gt;
* The fourth column ('myusername_users') shows the group ownership&lt;br /&gt;
* The fifth column ('79') gives the size of the file in bytes&lt;br /&gt;
* The next columns ('May 31  2011'), as you have probably guessed, give the date the file was last changed&lt;br /&gt;
* The final column ('MyProgram') is the name of the file&lt;br /&gt;
&lt;br /&gt;
So why is this interesting to us? Because whenever things ''don't'' work, it's usually because of file ownership or permissions. Looking at these often gives us some useful diagnostic information.&lt;br /&gt;
&lt;br /&gt;
The permissions field shows us who has permissions to do what with this file. It is always 10 characters. The first character (-) is usually either a '-' for a regular file or a 'd' for a directory. The next 9 characters are broken into three groups of three, with each group showing read (r), write (w), and execute (x) permissions for the owner, group, and world, in that order.&lt;br /&gt;
* The first group (rwx) shows permissions for the owner (myusername). The owner here has read, write, and execute permissions&lt;br /&gt;
* The next group (r-x) shows permissions for the group (myusername_users). The group here has read and execute permissions, but cannot write.&lt;br /&gt;
* The last group (---) shows permissions for the rest of the world. The world has no permissions to read, write, or execute.&lt;br /&gt;
&lt;br /&gt;
When you create a shell script with a text editor, and sometimes when you copy programs to Beocat via SCP, the execute flag is not set, and the permissions string looks more like (-rw-r--r--). To fix this, you need to give yourself permission to execute the program. This is done with the 'chmod' (change mode) command. 'chmod' can have a long and confusing syntax, but since by far the most common task is giving yourself execute permission, here is the command for that:&lt;br /&gt;
 chmod u+x MyProgram&lt;br /&gt;
This changes the permissions so that the user ('u', i.e., the owner) adds ('+') execute permission ('x').&lt;br /&gt;
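A quick sketch you can try in any scratch directory (the script name is hypothetical):

```shell
# Create a tiny shell script; text editors (and some scp transfers)
# leave the execute bit unset
printf '#!/bin/sh\necho hello\n' > MyProgram
chmod u+x MyProgram   # the user ('u') adds ('+') execute permission ('x')
./MyProgram           # now runs, printing: hello
```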
&lt;br /&gt;
For more complex ownership or permissions changes, please feel free to contact the Beocat staff.&lt;br /&gt;
&lt;br /&gt;
=== Manual (man) pages ===&lt;br /&gt;
Most commands have a complex set of switches that modify the amount or type of information they display. To find out which switches are available, or how a program expects its input, use the manual pages by typing &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt; followed by the command name. Using one of the most common Linux commands as an example, take a look at the output of 'man ls'. It shows that ls has over 50 switches available, ranging from which files to include, to how to display file sizes, to sort order and more. (It isn't pasted here because it's over 200 lines long!) To navigate a 'manpage', use the up-arrow and down-arrow keys. Press 'q' to quit.&lt;br /&gt;
&lt;br /&gt;
=== Pipes and Redirects ===&lt;br /&gt;
Typically a Linux program takes data from the keyboard and outputs data to the screen. In Unix and Linux terminology, the keyboard is the default 'stdin' (pronounced &amp;quot;standard in&amp;quot;) and the screen is the default 'stdout' (pronounced &amp;quot;standard out&amp;quot;). Many times, we want to take data from somewhere else (like a file, or the output of another program) and send it to yet another location. These redirectors are:&lt;br /&gt;
{|&lt;br /&gt;
|cmd &amp;gt; filename&lt;br /&gt;
|Redirect output from cmd to filename ||&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;gt;&amp;gt; filename&lt;br /&gt;
|Redirect output from cmd and append to filename&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;lt; filename&lt;br /&gt;
|Redirect input to cmd from filename&lt;br /&gt;
|-&lt;br /&gt;
| cmd1 &amp;amp;#124; cmd2&lt;br /&gt;
| Use the output from cmd1 as the input to cmd2&lt;br /&gt;
|}&lt;br /&gt;
Here is a quick example. Let's say I have thousands of files in a directory, and I want a list of those that end in '.sh'.&lt;br /&gt;
'ls' by itself scrolls past so much output that I can't read even a fraction of it, so I redirect the output to a file:&lt;br /&gt;
 ls &amp;gt; ~/filelist.txt&lt;br /&gt;
That gives me all the files in the current folder and saves them in my home directory in 'filelist.txt'.&lt;br /&gt;
A quick look through the file in my favorite editor tells me this is still going to take too long, so I need another step. The 'grep' program is a commonly-used program to perform pattern matching. The syntax of 'grep' is beyond the scope of this document, but take my word for it that&lt;br /&gt;
 grep '\.sh$'&lt;br /&gt;
will return all lines that end in .sh.&lt;br /&gt;
&lt;br /&gt;
We can now redirect grep's input from the file we just created:&lt;br /&gt;
 grep '\.sh$' &amp;lt; ~/filelist.txt&lt;br /&gt;
Great! We now have our list. However, we wanted to save this as filelist.txt, and instead we have another list that we have to copy-and-paste. Instead of redirecting to a file, we'll use the vertical bar '|' (which we often term a &amp;quot;pipe&amp;quot;) to send the output of one command to another.&lt;br /&gt;
 ls | grep '\.sh$' &amp;gt; ~/filelist.txt&lt;br /&gt;
This time the output of 'ls' is ''not'' redirected to a file, but is piped to the next command (grep). The output of grep (all our .sh files), instead of being sent to the screen, is redirected to the file ~/filelist.txt.&lt;br /&gt;
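The whole pipeline can be tried end-to-end in a throwaway directory (all names here are illustrative):

```shell
mkdir -p pipedemo
cd pipedemo
touch alpha.sh beta.sh notes.txt
# ls feeds grep through the pipe; grep's output goes to a file
# (the output file's name doesn't end in .sh, so it can't match itself)
ls | grep '\.sh$' > filelist.txt
cat filelist.txt
```

cat shows the two .sh files, one per line, and notes.txt is filtered out.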
&lt;br /&gt;
This example is a very simple demonstration of how pipes and redirects work. Many more examples with complex structures can be found at http://www.ibm.com/developerworks/linux/library/l-lpic1-v3-103-4/index.html&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=504</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=504"/>
		<updated>2019-12-10T03:16:04Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: /* Help! My job isn't going to finish in the time I specified. Can I change the time requirement? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 2.1PB shared with /homes and /scratch || cephfs || Slower than /homes; very old files are culled automatically&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 2.1PB shared with /bulk and /scratch || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /scratch || Shared || 2.1PB shared with /bulk and /homes || cephfs || Fast shared tmp space; files not used in 30 days are automatically culled&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || ext4 || Good for I/O intensive jobs&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory&lt;br /&gt;
is your home directory, and it will be fast enough for most tasks.&lt;br /&gt;
I/O-intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script. This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot;? ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because their owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of the spare computing cycles.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled onto these &amp;quot;owned&amp;quot; machines by users outside the owning group. If a &amp;quot;killable&amp;quot; job is running on one of these machines and the owner submits a job, the &amp;quot;killable&amp;quot; job is returned to the queue (killed off, as it were) and restarted at some future point. The job will still complete eventually, and if it uses a checkpointing algorithm it may even complete faster. The trade-off is that some applications need a long runtime and cannot resume from partial output, meaning a killable job could be restarted over and over, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=24:00:00). Some users still find this a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 will tell us to not mark your job as killable.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes it can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking it killable, and it will probably start sooner. If your job is long-running and doesn't checkpoint (save its state so it can resume a previous session), marking it killable could cause it to take longer to complete.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; when that happens the job fails and we have to clean it up manually, so we enforce this rule. The warning message is there to tell you that the script should contain a line, in most cases #!/bin/tcsh or #!/bin/bash, indicating which program should be used to run it. When the line is missing, your default shell is used to execute the script (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error means you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it does not point to an executable file, so the script cannot run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, see the documentation [[SlurmBasics|here]] for how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
&lt;br /&gt;
== Help! My error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In extreme circumstances and depending on the job requirements, we '''may''' be able to manually intervene. This process prevents other users from using the node(s) you are currently on, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message means that some, but not all, of the nodes your job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. Whether this matters depends on how message-heavy your job is. If you would rather not see this warning, you can request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens to my data when I leave K-State? ==&lt;br /&gt;
First of all, although we use eID credentials, we are not tied to K-State's central IT policies that apply to employees or students leaving the university. As long as you keep your eID password current, you still have access to Beocat. Once we deem your data to be &amp;quot;stale&amp;quot;, we will archive it and disable your account. We have no written policy on when this happens, because we only do so as necessity dictates, but generally speaking, data modified within the last two years will not be marked as stale. If your account is disabled for this reason, you will have to apply for a new account and have your data un-archived.&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lower-case name of your project. Project membership can be managed [https://account.beocat.ksu.edu/project here].&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change the group to the name assigned by the Beocat admins&lt;br /&gt;
** &amp;lt;tt&amp;gt;chgrp -R $group_name $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the directory writeable and sticky for the group&lt;br /&gt;
** &amp;lt;tt&amp;gt;chmod g+ws $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Change your umask to 002 (your file transfer utility probably has a setting for this as well). This step needs to be done by all group members.&lt;br /&gt;
** &amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
[ &amp;quot;$GROUPS&amp;quot; == &amp;quot;$(getent group $group_name | cut -d: -f3)&amp;quot; ] || newgrp $group_name&lt;br /&gt;
umask 002&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; This snippet needs to go at the very end of your &amp;lt;tt&amp;gt;.bashrc&amp;lt;/tt&amp;gt; file.&lt;br /&gt;
&lt;br /&gt;
* Finally, log out and log back in&lt;br /&gt;
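To confirm the umask is doing its job after logging back in, here is a quick sketch (directory and file names are hypothetical):

```shell
umask 002
mkdir shared-demo
touch shared-demo/report.txt
# with umask 002, new files come out group-writable: -rw-rw-r--
ls -l shared-demo/report.txt
```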
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to call; for example, if you need information on submitting jobs to Beocat, type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;' to bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what are called info pages. For example, if you need information on finding a file, you can use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=502</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=502"/>
		<updated>2019-12-04T19:46:03Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Clarified how to create a new project.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 2.1PB shared with /bulk and /scratch || cephfs || Slower than /homes; very old files are culled automatically&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 2.1PB shared with /bulk and /scratch || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /scratch || Shared || 2.1PB shared with /bulk and /homes || cephfs || Fast shared tmp space; files not used in 30 days are automatically culled&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || ext4 || Good for I/O intensive jobs&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory&lt;br /&gt;
is your home directory, and it will be fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SLURM_JOB_ID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable:1&amp;quot; or &amp;quot;killable:0&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (--gres=killable:1) jobs are jobs that can be scheduled to these &amp;quot;owned&amp;quot; machines by users outside the owning group. If a &amp;quot;killable&amp;quot; job is running on one of these owned machines and the owner submits a job, the &amp;quot;killable&amp;quot; job is returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it makes use of a checkpointing algorithm it may even complete faster. The trade-off in marking a job &amp;quot;killable&amp;quot; is that some applications need a significant amount of runtime and cannot resume from partial output, meaning the job may get restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=24:00:00). Some users still find this a hindrance, so we created a way to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Disabling killable ===&lt;br /&gt;
Specifying --gres=killable:0 tells us not to mark your job as killable.&lt;br /&gt;
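As a sketch (the job name, time limit, and script body below are illustrative placeholders, not Beocat defaults), the flag can sit alongside the other #SBATCH directives in a job script:&lt;br /&gt;

```shell
#!/bin/bash
# Hypothetical job script: the #SBATCH lines are read by Slurm, not by bash.
#SBATCH --job-name=long_sim      # illustrative job name
#SBATCH --time=3-00:00:00        # a long job that cannot checkpoint
#SBATCH --gres=killable:0        # never mark this job killable

# The real work of the job would go here; this placeholder just reports status.
msg="running on a non-killable allocation"
echo "$msg"
```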
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes it can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking it killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state so it can resume a previous session), marking it killable could cause it to take longer to complete.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; when this happens the job fails and we have to clean it up manually, so we enforce this rule. The warning message is there simply to remind you that the script should contain such a line, in most cases #!/bin/tcsh or #!/bin/bash, to indicate which program should be used to run the script. When the line is missing from a script, your default login shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it does not point to an executable file, so the script will not be able to run. Most likely you wanted #!/bin/bash instead of whatever is there.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SlurmBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run sbatch for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;--time=0-10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
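For example, the pieces of a ten-hour submission line fit together like this (myjob.sh is a placeholder script name; the time format is Slurm's D-HH:MM:SS). Nothing below actually submits a job; it just assembles the command so the parts are visible:&lt;br /&gt;

```shell
# Assemble the submission command as a string; myjob.sh is hypothetical.
time_limit="0-10:00:00"                     # 0 days, 10 hours, 0 minutes, 0 seconds
cmd="sbatch --time=$time_limit myjob.sh"    # the --time flag goes before the script
echo "$cmd"
```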
&lt;br /&gt;
== Help my error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In fact, even the administrators cannot change the run-time requirement of a particular job. In extreme circumstances, and depending on the job requirements, we '''may''' be able to intervene manually. This process prevents other users from using the node(s) you are currently using, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Take a look at our documentation on [[Installed_software#Perl|Perl]]&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would like to avoid this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;--gres=fabric:ib:1&amp;lt;/code&amp;gt;&lt;br /&gt;
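A minimal sketch of requesting the fabric inside a job script (the script body is illustrative; only the #SBATCH line comes from the text above):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --gres=fabric:ib:1   # run only on nodes with InfiniBand
# The mpirun/srun invocation would follow here; this placeholder
# just records which fabric was requested.
fabric="ib"
echo "requested fabric: $fabric"
```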
&lt;br /&gt;
== What happens to my data when I leave K-State? ==&lt;br /&gt;
First of all, although we use eID credentials, we are not tied to K-State's central IT policies that apply to employees or students leaving the university. As long as you keep your eID password current, you still have access to Beocat. Once we deem your data to be &amp;quot;stale&amp;quot;, we will archive it and disable your account. We have no written policy on when we do this, because we only do so as necessity dictates, but generally speaking, data modified within the last two years will not be marked as stale. If your account is disabled for this reason, you will have to apply for a new account and ask for your data to be un-archived.&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area.&lt;br /&gt;
&lt;br /&gt;
If you do not have a project, send a request via email to beocat@cs.ksu.edu. Note that these projects are generally reserved for tenure-track faculty, with a single project per eID.&lt;br /&gt;
&lt;br /&gt;
If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; variable in the commands below needs to be replaced with the lowercase name of your project. Project membership can be managed [https://account.beocat.ksu.edu/project here].&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change the group to the name assigned by the Beocat admins&lt;br /&gt;
** &amp;lt;tt&amp;gt;chgrp -R $group_name $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the directory writeable and sticky for the group&lt;br /&gt;
** &amp;lt;tt&amp;gt;chmod g+ws $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Change your umask to 002 (there will probably be a setting for it in your file transfer utilities, also). This step needs to be done by all group members.&lt;br /&gt;
** &amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
[ &amp;quot;$GROUPS&amp;quot; == &amp;quot;$(getent group $group_name | cut -d: -f3)&amp;quot; ] || newgrp $group_name&lt;br /&gt;
umask 002&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt; This snippet needs to go at the very end of your &amp;lt;tt&amp;gt;.bashrc&amp;lt;/tt&amp;gt; file.&lt;br /&gt;
&lt;br /&gt;
* Finally, log out and log back in.&lt;br /&gt;
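Put together, the one-time directory setup looks like the sketch below. Here &amp;lt;tt&amp;gt;$group_name&amp;lt;/tt&amp;gt; is set to your current primary group purely so the example is self-contained; on Beocat it would be the project group assigned by the admins, and the directory name is a placeholder:&lt;br /&gt;

```shell
# Placeholder values: substitute your real project group and directory.
group_name="$(id -gn)"        # on Beocat: the group the admins assigned
directory="shared_project"    # on Beocat: a directory in the owner's home

mkdir -p "$directory"                 # create the shared directory
chgrp -R "$group_name" "$directory"   # hand it over to the project group
chmod g+ws "$directory"               # group-writable, setgid so new files inherit the group
```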
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to call; for example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man sbatch&amp;lt;/code&amp;gt;'. This will bring up the manual for sbatch.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=498</id>
		<title>LinuxBasics</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=498"/>
		<updated>2019-09-09T22:07:27Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added link to SW carpentry's tutorial&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Disclaimer:''' This is a ''very'' large topic, and much too broad to be covered on a single support page. There are many other sites (yes, entire sites) which cover the topic in more detail. We'll link to some of them below. This page is meant to cover just the essentials.&lt;br /&gt;
&lt;br /&gt;
== Logging in for the first time ==&lt;br /&gt;
To login to Beocat, you first need an &amp;quot;SSH Client&amp;quot;. [[wikipedia:Secure_Shell|SSH]] (short for &amp;quot;secure shell&amp;quot;) is a protocol that allows secure communication between two computers. We recommend the following.&lt;br /&gt;
* Windows&lt;br /&gt;
** [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY] is by far the most common SSH client, both for Beocat and in the world.&lt;br /&gt;
** [http://mobaxterm.mobatek.net/ MobaXterm] is a fairly new client with some nice features, such as being able to SCP/SFTP (see below), and running X (which isn't terribly useful on Beocat, but might be if you connect to other Linux hosts).&lt;br /&gt;
** [http://www.cygwin.com/ Cygwin] is for those that would rather be running Linux but are stuck on Windows. It's purely a text interface.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** OS X has a built-in terminal application called &amp;quot;Terminal&amp;quot; with ssh included. It's not great, but it will work for most Beocat users.&lt;br /&gt;
** [http://www.iterm2.com/#/section/home iTerm2] is the terminal application we prefer.&lt;br /&gt;
* Others&lt;br /&gt;
** There are [[wikipedia:Comparison_of_SSH_clients|many SSH clients]] for many different platforms available. While we don't have experience with many of these, any should be sufficient for access to Beocat.&lt;br /&gt;
&lt;br /&gt;
You'll need to connect your client (via the SSH protocol, if your client allows multiple protocols) to headnode.beocat.ksu.edu.&lt;br /&gt;
&lt;br /&gt;
For command-line tools, the command to connect is&lt;br /&gt;
 ssh ''username''@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
Your username is your [http://eid.ksu.edu K-State eID] name and the password is your eID password.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' When you type your password, nothing shows up on the screen, not even asterisks.&lt;br /&gt;
&lt;br /&gt;
You'll know you are successfully logged in when you see a prompt that says&lt;br /&gt;
 (''machinename'':~) ''username''%&lt;br /&gt;
where ''machinename'' is the name of the machine you've logged into (currently either 'athena' or 'minerva') and ''username'' is your eID username&lt;br /&gt;
&lt;br /&gt;
== Transferring files (SCP or SFTP) ==&lt;br /&gt;
Usually, one of the first things people want to do is to transfer files into or out of Beocat. To do so, you need to use [[wikipedia:Secure_copy|SCP]] (secure copy) or [[wikipedia:SSH_File_Transfer_Protocol|SFTP]] (SSH FTP or Secure FTP). Again, there are multiple programs that do this.&lt;br /&gt;
* Windows&lt;br /&gt;
** Putty (see above) has PSCP and PSFTP programs (both are included if you run the installer). It is a command-line interface (CLI) rather than a graphical user interface (GUI).&lt;br /&gt;
** MobaXterm (see above) has a built-in GUI SFTP client that automatically changes the directories as you change them in your SSH session.&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] (client) has an easy-to-use GUI. Be sure to use 'SFTP' mode rather than 'FTP' mode.&lt;br /&gt;
** [http://winscp.net/eng/index.php WinSCP] is another easy-to-use GUI.&lt;br /&gt;
** Cygwin (see above) has CLI scp and sftp programs.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] is also available for OS-X.&lt;br /&gt;
** Within terminal or iTerm, you can use the 'scp' or 'sftp' programs.&lt;br /&gt;
* Linux&lt;br /&gt;
** FileZilla also has a GUI Linux version, in addition to the CLI tools.&lt;br /&gt;
&lt;br /&gt;
=== Using a Command-Line Interface (CLI) ===&lt;br /&gt;
You can safely ignore this section if you're using a graphical interface (GUI). We highly recommend using a GUI when first learning how to use Beocat.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;First test case&amp;lt;/u&amp;gt;: transfer a file called myfile.txt in your current folder to your home directory on Beocat. For these examples, I use bold text to show what you type and plain text to show Beocat's response&lt;br /&gt;
&lt;br /&gt;
Using SCP:&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Note the colon at the end of the 'scp' line.&lt;br /&gt;
&lt;br /&gt;
Using SFTP&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/kylehutson/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
SFTP is interactive, so this is a two-step process. First, you connect to Beocat, then you transfer the file. As long as the system gives the &amp;lt;code&amp;gt;sftp&amp;gt; &amp;lt;/code&amp;gt; prompt, you are in the sftp program, and you will remain there until you type 'exit'.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Second test case:&amp;lt;/u&amp;gt; transfer a file called myfile.txt in your current folder to a directory named 'mydirectory' under your home directory on Beocat.&lt;br /&gt;
&lt;br /&gt;
Here we run into one of the problems with scp: there is no easy way of creating 'mydirectory' if it doesn't already exist. If it does not exist, you must log in via ssh (as seen above) and create the directory using the 'mkdir' command (see Basic Linux Commands below).&lt;br /&gt;
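One way to avoid a full interactive login is to run mkdir remotely in a single command, e.g. ssh username@headnode.beocat.ksu.edu mkdir -p mydirectory. The -p flag is what makes this safe: it creates the directory only if needed and succeeds either way. Locally the behavior looks like this (directory name is just an example):&lt;br /&gt;

```shell
# mkdir -p succeeds whether or not the directory already exists,
# which is what makes the one-shot remote command safe to rerun.
mkdir -p mydirectory
mkdir -p mydirectory   # second call is a harmless no-op
ls -d mydirectory
```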
&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 &lt;br /&gt;
An alternative version: if the colon is immediately followed by a slash, the directory name is taken from the root rather than your home directory. So, given that your home directory on Beocat is /homes/''username'', we could instead type&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:/homes/''username''/mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 sftp ''username''@headnode.beocat.ksu.edu&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''mkdir mydirectory'''&lt;br /&gt;
 [Note, if this directory already exists, you will get the response &amp;quot;Couldn't create directory: Failure&amp;quot;]&lt;br /&gt;
 sftp&amp;gt; '''cd mydirectory'''&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/''username''/mydirectory/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''quit'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Third test case:&amp;lt;/u&amp;gt; copy myfile.txt from your home directory on Beocat to your current folder.&lt;br /&gt;
&lt;br /&gt;
Using scp:&lt;br /&gt;
 scp ''username''@headnode.beocat.ksu.edu:myfile.txt .&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''get myfile.txt'''&lt;br /&gt;
 Fetching /homes/''username''/myfile.txt to myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
== Basic Linux Commands ==&lt;br /&gt;
Again, this guide is limited mostly to directory navigation and basic file commands. [http://www.ee.surrey.ac.uk/Teaching/Unix/ Here] is a pretty decent tutorial if you want to dig deeper. If you want more, entire books have been written on the subject.&lt;br /&gt;
&lt;br /&gt;
=== The Lingo ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!''Term''&lt;br /&gt;
!''Definition''&lt;br /&gt;
|-&lt;br /&gt;
|Directory&lt;br /&gt;
|A &amp;quot;Folder&amp;quot; in Windows or OS-X terms. A location where files or other directories are stored. The current directory is sometimes represented as `.` and the parent directory can be referenced as `..`&lt;br /&gt;
|-&lt;br /&gt;
|Shell&lt;br /&gt;
|The interface or environment under which you can run commands. There is a section below on shells&lt;br /&gt;
|-&lt;br /&gt;
|SSH&lt;br /&gt;
|Secure Shell. A protocol that encrypts data and can give access to another system, usually by a username and password&lt;br /&gt;
|-&lt;br /&gt;
|SCP&lt;br /&gt;
|Secure Copy. Copying to or from a remote system using part of SSH&lt;br /&gt;
|-&lt;br /&gt;
|path&lt;br /&gt;
|The list of directories which are searched when you type the name of a program. There is a section below on this&lt;br /&gt;
|-&lt;br /&gt;
|ownership&lt;br /&gt;
|Every file and directory has a user and a group attached to it, called its owners. These affect permissions.&lt;br /&gt;
|-&lt;br /&gt;
|permissions&lt;br /&gt;
|The ability to read, write, and/or execute a file. Permissions are based on ownership&lt;br /&gt;
|-&lt;br /&gt;
|switches&lt;br /&gt;
|Modifiers to a command-line program, usually in the form of -(letter) or --(word). Several examples are given below, such as the '-a' on the 'ls' command&lt;br /&gt;
|-&lt;br /&gt;
|pipes and redirects&lt;br /&gt;
|Changes the input (often called 'stdin') and/or output (often called stdout) to a program or a file&lt;br /&gt;
|}&lt;br /&gt;
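A minimal illustration of the last row above (the file name and sample lines are made up):&lt;br /&gt;

```shell
# '|' pipes printf's stdout into sort's stdin;
# '>' redirects sort's stdout into a file instead of the screen.
printf 'banana\napple\ncherry\n' | sort > fruit_sorted.txt
cat fruit_sorted.txt   # prints apple, banana, cherry -- one per line
```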
&lt;br /&gt;
=== Linux Command Line Cheat Sheet ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+File System Navigation&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|pwd&lt;br /&gt;
|&amp;quot;Print working directory&amp;quot;, Where am I now?&lt;br /&gt;
|&amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;/homes/mozes&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls&lt;br /&gt;
|Lists files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -lh&lt;br /&gt;
|Lists files and folders with perms size and ownership&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -lh ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;-rw-r--r--  1 mozes    mozes_users   1    Jul 13  2011 NewFile&lt;br /&gt;
drwxr-xr-x  9 mozes    mozes_users   9.0K Apr 12  2010 NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -a&lt;br /&gt;
|Lists all files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -a ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;. .. .bashrc .bash_profile .tcshrc NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cd&lt;br /&gt;
|Changes directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ..&lt;br /&gt;
|Changes to parent directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ..&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd -&lt;br /&gt;
|Changes to the previous directory you were in&lt;br /&gt;
|&amp;lt;code&amp;gt;cd -&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ~&lt;br /&gt;
|Changes to your home directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ~&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Working with files&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|file&lt;br /&gt;
|Identifies the type of object a file is&lt;br /&gt;
|&amp;lt;code&amp;gt;file NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile: a /usr/bin/python script, ASCII text executable&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cat&lt;br /&gt;
|Prints the contents of one or more files&lt;br /&gt;
|&amp;lt;code&amp;gt;cat NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;This is line one&lt;br /&gt;
This is line two&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp&lt;br /&gt;
|copy a file&lt;br /&gt;
|&amp;lt;code&amp;gt;cp OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp -i&lt;br /&gt;
|copy a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp -r&lt;br /&gt;
|copy a directory, including contents&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -r OldFolder NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv&lt;br /&gt;
|move, or rename, a file&lt;br /&gt;
|&amp;lt;code&amp;gt;mv OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv -i&lt;br /&gt;
|move, or rename, a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;mv -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm&lt;br /&gt;
|remove a file&lt;br /&gt;
|&amp;lt;code&amp;gt;rm NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rm -i&lt;br /&gt;
|remove a file, ask to be sure (useful with -r)&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -i NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;remove NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm -r&lt;br /&gt;
|remove a directory and its contents&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -r NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mkdir&lt;br /&gt;
|creates a directory&lt;br /&gt;
|&amp;lt;code&amp;gt;mkdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rmdir&lt;br /&gt;
|removes an empty directory&lt;br /&gt;
|&amp;lt;code&amp;gt;rmdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|touch&lt;br /&gt;
|creates an empty file&lt;br /&gt;
|&amp;lt;code&amp;gt;touch TempFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Finding files and directories with [http://linux.die.net/man/1/find find]&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt;&lt;br /&gt;
| finds all files and folders within &amp;lt;directory&amp;gt;&lt;br /&gt;
| find ~/&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '&amp;lt;filename&amp;gt;'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that match &amp;lt;filename&amp;gt;&lt;br /&gt;
| find ~/ -iname 'hello.qsub'&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '*&amp;lt;partialmatch&amp;gt;*'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that partially match &amp;lt;partialmatch&amp;gt;&lt;br /&gt;
| find ~/ -iname '*.qsub*'&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Other useful commands include &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt; followed by one of the command names above will give you the manual page for that command, full of many other useful options. &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt; gives you an overview of the processes currently running on the host you are connected to. &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt; lets you page through files and see their contents using &amp;lt;PgUp&amp;gt; and &amp;lt;PgDn&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Editing Text Files ===&lt;br /&gt;
If you're new to Linux, the editor you will probably want to use is 'nano'. It works much like 'Notepad' on Windows or 'TextEdit' on OS X. Note that you cannot use your mouse to change position within the document as you can on your local computer; you must use the arrow keys instead.&lt;br /&gt;
&lt;br /&gt;
So, if I wanted to edit my .bashrc (as shown below), and I was already in my home directory (see above), I would type&lt;br /&gt;
 nano .bashrc&lt;br /&gt;
&lt;br /&gt;
While in nano, there is a list of actions you can take at the bottom of the screen. &amp;lt;Ctrl&amp;gt; is represented by a caret (`^`), so to exit (labeled as `^X` at the bottom of the screen), I would type &amp;lt;ctrl&amp;gt;-x. You will then be asked whether you want to save and exit (Y), discard your changes and exit (N), or cancel and go back to editing (&amp;lt;ctrl&amp;gt;-c).&lt;br /&gt;
&lt;br /&gt;
If you do a significant amount of text editing in Linux, you'll probably want to switch to a more powerful editor, such as vim. The usage of vim is beyond the scope of this document. It is not at all intuitive to the beginning user, but with a little practice it becomes a much faster way of editing text files. If you're interested in using vim, [http://www.openvim.com/tutorial.html there is a nice tutorial here].&lt;br /&gt;
&lt;br /&gt;
=== Shells ===&lt;br /&gt;
==== What is a Shell? ====&lt;br /&gt;
In this case, I don't believe I can do a better job explaining shells than [[wikipedia:Shell_(computing)|this]].&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
Elsewhere at Kansas State University, the default shell is set to tcsh. tcsh stands for &amp;quot;TENEX C Shell.&amp;quot; It is considered a replacement for csh and shares many of its features. If you have experience with either csh or tcsh you'll probably feel right at home. This was the default shell on Beocat until July of 2013; if you had an account before then, your shell is probably still tcsh.&lt;br /&gt;
&lt;br /&gt;
But what if you don't want or like tcsh? Well, we have other shells available on Beocat as well.&lt;br /&gt;
==== bash ====&lt;br /&gt;
[http://www.gnu.org/software/bash/ Bash] seems to be the defacto standard shell in most Linux installs today. Bash is common and probably what most of you are used to. As of July 2013, bash is our new default shell. All new users will be set to bash initially. [https://software-carpentry.org/ Software Carpentry] teaches classes on several subjects specifically targeting researchers, including the bash shell. Their documentation is all freely available. [http://swcarpentry.github.io/shell-novice/ Here is a link to their excellent tutorial on using BASH.] Most of our documentation assumes you are using BASH.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;bash configuration files:&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This section gets into some minutiae with the way our job scheduler interacts with bash. If you're trying to solve a problem, read on, otherwise you can probably skip this section.&lt;br /&gt;
&lt;br /&gt;
Bash has three user-configurable configuration files: &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;~/.bash_logout&amp;lt;/code&amp;gt;. We'll look at the two most relevant ones: &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Bash classifies each shell in one of three ways: '''login''', '''interactive''', or '''none'''.&lt;br /&gt;
&lt;br /&gt;
Normally, shells that are '''login''' read &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, and shells that are '''interactive''' (but not login) read &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. '''none''' shells read neither.&lt;br /&gt;
&lt;br /&gt;
sbatch jobs are '''login''', srun jobs are '''login+interactive''', and logging into Beocat in a way that lets you enter commands is '''login+interactive'''. There are very few cases where you will get '''none'''. For any session that isn't '''interactive''', your sourced files must not output anything to the screen, or else they can break scp or sftp file transfers.&lt;br /&gt;
&lt;br /&gt;
If the statements in your configuration files are ''quiet'', and you want them in all shells, you can put them in your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;. If they are not ''quiet'' (they output ''anything'' to the screen), you must put them in your &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
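As a concrete sketch (the specific settings below are hypothetical examples, not required configuration), quiet statements go in &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; while anything that prints belongs in &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;:&lt;br /&gt;

```shell
# ~/.bashrc -- read by interactive shells; must stay quiet
export PATH="$HOME/bin:$PATH"      # silent environment setup is safe here

# ~/.bash_profile -- read by login shells; screen output is OK here
if [ -f "$HOME/.bashrc" ]; then . "$HOME/.bashrc"; fi   # common pattern: also pull in ~/.bashrc
echo "Welcome back, $USER"         # noisy statements like this must NOT go in ~/.bashrc
```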
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
[http://zsh.sourceforge.net/ zsh] is an alternative to bash and tcsh. It tends to support more complex features than either of the other two while using a syntax remarkably similar to bash. Unless specifically noted, when we specify '''Change your shell to bash''', &amp;lt;tt&amp;gt;zsh&amp;lt;/tt&amp;gt; should work as well.&lt;br /&gt;
&lt;br /&gt;
==== Changing Shells ====&lt;br /&gt;
Previously, we gave you the option of using a &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; file to modify your shell. This is no longer supported; if you have issues with your shell, paths, or environment variables, we will ask you to delete your &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; file and change your shell via the method below.&lt;br /&gt;
&lt;br /&gt;
You can change your shell via &amp;lt;code&amp;gt;chsh&amp;lt;/code&amp;gt; on either of the headnodes (athena/minerva). This does not need to be re-done if you've already changed it to your preferred shell in the past.&lt;br /&gt;
&lt;br /&gt;
Use whichever of the following three lines is appropriate:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
/usr/local/bin/chsh -s bash &amp;amp;&amp;amp; bash -l&lt;br /&gt;
/usr/local/bin/chsh -s tcsh &amp;amp;&amp;amp; tcsh -l&lt;br /&gt;
/usr/local/bin/chsh -s zsh &amp;amp;&amp;amp; zsh -l&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Changing your PATH ===&lt;br /&gt;
Typically, you don't have to change your PATH, but it is useful to know what your PATH is and what it does. The PATH is the list of directories that are searched when you type the name of a program. Note that by default the current directory is NOT included in the PATH, so if you wanted to run a program called MyProgram in the current directory, you could NOT simply type 'MyProgram'; you would instead type &amp;lt;code&amp;gt;./MyProgram&amp;lt;/code&amp;gt; (where the '.' represents the current directory).&lt;br /&gt;
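As a quick illustration (a minimal sketch; the directories in your own PATH will differ), you can print the PATH and ask the shell where it found a given command:&lt;br /&gt;

```shell
# Print the colon-separated list of directories searched for commands
echo "$PATH"
# Show which directory a particular command (here, ls) was found in
command -v ls
```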
&lt;br /&gt;
To find your PATH, we need to identify which shell you are using. If you do not know, run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
ps | awk '/sh/ {print $4}'&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
Using a text editor as shown above, edit the file called .tcshrc in your home directory and add the line below, replacing /usr/local/bin with the directory you want added to your PATH.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
setenv PATH /usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== bash ====&lt;br /&gt;
Using a text editor as shown above, edit the file called .bashrc in your home directory and add the line below, replacing /usr/local/bin with the directory you want added to your PATH.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=/usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
Using a text editor as shown above, edit the file called .zshrc in your home directory and add the line below, replacing /usr/local/bin with the directory you want added to your PATH.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=&amp;quot;/usr/local/bin:$PATH&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Ownership and Permissions ===&lt;br /&gt;
Every file and directory has a user and group associated with it. You can view ownership information by using the '-l' switch on ls. By default on Beocat, files you create have a user ownership of your username (i.e., your eID) and a group ownership of your username_users. So, if I were logged in as 'myusername' and I had a single file in my home directory called MyProgram, the result of typing 'ls -l' would be something like this:&lt;br /&gt;
 total 0&lt;br /&gt;
 -rwxr-x--- 1 myusername myusername_users 79 May 31  2011 MyProgram&lt;br /&gt;
This tells us several things.&lt;br /&gt;
* The first column ('-rwxr-x---') is permissions (covered below)&lt;br /&gt;
* The second column ('1') is the number of links to this file. You can safely ignore this (unless you're both masochistic and interested in filesystem details)&lt;br /&gt;
* The third column ('myusername') shows the user ownership&lt;br /&gt;
* The fourth column ('myusername_users') shows the group ownership&lt;br /&gt;
* The fifth column ('79') gives the size of the file in bytes&lt;br /&gt;
* The next columns ('May 31  2011'), as you have probably guessed, give the date the file was last changed&lt;br /&gt;
* The final column ('MyProgram') is the name of the file&lt;br /&gt;
&lt;br /&gt;
So why is this interesting to us? Because whenever things ''don't'' work, it's usually because of file ownership or permissions. Looking at these often gives us some useful diagnostic information.&lt;br /&gt;
&lt;br /&gt;
The permissions field shows us who has permissions to do what with this file. It is always 10 characters. The first character (-) is usually either a '-' for a regular file or a 'd' for a directory. The next 9 characters are broken into three groups of three, with each group showing read (r), write (w), and execute (x) permissions for the owner, group, and world, in that order.&lt;br /&gt;
* The first group (rwx) shows permissions for the owner (myusername). The owner here has read, write, and execute permissions&lt;br /&gt;
* The next group (r-x) shows permissions for the group (myusername_users). The group here has read and execute permissions, but cannot write.&lt;br /&gt;
* The last group (---) shows permissions for the rest of the world. The world has no permissions to read, write, or execute.&lt;br /&gt;
&lt;br /&gt;
When you create a shell script with a text editor, and sometimes when you copy programs to Beocat via SCP, the execute flag is not set. The permissions string may look more like (-rw-r--r--). To change this, you need to give yourself permission to execute this program. This is done with the 'chmod' (change mode) command. 'chmod' can have a long and confusing syntax, but since by far the most common problem is to give yourself execute permissions, here is the command to change that:&lt;br /&gt;
 chmod u+x MyProgram&lt;br /&gt;
This changes the permissions so that the user ('u', i.e., the owner) adds ('+') execute permission ('x').&lt;br /&gt;
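A quick end-to-end sketch (using a hypothetical script created in a scratch directory just for the demonstration):&lt;br /&gt;

```shell
cd "$(mktemp -d)"                           # work somewhere disposable
printf '#!/bin/bash\necho hello\n' > MyProgram
ls -l MyProgram                             # typically shows -rw-r--r--: no execute bit yet
chmod u+x MyProgram                         # the user (owner) adds execute permission
./MyProgram                                 # now it runs and prints: hello
```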
&lt;br /&gt;
For more complex ownership or permissions changes, please feel free to contact the Beocat staff.&lt;br /&gt;
&lt;br /&gt;
=== Manual (man) pages ===&lt;br /&gt;
Most commands have a complex set of switches that modify the amount or type of information they display. To find out what switches are available, or how a program expects its data, you can use the manual pages by typing &amp;lt;code&amp;gt;man ''command''&amp;lt;/code&amp;gt;. Using one of the most common Linux commands, take a look at the output of 'man ls'. It shows that ls has over 50 switches available, ranging from which files to include, to how to display file sizes, to sort order and more. (I'm not pasting it here, because it's over 200 lines long!) To navigate a 'manpage', use the up-arrow and down-arrow keys. Press 'q' to quit.&lt;br /&gt;
&lt;br /&gt;
=== Pipes and Redirects ===&lt;br /&gt;
Typically a Linux program takes data from the keyboard and outputs data to the screen. In Unix and Linux terminology, the keyboard is the default 'stdin' (pronounced &amp;quot;standard in&amp;quot;) and the screen is the default 'stdout' (pronounced &amp;quot;standard out&amp;quot;). Many times, we want to take data from somewhere else (like a file, or the output of another program) and send it to yet another location. These redirectors are:&lt;br /&gt;
{|&lt;br /&gt;
|cmd &amp;gt; filename&lt;br /&gt;
|Redirect output from cmd to filename&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;gt;&amp;gt; filename&lt;br /&gt;
|Redirect output from cmd and append to filename&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;lt; filename&lt;br /&gt;
|Redirect input to cmd from filename&lt;br /&gt;
|-&lt;br /&gt;
| cmd1 &amp;amp;#124; cmd2&lt;br /&gt;
| Use the output from cmd1 as the input to cmd2&lt;br /&gt;
|}&lt;br /&gt;
Here is a quick example. Let's say I have thousands of files in a directory, and I want a list of those that end in '.sh'.&lt;br /&gt;
'ls' by itself scrolls so far that I can't see even a fraction of them, so I redirect the output to a file:&lt;br /&gt;
 ls &amp;gt; ~/filelist.txt&lt;br /&gt;
That lists all the files in the current folder and saves the list in my home directory as 'filelist.txt'.&lt;br /&gt;
A quick look through the file in my favorite editor tells me this is still going to take too long, so I need another step. The 'grep' program is commonly used to perform pattern matching. The syntax of 'grep' is beyond the scope of this document, but take my word for it that&lt;br /&gt;
 grep '\.sh$'&lt;br /&gt;
will return all lines that end in .sh.&lt;br /&gt;
&lt;br /&gt;
We can now redirect grep's input from the file we just created:&lt;br /&gt;
 grep '\.sh$' &amp;lt; ~/filelist.txt&lt;br /&gt;
Great! We now have our list. However, we wanted to save this as filelist.txt, and instead we have another list that we have to copy-and-paste. Instead of redirecting to a file, we'll use the vertical bar '|' (which we often term a &amp;quot;pipe&amp;quot;) to send the output of one command to another.&lt;br /&gt;
 ls | grep '\.sh$' &amp;gt; ~/filelist.txt&lt;br /&gt;
This time the output of 'ls' is ''not'' redirected to a file, but is redirected to the next command (grep).  The output of grep (which is all our .sh files) instead of being sent to the screen is redirected to the file ~/filelist.txt.&lt;br /&gt;
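The same idea extends to longer pipelines. Here is a self-contained sketch with hypothetical files (created in a scratch directory just for the demonstration):&lt;br /&gt;

```shell
cd "$(mktemp -d)"                  # scratch directory with a few hypothetical files
touch one.sh two.sh notes.txt
ls | grep '\.sh$' > filelist.txt   # save just the .sh names to a file
ls | grep -c '\.sh$'               # or skip the file entirely and count matches: prints 2
```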
&lt;br /&gt;
This example is a very simple demonstration of how pipes and redirects work. Many more examples with complex structures can be found at http://www.ibm.com/developerworks/linux/library/l-lpic1-v3-103-4/index.html&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=497</id>
		<title>AdvancedSlurm</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=497"/>
		<updated>2019-09-09T21:59:52Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: /* Globus */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SlurmBasics]] page, we have a couple other requestable resources:&lt;br /&gt;
 Valid gres options are:&lt;br /&gt;
 gpu[[:type]:count]&lt;br /&gt;
 fabric[[:type]:count]&lt;br /&gt;
Generally, if you don't know if you need a particular resource, you should use the default. These can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;srun --gres=help&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Fabric ===&lt;br /&gt;
We currently offer 3 &amp;quot;fabrics&amp;quot; as request-able resources in Slurm. The &amp;quot;count&amp;quot; specified is the line-rate (in Gigabits-per-second) of the connection on the node.&lt;br /&gt;
==== Infiniband ====&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. InfiniBand does absolutely no good for a job running on a single machine. InfiniBand is a high-speed host-to-host communication fabric, (most often) used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could have run just fine, except that the submitter requested InfiniBand and all the nodes with InfiniBand were busy. In fact, some of our fastest nodes do not have InfiniBand, so by requesting it when you don't need it, you may actually be slowing down your job. To request InfiniBand, add &amp;lt;tt&amp;gt;--gres=fabric:ib:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
==== ROCE ====&lt;br /&gt;
ROCE (RDMA over Converged Ethernet), like InfiniBand, is a high-speed host-to-host communication layer, again used most often with MPI. Most of our nodes are ROCE-enabled, but requesting it lets you guarantee that the nodes allocated to your job will be able to communicate with ROCE. To request ROCE, add &amp;lt;tt&amp;gt;--gres=fabric:roce:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
&lt;br /&gt;
==== Ethernet ====&lt;br /&gt;
Ethernet is another communication fabric. All of our nodes are connected by ethernet; this option simply allows you to specify the interconnect speed. Speeds are selected in units of Gbps, with all nodes supporting 1 Gbps or above. The currently available speeds for ethernet are &amp;lt;tt&amp;gt;1, 10, 40, and 100&amp;lt;/tt&amp;gt;. To select nodes with 40 Gbps and above, you could specify &amp;lt;tt&amp;gt;--gres=fabric:eth:40&amp;lt;/tt&amp;gt; on your sbatch command-line. Since ethernet is used to connect to the file server, this can be used to select nodes that have fast access for applications doing heavy IO. The Dwarves and Heroes have 40 Gbps ethernet, and we measure single-stream performance as high as 20 Gbps, but if your application requires heavy IO you'd want to avoid the Moles, which are connected to the file server with only 1 Gbps ethernet.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. 'kstat -g' will show you the GPU nodes and the jobs running on them.  To request a GPU node, add &amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt; for example to request 1 GPU for your job.  You can also request a given type of GPU (kstat -g -l to show types) by using &amp;lt;tt&amp;gt;--gres=gpu:nvidia_geforce_gtx_1080_ti:1&amp;lt;/tt&amp;gt; for a 1080Ti GPU on the Wizards or Dwarves, &amp;lt;tt&amp;gt;--gres=gpu:nvidia_quadro_gp100:1&amp;lt;/tt&amp;gt; for the P100 GPUs on Wizard20-21 that are best for 64-bit codes like Vasp, or &amp;lt;tt&amp;gt;--gres=gpu:nvidia_geforce_gtx_980_ti:1&amp;lt;/tt&amp;gt; for the older 980Ti GPUs on Dwarf38-39.  Most of these GPU nodes are owned by various groups.  If you want access to GPU nodes and your group does not own any, we can add you to the &amp;lt;tt&amp;gt;--partition=ksu-gen-gpu.q&amp;lt;/tt&amp;gt; group that has priority on Dwarf38-39.  For more information on compiling CUDA code click on this [[CUDA]] link.&lt;br /&gt;
&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel: ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
''Intra''node jobs run on many cores in the same node. These jobs can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or any programming language that has the concept of ''threads''. Often, your program will need to know how many cores you want it to use, and many will use all available cores unless told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the sbatch directives '&amp;lt;tt&amp;gt;--nodes=1 --cpus-per-task=n&amp;lt;/tt&amp;gt;' or '&amp;lt;tt&amp;gt;--nodes=1 --ntasks-per-node=n&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $SLURM_CPUS_ON_NODE to tell it how many cores you've been allocated.&lt;br /&gt;
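For example, a minimal single-node submit-script sketch (the program name is hypothetical; many OpenMP programs honor the OMP_NUM_THREADS variable, but check your program's documentation):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --nodes=1 --cpus-per-task=8         # 8 cores on one node
export OMP_NUM_THREADS=$SLURM_CPUS_ON_NODE  # tell the program how many cores it was given
./MyThreadedProgram                         # hypothetical multi-threaded program
```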
&lt;br /&gt;
=== Internode (MPI) jobs ===&lt;br /&gt;
''Inter''node jobs can utilize many cores on one or more nodes. Communicating between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI, but MPI also allows an application to run on multiple cores within a node. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;--cpus-per-task=''n''&amp;lt;/tt&amp;gt;', you would use '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks-per-node=''m''&amp;lt;/tt&amp;gt;' ''or'' '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks=''o''&amp;lt;/tt&amp;gt;' for your sbatch request, where ''n'' is the number of nodes you want, ''m'' is the number of cores per node you need, and ''o'' is the total number of cores you need.&lt;br /&gt;
&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=6 --ntasks-per-node=4&amp;lt;/tt&amp;gt; will give you 4 cores on each of 6 nodes for a total of 24 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=40&amp;lt;/tt&amp;gt; will give you 40 cores spread across any number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=10 --ntasks=100&amp;lt;/tt&amp;gt; will give you a total of 100 cores across 10 nodes.&lt;br /&gt;
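Putting that together, a minimal MPI submit-script sketch (the program name is hypothetical):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --nodes=6 --ntasks-per-node=4   # 24 MPI ranks: 4 on each of 6 nodes
mpirun ./MyMpiProgram                   # OpenMPI picks up the Slurm allocation automatically
```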
&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
Memory requests are easiest when they are specified '''per core'''. For instance, if you specified '&amp;lt;tt&amp;gt;--ntasks=20 --mem-per-cpu=20G&amp;lt;/tt&amp;gt;', your job would have access to 400GB of memory in total.&lt;br /&gt;
== Other Handy Slurm Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs, aside from resource requests, is having Slurm email you when a job changes its status. This may require two directives to sbatch: &amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt;.&lt;br /&gt;
==== --mail-type ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; is used to tell Slurm to notify you about certain conditions. Options are comma separated and include the following&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Option!!Explanation&lt;br /&gt;
|-&lt;br /&gt;
| NONE || This disables event-based mail&lt;br /&gt;
|-&lt;br /&gt;
| BEGIN || Sends a notification when the job begins&lt;br /&gt;
|-&lt;br /&gt;
| END || Sends a notification when the job ends&lt;br /&gt;
|-&lt;br /&gt;
| FAIL || Sends a notification when the job fails.&lt;br /&gt;
|-&lt;br /&gt;
| REQUEUE || Sends a notification if the job is put back into the queue from a running state&lt;br /&gt;
|-&lt;br /&gt;
| STAGE_OUT || Burst buffer stage out and teardown completed&lt;br /&gt;
|-&lt;br /&gt;
| ALL || Equivalent to BEGIN,END,FAIL,REQUEUE,STAGE_OUT&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT || Notifies if the job ran out of time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_90 || Notifies when the job has used 90% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_80 || Notifies when the job has used 80% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_50 || Notifies when the job has used 50% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| ARRAY_TASKS || Modifies the BEGIN, END, and FAIL options to apply to each array task (instead of notifying for the entire job)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== --mail-user ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; is optional. It is only needed if you intend to send these job status updates to a different e-mail address than the one you provided on the [https://acount.beocat.ksu.edu/user Account Request Page]. It is specified with the following argument to sbatch: &amp;lt;tt&amp;gt;--mail-user=someone@somecompany.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-J ''JobName''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
&lt;br /&gt;
=== Separating Output Streams ===&lt;br /&gt;
Normally, Slurm will create one output file, containing both STDERR and STDOUT. If you want both of these to be separated into two files, you can use the sbatch directives '&amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt;' and '&amp;lt;tt&amp;gt;--error&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! option !! default !! example&lt;br /&gt;
|-&lt;br /&gt;
| --output || slurm-%j.out || slurm-206.out&lt;br /&gt;
|-&lt;br /&gt;
| --error || slurm-%j.out || slurm-206.out&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;tt&amp;gt;%j&amp;lt;/tt&amp;gt; above indicates that it should be replaced with the job id.&lt;br /&gt;
&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, Slurm jobs run from the directory you were in when you submitted them (the &amp;quot;current working directory&amp;quot;), which is what most programs expect. If you need the job to run from a different directory, you can use the '&amp;lt;tt&amp;gt;--chdir=''directory''&amp;lt;/tt&amp;gt;' directive to change it.&lt;br /&gt;
=== Running in a specific class of machine ===&lt;br /&gt;
If you want to run on a specific class of machines, e.g., the Dwarves, you can add the flag &amp;quot;--constraint=dwarves&amp;quot; to select any of those machines.&lt;br /&gt;
&lt;br /&gt;
=== Processor Constraints ===&lt;br /&gt;
Because Beocat is a heterogeneous cluster (we have machines of many different ages in the cluster), not all of our processors support every new and fancy feature. You might have some applications that require newer processor features, so we provide a mechanism to request those.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--constraint&amp;lt;/tt&amp;gt; tells the cluster to apply constraints to the types of nodes that the job can run on. For instance, we know of several applications that must be run on chips that have &amp;quot;AVX&amp;quot; processor extensions. To do that, you would specify &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; on your ''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt;'' '''or''' ''&amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;'' command lines.&lt;br /&gt;
Using &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; will prohibit your job from running on the Mages, while &amp;lt;tt&amp;gt;--constraint=avx2&amp;lt;/tt&amp;gt; will eliminate the Elves as well as the Mages.&lt;br /&gt;
&lt;br /&gt;
=== Slurm Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to setup your scripts correctly. Here is a listing of environment variables that Slurm makes available to you. Of course the value of these variables will be different based on many different factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
CUDA_VISIBLE_DEVICES=NoDevFiles&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
GPU_DEVICE_ORDINAL=NoDevFiles&lt;br /&gt;
HOSTNAME=dwarf37&lt;br /&gt;
SLURM_CHECKPOINT_IMAGE_DIR=/var/slurm/checkpoint&lt;br /&gt;
SLURM_CLUSTER_NAME=beocat&lt;br /&gt;
SLURM_CPUS_ON_NODE=1&lt;br /&gt;
SLURM_DISTRIBUTION=cyclic&lt;br /&gt;
SLURMD_NODENAME=dwarf37&lt;br /&gt;
SLURM_GTIDS=0&lt;br /&gt;
SLURM_JOB_CPUS_PER_NODE=1&lt;br /&gt;
SLURM_JOB_GID=163587&lt;br /&gt;
SLURM_JOB_ID=202&lt;br /&gt;
SLURM_JOBID=202&lt;br /&gt;
SLURM_JOB_NAME=slurm_simple.sh&lt;br /&gt;
SLURM_JOB_NODELIST=dwarf37&lt;br /&gt;
SLURM_JOB_NUM_NODES=1&lt;br /&gt;
SLURM_JOB_PARTITION=batch.q,killable.q&lt;br /&gt;
SLURM_JOB_QOS=normal&lt;br /&gt;
SLURM_JOB_UID=163587&lt;br /&gt;
SLURM_JOB_USER=mozes&lt;br /&gt;
SLURM_LAUNCH_NODE_IPADDR=10.5.16.37&lt;br /&gt;
SLURM_LOCALID=0&lt;br /&gt;
SLURM_MEM_PER_NODE=1024&lt;br /&gt;
SLURM_NNODES=1&lt;br /&gt;
SLURM_NODEID=0&lt;br /&gt;
SLURM_NODELIST=dwarf37&lt;br /&gt;
SLURM_NPROCS=1&lt;br /&gt;
SLURM_NTASKS=1&lt;br /&gt;
SLURM_PRIO_PROCESS=0&lt;br /&gt;
SLURM_PROCID=0&lt;br /&gt;
SLURM_SRUN_COMM_HOST=10.5.16.37&lt;br /&gt;
SLURM_SRUN_COMM_PORT=37975&lt;br /&gt;
SLURM_STEP_ID=0&lt;br /&gt;
SLURM_STEPID=0&lt;br /&gt;
SLURM_STEP_LAUNCHER_PORT=37975&lt;br /&gt;
SLURM_STEP_NODELIST=dwarf37&lt;br /&gt;
SLURM_STEP_NUM_NODES=1&lt;br /&gt;
SLURM_STEP_NUM_TASKS=1&lt;br /&gt;
SLURM_STEP_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_SUBMIT_DIR=/homes/mozes&lt;br /&gt;
SLURM_SUBMIT_HOST=dwarf37&lt;br /&gt;
SLURM_TASK_PID=23408&lt;br /&gt;
SLURM_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_TOPOLOGY_ADDR=due1121-prod-core-40g-a1,due1121-prod-core-40g-c1.due1121-prod-sw-100g-a9.dwarf37&lt;br /&gt;
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node&lt;br /&gt;
SLURM_UMASK=0022&lt;br /&gt;
SRUN_DEBUG=3&lt;br /&gt;
TERM=screen-256color&lt;br /&gt;
TMPDIR=/tmp&lt;br /&gt;
USER=mozes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is nice to know which hosts you have access to during a job; check SLURM_JOB_NODELIST for that. There are lots of useful environment variables here; I will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly-used variables we see used are $SLURM_CPUS_ON_NODE, $HOSTNAME, and $SLURM_JOB_ID.&lt;br /&gt;
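One common use, sketched below: Slurm stores the nodelist in a compact form (e.g. dwarf[37-38]), and scontrol can expand it into one hostname per line from inside a job:&lt;br /&gt;

```shell
# Inside a job script: expand $SLURM_JOB_NODELIST into individual hostnames
scontrol show hostnames "$SLURM_JOB_NODELIST"
```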
&lt;br /&gt;
== Running from a sbatch Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you'll get tired of typing something like 'sbatch --mem-per-cpu=2G --time=10:00 --cpus-per-task=8 -J MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which outlines all of these for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample sbatch script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of the line is ignored. However, in&lt;br /&gt;
## the case of sbatch, lines beginning with #SBATCH are commands for sbatch&lt;br /&gt;
## itself, so I have taken the convention here of starting *every* line with a&lt;br /&gt;
## '#'. Just delete the first one if you want to use that line, and then modify&lt;br /&gt;
## it to your own purposes. The only exception here is the first line, which&lt;br /&gt;
## *must* be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## There is one strict rule for guaranteeing Slurm reads all of your options:&lt;br /&gt;
## Do not put *any* lines above your resource requests that aren't either:&lt;br /&gt;
##    1) blank. (no other characters)&lt;br /&gt;
##    2) comments (lines must begin with '#')&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##SBATCH --mem-per-cpu=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime in DD-HH:MM:SS form. Default is 1 hour (1:00:00)&lt;br /&gt;
##SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it.&lt;br /&gt;
##SBATCH --gres=fabric:ib:1&lt;br /&gt;
&lt;br /&gt;
## GPU directive. If You don't know what this is, you probably don't need it&lt;br /&gt;
##SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
## number of cores/nodes:&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from emailing us at beocat@cs.ksu.edu to see how we can assist in&lt;br /&gt;
## getting your job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
##SBATCH --cpus-per-task=1&lt;br /&gt;
##SBATCH --cpus-per-task=12&lt;br /&gt;
##SBATCH --nodes=2 --ntasks-per-node=1&lt;br /&gt;
##SBATCH --ntasks=20&lt;br /&gt;
&lt;br /&gt;
## Constraints for this job. Maybe you need to run on the elves&lt;br /&gt;
##SBATCH --constraint=elves&lt;br /&gt;
## or perhaps you just need avx processor extensions&lt;br /&gt;
##SBATCH --constraint=avx&lt;br /&gt;
&lt;br /&gt;
## Output file name. Default is slurm-%j.out where %j is the job id.&lt;br /&gt;
##SBATCH --output=MyJobTitle.o%j&lt;br /&gt;
&lt;br /&gt;
## Split the errors into a separate file. Default is the same as output&lt;br /&gt;
##SBATCH --error=MyJobTitle.e%j&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##SBATCH -J MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## Send email when certain criteria are met.&lt;br /&gt;
## Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to&lt;br /&gt;
## BEGIN, END, FAIL, REQUEUE, and STAGE_OUT), STAGE_OUT (burst buffer stage&lt;br /&gt;
## out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent&lt;br /&gt;
## of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),&lt;br /&gt;
## TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send&lt;br /&gt;
## emails for each array task). Multiple type values may be specified in a&lt;br /&gt;
## comma-separated list. Unless the ARRAY_TASKS option is specified, mail&lt;br /&gt;
## notifications on job BEGIN, END and FAIL apply to a job array as a whole&lt;br /&gt;
## rather than generating individual email messages for each task in the job&lt;br /&gt;
## array.&lt;br /&gt;
##SBATCH --mail-type=ALL&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
## Default is to send the mail to the e-mail address entered on the account&lt;br /&gt;
## request form.&lt;br /&gt;
##SBATCH --mail-user=myemail@ksu.edu&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== File Access ==&lt;br /&gt;
Beocat has a variety of options for storing and accessing your files.  &lt;br /&gt;
Every user has a home directory for general use which is limited in size, has decent file access performance,&lt;br /&gt;
and will soon be backed up nightly.  Larger files should be stored in the /bulk subdirectories, which have the same decent performance&lt;br /&gt;
but are not backed up.  The /scratch file system will soon be implemented on a Lustre file system that will provide very fast&lt;br /&gt;
temporary file access.  When fast IO is critical to application performance, using the local disk on each node or a&lt;br /&gt;
RAM disk is the best option.&lt;br /&gt;
&lt;br /&gt;
===Home directory===&lt;br /&gt;
&lt;br /&gt;
Every user has a &amp;lt;tt&amp;gt;/homes/''username''&amp;lt;/tt&amp;gt; directory that they drop into when they log into Beocat.  &lt;br /&gt;
The home directory is for general use and provides decent performance for most file IO.  &lt;br /&gt;
Disk space in each home directory is limited to 1 TB, so larger files should be kept in the /bulk&lt;br /&gt;
directory, and there is a limit of 100,000 files in each subdirectory in your account.&lt;br /&gt;
This file system is fully redundant, so 3 specific hard disks would need to fail before any data was lost.&lt;br /&gt;
All files will soon be backed up nightly to a separate file server in Nichols Hall, so if you do accidentally &lt;br /&gt;
delete something it can be recovered.&lt;br /&gt;
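&lt;br /&gt;
If you want to check how close you are to these limits, standard tools work fine. A quick sketch (run from a headnode; $HOME is your /homes/''username'' directory):&lt;br /&gt;
&lt;br /&gt;
```shell
# Total size of your home directory (the quota is 1 TB)
du -sh "$HOME"

# Number of entries directly inside a directory (the limit is 100,000 per subdirectory)
find "$HOME" -mindepth 1 -maxdepth 1 | wc -l
```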
&lt;br /&gt;
===Bulk directory===&lt;br /&gt;
&lt;br /&gt;
Each user also has a &amp;lt;tt&amp;gt;/bulk/''username''&amp;lt;/tt&amp;gt; directory where large files should be stored.&lt;br /&gt;
File access is the same speed as for the home directories, and the same limit of 100,000 files&lt;br /&gt;
per subdirectory applies.  There is no limit to the disk space you can use in your bulk directory,&lt;br /&gt;
but the files there will not be backed up.  They are still redundantly stored, so you don't need to&lt;br /&gt;
worry about losing data to hardware failures; just don't delete something by accident. Unused files will be automatically removed after two years.&lt;br /&gt;
If you need to back up large files in the bulk directory, talk to Dan Andresen (dan@ksu.edu) about&lt;br /&gt;
purchasing some hard disks for archival storage.&lt;br /&gt;
&lt;br /&gt;
===Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /scratch file system may be faster than /bulk or /homes since each file written will access fewer disks.&lt;br /&gt;
In order to use scratch, you first need to make a directory for yourself.  &lt;br /&gt;
Scratch is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the scratch file system may get purged after 30 days.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /scratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Local disk===&lt;br /&gt;
&lt;br /&gt;
If you are running on a single node, it may also be faster to access your files from the local disk&lt;br /&gt;
on that node.  Each job gets a subdirectory /tmp/job#, where '#' is the job ID number, on the&lt;br /&gt;
local disk of each node the job uses.  You can access it simply by writing to /tmp rather than&lt;br /&gt;
needing to spell out /tmp/job#.  &lt;br /&gt;
&lt;br /&gt;
You may need to copy files to the local disk at the start of your script, or set your application's&lt;br /&gt;
output directory to point to the local disk.  Either way, copy any files you want to keep off the&lt;br /&gt;
local disk before the job finishes, since Slurm will remove all files in your job's directory on /tmp&lt;br /&gt;
when the job completes or aborts.  When we get the scratch file system working with Lustre, it may&lt;br /&gt;
end up being faster than accessing local disk, so we will post the access rates for each.  Use 'kstat -l -h'&lt;br /&gt;
to see how much /tmp space is available on each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files to the tmp directory if needed&lt;br /&gt;
cp $input_files /tmp&lt;br /&gt;
&lt;br /&gt;
# Make an 'out' directory to pass to the app if needed&lt;br /&gt;
mkdir /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Example of running an app and passing the tmp directory in/out&lt;br /&gt;
app -input_directory /tmp -output_directory /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Copy the 'out' directory back to the current working directory after the run&lt;br /&gt;
cp -rp /tmp/out .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAM disk===&lt;br /&gt;
&lt;br /&gt;
If you need ultrafast access to files, you can use a RAM disk, which is a file system set up in the &lt;br /&gt;
memory of the compute node you are running on.  The RAM disk is limited to the requested memory on that node, so you should account for this usage when you request &lt;br /&gt;
memory for your job. Below is an example of how to use the RAM disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files over if necessary&lt;br /&gt;
cp $any_input_files /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Run the application, possibly giving it the path to the RAM disk to use for output files&lt;br /&gt;
app -output_directory /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Copy files from the RAM disk to the current working directory, then clean it up&lt;br /&gt;
cp /dev/shm/* .&lt;br /&gt;
rm -rf /dev/shm/*&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When you leave KSU===&lt;br /&gt;
&lt;br /&gt;
If you are done with your account and leaving KSU, please clean up your directory, move any files&lt;br /&gt;
to your supervisor's account that need to be kept after you leave, and notify us so that we can disable your&lt;br /&gt;
account.  The easiest way to move your files to your supervisor's account is for them to set up&lt;br /&gt;
a subdirectory for you with the appropriate write permissions.  The example below shows moving &lt;br /&gt;
just a user's 'data' subdirectory to their supervisor.  The 'nohup' command is used so that the move will &lt;br /&gt;
continue even if the window you are doing the move from gets disconnected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Supervisor:&lt;br /&gt;
mkdir /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
chmod ugo+w /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&lt;br /&gt;
# Student:&lt;br /&gt;
nohup mv /homes/$USER/data /bulk/$SUPERVISOR_USERNAME/$USER &amp;amp;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==File Sharing==&lt;br /&gt;
&lt;br /&gt;
This section will cover methods of sharing files with other users within Beocat and on remote systems.&lt;br /&gt;
&lt;br /&gt;
===Securing your home directory===&lt;br /&gt;
&lt;br /&gt;
By default your home directory is accessible to other users on Beocat for reading but not writing.  If you do not want others to have any&lt;br /&gt;
access to files in your home directory, you can set the permissions to restrict access to just yourself.&lt;br /&gt;
&lt;br /&gt;
 chmod go-rwx /homes/your_user_name&lt;br /&gt;
&lt;br /&gt;
This removes read, write, and execute permission for everyone but yourself.  Be aware that it may make it more difficult for us to help you out when&lt;br /&gt;
you run into problems.&lt;br /&gt;
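&lt;br /&gt;
You can verify the permissions with 'ls -ld'. A minimal sketch using a demonstration directory (substitute /homes/your_user_name for the real thing):&lt;br /&gt;
&lt;br /&gt;
```shell
# Create a demo directory and restrict it to just the owner
mkdir -p demo_private
chmod u+rwx,go-rwx demo_private

# The mode string should now read drwx------
ls -ld demo_private
```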
&lt;br /&gt;
===Sharing files within your group===&lt;br /&gt;
&lt;br /&gt;
By default all your files and directories have a 'group' that is your user name followed by _users as 'ls -l' shows.&lt;br /&gt;
In my case they have the group of daveturner_users.&lt;br /&gt;
If your working group owns any nodes on Beocat, then you have a group name that can be used to securely share&lt;br /&gt;
files with others within your group.  Below is an example of creating a directory called 'share', changing the group&lt;br /&gt;
to ksu-cis-hpc (my group is ksu-cis-hpc so I submit jobs to --partition=ksu-cis-hpc.q), then changing the permissions to restrict access to &lt;br /&gt;
just that group.&lt;br /&gt;
&lt;br /&gt;
 mkdir share&lt;br /&gt;
 chgrp ksu-cis-hpc share&lt;br /&gt;
 chmod g+rx share&lt;br /&gt;
 chmod o-rwx share&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the 'share' directory.  If you also want&lt;br /&gt;
them to be able to write or modify files in that directory then use 'chmod g+rwx' instead.&lt;br /&gt;
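&lt;br /&gt;
One related detail, not covered above: new files created inside 'share' get your personal group by default, not the shared group. Setting the directory's setgid bit makes new files inherit the directory's group automatically. A sketch (the chgrp line is commented out here; use your own group name as in the steps above):&lt;br /&gt;
&lt;br /&gt;
```shell
# Create the shared directory, give the group access, and set the setgid bit
mkdir -p share
# chgrp ksu-cis-hpc share    # use your own group name here
chmod g+rx,g+s share

# 'ls -ld' now shows an 's' in the group permissions, e.g. drwxr-s---
ls -ld share
```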
&lt;br /&gt;
If you want to know what groups you belong to use the line below.&lt;br /&gt;
&lt;br /&gt;
 groups&lt;br /&gt;
&lt;br /&gt;
If your group does not own any nodes, you can still request a group name and manage the participants yourself.&lt;br /&gt;
&lt;br /&gt;
===Openly sharing files on the web===&lt;br /&gt;
&lt;br /&gt;
If you create a 'public_html' directory in your home directory, then any files put there will be shared &lt;br /&gt;
openly on the web.  There is no way to restrict who has access to those files.&lt;br /&gt;
&lt;br /&gt;
 cd&lt;br /&gt;
 mkdir public_html&lt;br /&gt;
&lt;br /&gt;
Then access the data from a web browser using the URL:&lt;br /&gt;
&lt;br /&gt;
http://people.beocat.ksu.edu/~your_user_name&lt;br /&gt;
&lt;br /&gt;
This will show a list of the files you have in your public_html subdirectory.&lt;br /&gt;
&lt;br /&gt;
===Globus===&lt;br /&gt;
&lt;br /&gt;
We have a page here dedicated to [[Globus]]&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of Slurm's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to sbatch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  --array=n[-m[:s]]&lt;br /&gt;
     Submits a so-called Array Job, i.e. an array of identical tasks differentiated only by an index number and treated by Slurm&lt;br /&gt;
     almost like a series of jobs. The option argument to --array specifies the number of array job tasks and the index numbers which will be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SLURM_ARRAY_TASK_ID. The option&lt;br /&gt;
     arguments n and m will be available through the environment variables SLURM_ARRAY_TASK_MIN and SLURM_ARRAY_TASK_MAX.&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or a range with a step size.&lt;br /&gt;
     Hence, the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SLURM_ARRAY_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array jobs are commonly used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks follow a slightly different naming convention (which can be controlled in the same way as mentioned above).&lt;br /&gt;
 &lt;br /&gt;
     slurm-%A_%a.out&lt;br /&gt;
&lt;br /&gt;
     %A is the SLURM_ARRAY_JOB_ID, and %a is the SLURM_ARRAY_TASK_ID&lt;br /&gt;
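&lt;br /&gt;
For example, to control the output file names of array tasks yourself, a script might start like this (the file name pattern is only an illustration):&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/bash
#SBATCH --array=1-4
#SBATCH --output=MyArray-%A_%a.out

# Each task sees its own index in SLURM_ARRAY_TASK_ID
echo "task $SLURM_ARRAY_TASK_ID"
```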
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses; one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1, that I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable, and submit each script. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=50-200:50&lt;br /&gt;
RUNSIZE=$SLURM_ARRAY_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and Slurm understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to take each line of my dataset and generate a new submit script and submit the job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     sbatch ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as sbatch has to verify each job as it is submitted. This can be done easily with array jobs, as long as you know the number of lines in the dataset. That number can be obtained with 'wc -l dataset.txt'; in this case, let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses a subshell via `, and has the sed command print only line number $SLURM_ARRAY_TASK_ID of the file dataset.txt.&lt;br /&gt;
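&lt;br /&gt;
You can try the sed line-selection trick interactively before submitting; for example (using a tiny stand-in dataset):&lt;br /&gt;
&lt;br /&gt;
```shell
# Build a tiny stand-in dataset
printf 'alpha\nbeta\ngamma\n' > mini_dataset.txt

# Pretend Slurm assigned us task number 2
SLURM_ARRAY_TASK_ID=2
sed -n "${SLURM_ARRAY_TASK_ID}p" mini_dataset.txt   # prints: beta
```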
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so sbatch doesn't have to verify as many.&lt;br /&gt;
&lt;br /&gt;
To give you an idea of the time saved: submitting 1 job takes 1-2 seconds. By extension, if you are submitting 5000, that is 5,000-10,000 seconds, or 1.5-3 hours.&lt;br /&gt;
&lt;br /&gt;
== Checkpoint/Restart using DMTCP ==&lt;br /&gt;
&lt;br /&gt;
DMTCP is Distributed Multi-Threaded CheckPoint software that will checkpoint your application without modification, and&lt;br /&gt;
can be set up to automatically restart your job from the last checkpoint if for example the node you are running on fails.  &lt;br /&gt;
This has been tested successfully&lt;br /&gt;
on Beocat for some scalar and OpenMP codes, but has failed on all MPI tests so far.  We would like to encourage users to&lt;br /&gt;
try DMTCP out if their non-MPI jobs run longer than 24 hours.  If you want to try this, please contact us first since we are still&lt;br /&gt;
experimenting with DMTCP.&lt;br /&gt;
&lt;br /&gt;
The sample job submission script below shows how dmtcp_launch is used to start the application, then dmtcp_restart is used to start from a checkpoint if the job has failed and been rescheduled.&lt;br /&gt;
If you are putting this in an array script, then add the Slurm array task ID to the end of the checkpoint directory name&lt;br /&gt;
like &amp;lt;B&amp;gt;ckptdir=ckpt-$SLURM_ARRAY_TASK_ID&amp;lt;/B&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=gromacs&lt;br /&gt;
  #SBATCH --mem=50G&lt;br /&gt;
  #SBATCH --time=24:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=4&lt;br /&gt;
  &lt;br /&gt;
  module purge&lt;br /&gt;
  module load GROMACS/2016.4-foss-2017beocatb-hybrid&lt;br /&gt;
  module load DMTCP&lt;br /&gt;
  module list&lt;br /&gt;
  &lt;br /&gt;
  ckptdir=ckpt&lt;br /&gt;
  mkdir -p $ckptdir&lt;br /&gt;
  export DMTCP_CHECKPOINT_DIR=$ckptdir&lt;br /&gt;
  &lt;br /&gt;
  if ! ls -1 $ckptdir | grep -c dmtcp_restart_script &amp;gt; /dev/null&lt;br /&gt;
  then&lt;br /&gt;
     echo &amp;quot;Using dmtcp_launch to start the app the first time&amp;quot;&lt;br /&gt;
     dmtcp_launch --no-coordinator mpirun -np 1 -x OMP_NUM_THREADS=4 gmx_mpi mdrun -nsteps 50000 -ntomp 4 -v -deffnm 1ns -c 1ns.pdb -nice 0&lt;br /&gt;
  else&lt;br /&gt;
     echo &amp;quot;Using dmtcp_restart from $ckptdir to continue from a checkpoint&amp;quot;&lt;br /&gt;
     dmtcp_restart $ckptdir/*.dmtcp&lt;br /&gt;
  fi&lt;br /&gt;
&lt;br /&gt;
You will need to run several tests to verify that DMTCP is working properly with your application.&lt;br /&gt;
First, run a short test without DMTCP and another with DMTCP with the checkpoint interval set to 5 minutes&lt;br /&gt;
by adding the line &amp;lt;B&amp;gt;export DMTCP_CHECKPOINT_INTERVAL=300&amp;lt;/B&amp;gt; to your script.  Then use &amp;lt;B&amp;gt;kstat -d 1&amp;lt;/B&amp;gt; to&lt;br /&gt;
check that the memory in both runs is close to the same.  Also use this information to calculate the time &lt;br /&gt;
that each checkpoint takes.  In most cases I've seen times less than a minute for checkpointing that will normally&lt;br /&gt;
be done once each hour.  If your application is taking more time, let us know.  Sometimes this can be sped up&lt;br /&gt;
by simply turning off compression by adding the line &amp;lt;B&amp;gt;export DMTCP_GZIP=0&amp;lt;/B&amp;gt;.  Make sure to remove the&lt;br /&gt;
line where you set the checkpoint interval to 300 seconds so that the default time of once per hour will be used.&lt;br /&gt;
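&lt;br /&gt;
Concretely, the test-run additions described above amount to two optional lines near the top of the job script:&lt;br /&gt;
&lt;br /&gt;
```shell
# Checkpoint every 5 minutes while testing (remove this line for production
# runs so the default of one checkpoint per hour is used)
export DMTCP_CHECKPOINT_INTERVAL=300

# Turn off checkpoint compression if checkpointing is slow
export DMTCP_GZIP=0
```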
&lt;br /&gt;
After verifying that your code completes using DMTCP and does not take significantly more time or memory, you&lt;br /&gt;
will need to start a run, &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; it after the first checkpoint, then resubmit the same script to make &lt;br /&gt;
sure that it restarts and runs to completion.  If you are working with an array job script, the last step is to try a few&lt;br /&gt;
array tasks at once to make sure there is no conflict between the jobs.&lt;br /&gt;
&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'srun'. srun uses the exact same command-line arguments as sbatch, but you need to add the following arguments at the end: &amp;lt;tt&amp;gt;--pty bash&amp;lt;/tt&amp;gt;. If no node is available with your resource requirements, srun will tell you something like the following:&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
 srun: Force Terminated job 217&lt;br /&gt;
 srun: error: CPU count per node can not be satisfied&lt;br /&gt;
 srun: error: Unable to allocate resources: Requested node configuration is not available&lt;br /&gt;
Note that, like sbatch, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
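&lt;br /&gt;
Putting that together, a request for an interactive shell with modest resources might look like this (the resource values are only examples):&lt;br /&gt;
&lt;br /&gt;
```shell
# One core, 4 GB of RAM, 2 hours, interactive shell
srun --time=2:00:00 --mem=4G --cpus-per-task=1 --pty bash
```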
&lt;br /&gt;
== Connecting to an existing job ==&lt;br /&gt;
You can connect to an existing job using &amp;lt;B&amp;gt;srun&amp;lt;/B&amp;gt; in the same way that the &amp;lt;B&amp;gt;MonitorNode&amp;lt;/B&amp;gt; command&lt;br /&gt;
allowed us to in the old cluster.  This is essentially like using ssh to get into the node where your job is running which&lt;br /&gt;
can be very useful in allowing you to look at files in /tmp/job# or in running &amp;lt;B&amp;gt;htop&amp;lt;/B&amp;gt; to view the &lt;br /&gt;
activity level for your job.&lt;br /&gt;
&lt;br /&gt;
 srun --jobid=# --pty bash                        where '#' is the job ID number&lt;br /&gt;
&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not support modifying job parameters once a job has been submitted. It can be done, but there are numerous catches, and all of the variations can be a bit problematic; it is normally easier to simply delete the job (using '''scancel ''jobid''''') and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
&lt;br /&gt;
As it is unsupported, this is an exercise left to the reader. A starting point is &amp;lt;tt&amp;gt;man scontrol&amp;lt;/tt&amp;gt;&lt;br /&gt;
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;--gres=killable:1&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
&lt;br /&gt;
''Note: This is a submit-time only request, it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;scancel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
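&lt;br /&gt;
As a sketch, marking a job killable looks like this ('myscript.sh' stands in for your own submit script):&lt;br /&gt;
&lt;br /&gt;
```shell
# On the command line:
sbatch --gres=killable:1 myscript.sh

# Or, equivalently, inside the submit script itself:
#SBATCH --gres=killable:1
```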
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
Some users are members of projects that have contributed to Beocat. When a group has contributed nodes, its members get access to a &amp;quot;partition&amp;quot; giving them priority on those nodes.&lt;br /&gt;
&lt;br /&gt;
In most situations, the scheduler will automatically add those priority partitions to the jobs as submitted. You should not need to include a partition list in your job submission.&lt;br /&gt;
&lt;br /&gt;
There are currently just a few exceptions that we will not automatically add:&lt;br /&gt;
* ksu-chem-mri.q&lt;br /&gt;
* ksu-gen-gpu.q&lt;br /&gt;
* ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
To determine the partitions you have access to, run &amp;lt;tt&amp;gt;sinfo -hso '%P'&amp;lt;/tt&amp;gt;&lt;br /&gt;
That will return a list that looks something like this:&lt;br /&gt;
 killable.q&lt;br /&gt;
 batch.q&lt;br /&gt;
 ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
If you have access to any of the non-automatic partitions, and have need of the resources in that partition, you can then alter your &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines to include your new partition:&lt;br /&gt;
 #SBATCH --partition=ksu-gen-highmem.q&lt;br /&gt;
&lt;br /&gt;
== Graphical Applications ==&lt;br /&gt;
Some applications are graphical and need to have some graphical input/output. We currently accomplish this with X11 forwarding&lt;br /&gt;
=== Connecting with an X11 client ===&lt;br /&gt;
==== Windows ====&lt;br /&gt;
If you are running Windows, we recommend MobaXTerm as your file/ssh manager because it is one relatively simple tool that does everything. MobaXTerm also automatically connects with X11 forwarding enabled.&lt;br /&gt;
==== Linux/OSX ====&lt;br /&gt;
Both Linux and OSX can connect in an X11 forwarding mode. Linux will have all of the tools you need installed already, OSX will need [https://www.xquartz.org/ XQuartz] installed.&lt;br /&gt;
&lt;br /&gt;
Then you will need to change your 'ssh' command slightly:&lt;br /&gt;
&lt;br /&gt;
 ssh -Y eid@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
The '''-Y''' argument tells ssh to setup X11 forwarding.&lt;br /&gt;
=== Starting a Graphical job ===&lt;br /&gt;
All graphical jobs, by design, must be interactive, so we'll use the srun command. On a headnode, we run the following:&lt;br /&gt;
 # load an X11 enabled application&lt;br /&gt;
 module load Octave&lt;br /&gt;
 # start an X11 job, sbatch arguments are accepted for srun as well, 1 node, 1 hour, 1 gb of memory&lt;br /&gt;
 srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 octave --gui&lt;br /&gt;
&lt;br /&gt;
Because these jobs are interactive, they may not be able to run at all times, depending on how busy the scheduler is at any point in time. '''--pty --x11''' are required arguments setting up the job, and '''octave --gui''' is the command to run inside the job.&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The sacct tool will read Slurm's accounting database and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== sacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
sacct -j 1122334455 -l&lt;br /&gt;
# if you don't know the job id, you can look at your jobs started since some day:&lt;br /&gt;
sacct -S 2017-01-01&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
{{Scrolling table/top}}&lt;br /&gt;
{{Scrolling table/mid}}&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|218||218||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||12||00:00:00||FAILED||2:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=12,mem=1G,node=1||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.batch||218.batch||batch||||137940K||dwarf37||0||137940K||1576K||dwarf37||0||1576K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||1.36G||0||0||0||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.0||218.0||qqqqstat||||204212K||dwarf37||0||204212K||1420K||dwarf37||0||1420K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||196.52M||Unknown||Unknown||Unknown||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
{{Scrolling table/end}}&lt;br /&gt;
If you look at the columns showing Elapsed and State, you can see that they show 00:00:00 and FAILED respectively. This means that the job started and then promptly ended. This points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
{{Scrolling table/top}}&lt;br /&gt;
{{Scrolling table/mid}}&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|220||220||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:01:27||TIMEOUT||0:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=1,mem=1G,node=1||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.batch||220.batch||batch||||370716K||dwarf37||0||370716K||7060K||dwarf37||0||7060K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:28||CANCELLED||0:15||1.23G||0||0||0||1Gn||0||0.16M||dwarf37||0||0.16M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.0||220.0||sleep||||204212K||dwarf37||0||107916K||1000K||dwarf37||0||620K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:27||CANCELLED||0:15||1.54G||Unknown||Unknown||Unknown||1Gn||0||0.05M||dwarf37||0||0.05M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
{{Scrolling table/end}}&lt;br /&gt;
If you look at the column showing State, we can see some pointers to the issue. The job ran out of time (TIMEOUT) and then was killed (CANCELLED).&lt;br /&gt;
{{Scrolling table/top}}&lt;br /&gt;
{{Scrolling table/mid}}&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|221||221||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:00:00||CANCELLED by 0||0:0||||Unknown||Unknown||Unknown||1Mn||||||||||||||||||||||||cpu=1,mem=1M,node=1||cpu=1,mem=1M,node=1&lt;br /&gt;
|-&lt;br /&gt;
|221.batch||221.batch||batch||||137940K||dwarf37||0||137940K||1144K||dwarf37||0||1144K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:00:01||CANCELLED||0:15||2.62G||0||0||0||1Mn||0||0||dwarf37||65534||0||0||dwarf37||65534||0||||||||cpu=1,mem=1M,node=1&lt;br /&gt;
{{Scrolling table/end}}&lt;br /&gt;
If you look at the column showing State, we see the job was &amp;quot;CANCELLED by 0&amp;quot;. We then look at the AllocTRES column to see our allocated resources, and see that only 1MB of memory was granted. Combine that with the &amp;quot;MaxRSS&amp;quot; column and we see that the memory granted was less than the memory we tried to use; thus the job was &amp;quot;CANCELLED&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=496</id>
		<title>Globus</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=496"/>
		<updated>2019-09-09T21:57:46Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added video link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Transferring Data using Globus ==&lt;br /&gt;
&lt;br /&gt;
[https://www.globus.org/ Globus] is a high-speed data transfer service. It is primarily used to transfer data between research institutions, but can also be used to transfer data between Beocat and a laptop or desktop. We suggest using Globus over other file transfer options if you are transferring large data sets. Globus also allows you to share data with those who do not have Beocat accounts.&lt;br /&gt;
&lt;br /&gt;
Beocat has two Globus servers - one on the main campus network, and one directly connected to [https://www.kanren.net/ KanREN] (essentially, for our purposes, the university's Internet Service Provider). To understand which one you should use, an overview of how Beocat connects to the Internet is useful:&lt;br /&gt;
[[File:CampusNetworkOverview.png|thumb|left|Campus Network Overview - Click for a larger view]]&lt;br /&gt;
As you can see, if you are ON campus, it's faster to use the &amp;quot;DTN&amp;quot; endpoint, but if you are OFF campus, it is faster to use the &amp;quot;FIONA&amp;quot; endpoint. That being said, due to software differences, those two endpoints behave differently, and either one CAN be used on- or off-campus.&lt;br /&gt;
&lt;br /&gt;
== Video Demonstration ==&lt;br /&gt;
Rather than give dozens of screenshots, here is a video demonstrating how to use Globus to transfer files to and from Beocat.&lt;br /&gt;
{{#evt:&lt;br /&gt;
service=youtube&lt;br /&gt;
|id=https://www.youtube.com/watch?v=D0X7x7B_wQs&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=494</id>
		<title>Globus</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=494"/>
		<updated>2019-08-23T21:51:15Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Finished using the on-campus DTN&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Transferring Data using Globus ==&lt;br /&gt;
&lt;br /&gt;
[https://www.globus.org/ Globus] is a high-speed data transfer service. It is primarily used to transfer data between research institutions, but can also be used to transfer data between Beocat and a laptop or desktop. We suggest using Globus over other file transfer options if you are transferring large data sets. Globus also allows you to share data with those who do not have Beocat accounts.&lt;br /&gt;
&lt;br /&gt;
Beocat has two Globus servers - one on the main campus network, and one directly connected to [https://www.kanren.net/ KanREN] (essentially, for our purposes, the university's Internet Service Provider). To understand which one you should use, an overview of how Beocat connects to the Internet is useful:&lt;br /&gt;
[[File:CampusNetworkOverview.png|thumb|left|Campus Network Overview - Click for a larger view]]&lt;br /&gt;
As you can see, if you are ON campus, it's faster to use the &amp;quot;DTN&amp;quot; endpoint, but if you are OFF campus, it is faster to use the &amp;quot;FIONA&amp;quot; endpoint. That being said, due to software differences, those two endpoints behave differently, and either one CAN be used on- or off-campus.&lt;br /&gt;
&lt;br /&gt;
=== Using either endpoint ===&lt;br /&gt;
# Go to [https://www.globus.org https://www.globus.org] and click the &amp;quot;Log In&amp;quot; button in the upper-right corner.&lt;br /&gt;
# Under &amp;quot;Use your existing organizational login&amp;quot;, find &amp;quot;Kansas State University&amp;quot; and click &amp;quot;Continue&amp;quot;&lt;br /&gt;
# You will be redirected to a K-State page where you can log in with your eID credentials&lt;br /&gt;
# You will then be logged into the Globus File Manager.&lt;br /&gt;
# Click on &amp;quot;Endpoints&amp;quot;&lt;br /&gt;
# In the top box, search for &amp;quot;Beocat&amp;quot; (this is case insensitive)&lt;br /&gt;
# This is where the two endpoints diverge in how they work, and where the sections below pick up.&lt;br /&gt;
&lt;br /&gt;
=== Using the on-campus DTN endpoint ===&lt;br /&gt;
# Click on &amp;quot;Kansas State University Beocat&amp;quot;. You will see a screen that gives some information about the Endpoint (which you will most likely ignore).&lt;br /&gt;
# On the right side of the screen, click on &amp;quot;Open in File Manager&amp;quot;&lt;br /&gt;
## You will now see your files (your home directory) in Beocat in the left window. From here, you can perform some basic operations, such as renaming or deleting a file, but you cannot upload or download files. (NOTE: If you click the 3-lined icon in the middle of the page, as circled in red in the following image, the descriptions of the icons will pop out):  [[File:GlobusToolExpansion.png|thumb|Globus Tool Expansion]]&lt;br /&gt;
## On the right side of the screen, click where it says &amp;quot;Transfer or sync to&amp;quot;.&lt;br /&gt;
## Here you can select another research institution to which you have access, or you can click the link to &amp;quot;Install Globus Connect Personal&amp;quot;, which will install software on your own computer.&lt;br /&gt;
## Once you install Globus Connect Personal, you can use that as your other Endpoint.&lt;br /&gt;
# Now you have two systems to view: Beocat and another. You may drag-and-drop files or folders from one location to another, or use the other features to synchronize files between the two. Once initiated, file transfers via Globus are done in the background. You can close your browser window, or log out of your computer - as long as both endpoints are connected, file transfers will continue. (Note: if you're running Globus Connect Personal on your laptop and you shut it off to take it with you, it is obviously no longer connected. However, as long as the transfer hasn't timed out (24 hours?), it should resume when you turn the laptop back on.)&lt;br /&gt;
# You can change directories on either side by clicking on them, or typing the name of the directory (/bulk/username tends to be one frequently used on Beocat).&lt;br /&gt;
&lt;br /&gt;
==== Sharing data from the on-campus DTN endpoint ====&lt;br /&gt;
Now that you have logged into Beocat using Globus (above), you can share files or folders.&lt;br /&gt;
In my example here, I have a folder called &amp;quot;sharedfolder&amp;quot; in my home directory, which I want to make available to others. [[File:Globus Sharing.png|thumb|Screenshot of sharing my 'sharedfolder' directory]]&lt;br /&gt;
# Scroll down so you can see the &amp;quot;sharedfolder&amp;quot; directory, and click on it.&lt;br /&gt;
# Click the 'Share' button&lt;br /&gt;
# Click on 'Add a Shared Endpoint'&lt;br /&gt;
# Give any optional info that may make it easier for those with whom you are sharing to find the share. In my case, I entered 'Beocat Shared Folder Demo' for my Display name and 'Shared Folder for Demonstration Purposes' for the description.&lt;br /&gt;
# Click 'Create Share'&lt;br /&gt;
# By default, only you have access to this share. Click the &amp;quot;Add Permissions - Share With&amp;quot; button to share with anybody you like.&lt;br /&gt;
## A user is typically designated by their email address.&lt;br /&gt;
## A group is a group of users you have defined. There is a link on the left side of the web app to create groups.&lt;br /&gt;
## All users means anybody who logs in with Globus. Note that if you use this option, anybody will be able to search for and use this collection.&lt;br /&gt;
# The people you share with have the permission you give: read and/or write. I ''strongly'' suggest you do not give &amp;quot;all users&amp;quot; write access to your folder. In fact, be very careful of anybody to whom you give write permission. Remember, you are responsible for the content on your account.&lt;br /&gt;
# Complete the process by clicking 'Add Permission'&lt;br /&gt;
# Anybody who has permissions to your folder can now, once they login to Globus, search for the collection you just shared. The 'Sharing' tab for your collection (which is where you are after clicking 'Add Permission') has a &amp;quot;View link for Sharing&amp;quot;, which you can send to any collaborators so they can more easily find the data which you have shared.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=File:Globus_Sharing.png&amp;diff=493</id>
		<title>File:Globus Sharing.png</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=File:Globus_Sharing.png&amp;diff=493"/>
		<updated>2019-08-23T21:31:27Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A screenshot of preparing to share files in Globus&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=492</id>
		<title>Globus</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=492"/>
		<updated>2019-07-12T16:04:33Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Still writing initial page&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Transferring Data using Globus ==&lt;br /&gt;
&lt;br /&gt;
[https://www.globus.org/ Globus] is a high-speed data transfer service. It is primarily used to transfer data between research institutions, but can also be used to transfer data between Beocat and a laptop or desktop. We suggest using Globus over other file transfer options if you are transferring large data sets. Globus also allows you to share data with those who do not have Beocat accounts.&lt;br /&gt;
&lt;br /&gt;
Beocat has two Globus servers - one on the main campus network, and one directly connected to [https://www.kanren.net/ KanREN] (essentially, for our purposes, the university's Internet Service Provider). To understand which one you should use, an overview of how Beocat connects to the Internet is useful:&lt;br /&gt;
[[File:CampusNetworkOverview.png|thumb|left|Campus Network Overview - Click for a larger view]]&lt;br /&gt;
As you can see, if you are ON campus, it's faster to use the &amp;quot;DTN&amp;quot; endpoint, but if you are OFF campus, it is faster to use the &amp;quot;FIONA&amp;quot; endpoint. That being said, due to software differences, those two endpoints behave differently, and either one CAN be used on- or off-campus.&lt;br /&gt;
&lt;br /&gt;
=== Using either endpoint ===&lt;br /&gt;
# Go to [https://www.globus.org https://www.globus.org] and click the &amp;quot;Log In&amp;quot; button in the upper-right corner.&lt;br /&gt;
# Under &amp;quot;Use your existing organizational login&amp;quot;, find &amp;quot;Kansas State University&amp;quot; and click &amp;quot;Continue&amp;quot;&lt;br /&gt;
# You will be redirected to a K-State page where you can log in with your eID credentials&lt;br /&gt;
# You will then be logged into the Globus File Manager.&lt;br /&gt;
# Click on &amp;quot;Endpoints&amp;quot;&lt;br /&gt;
# In the top box, search for &amp;quot;Beocat&amp;quot; (this is case insensitive)&lt;br /&gt;
# This is where the two endpoints diverge in how they work, and where the sections below pick up.&lt;br /&gt;
&lt;br /&gt;
=== Using the on-campus DTN endpoint ===&lt;br /&gt;
# Click on &amp;quot;Kansas State University Beocat&amp;quot;. You will see a screen that gives some information about the Endpoint (which you will most likely ignore).&lt;br /&gt;
# On the right side of the screen, click on &amp;quot;Open in File Manager&amp;quot;&lt;br /&gt;
## You will now see your files in Beocat in the left window. From here, you can perform some basic operations, such as renaming or deleting a file, but you cannot upload or download files. (NOTE: If you click the 3-lined icon in the middle of the page, as circled in red in the following image, the descriptions of the icons will pop out): &lt;br /&gt;
[[File:GlobusToolExpansion.png|thumb|Globus Tool Expansion]]&lt;br /&gt;
## On the right side of the screen, click where it says &amp;quot;Transfer or sync to&amp;quot;.&lt;br /&gt;
## Here you can select another research institution to which you have access, or you can click the link to &amp;quot;Install Globus Connect Personal&amp;quot;, which will install software on your own computer.&lt;br /&gt;
## Once you install Globus Connect Personal,&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=File:CampusNetworkOverview.png&amp;diff=490</id>
		<title>File:CampusNetworkOverview.png</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=File:CampusNetworkOverview.png&amp;diff=490"/>
		<updated>2019-07-12T15:05:45Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=489</id>
		<title>Globus</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Globus&amp;diff=489"/>
		<updated>2019-07-11T21:58:33Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Not ready for publishing, but leaving for the day.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Transferring Data using Globus ==&lt;br /&gt;
&lt;br /&gt;
[https://www.globus.org/ Globus] is a high-speed data transfer service. It is primarily used to transfer data between research institutions, but can also be used to transfer data between Beocat and a laptop or desktop. We suggest using Globus over other file transfer options if you are transferring large data sets. Globus also allows you to share data with those who do not have Beocat accounts.&lt;br /&gt;
&lt;br /&gt;
Beocat has two Globus servers - one on the main campus network, and one directly connected to [https://www.kanren.net/ KanREN] (essentially, for our purposes, the university's Internet Service Provider). To understand which one you should use, an overview of how Beocat connects to the Internet is useful:&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=488</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=488"/>
		<updated>2019-07-11T20:51:17Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: /* What is Beocat? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research in Engineering and Science, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and their collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people, is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of CentOS Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''beocat#beocat''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to log in.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering fine-tuning. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H4&amp;gt;Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/]&amp;lt;BR&amp;gt;&lt;br /&gt;
Read about  [[Installed software]] and languages&amp;lt;BR&amp;gt;&lt;br /&gt;
Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]]&amp;lt;BR&amp;gt;&lt;br /&gt;
&amp;lt;/H4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Running Software on Beocat ==&lt;br /&gt;
Running software on Beocat involves submitting a small job script to the scheduler, which uses the information in that job script to allocate the resources your job needs and then start the code running.  Click on the links below to see examples of how to run applications written in some common languages used on high-performance computers.  The first link for OpenMPI also provides general information on loading modules and using &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; to submit and cancel jobs.&lt;br /&gt;
* Running an [[Installed software#OpenMPI|MPI job]]&lt;br /&gt;
* Running an [[Installed software#R|R job]]&lt;br /&gt;
* Running a [[Installed software#Python|Python job]]&lt;br /&gt;
* Running a [[Installed software#MatLab compiler|Matlab job]]&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This ensures your support request gets entered into our tracker and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details about your problem (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [http://freenode.net/using_the_network.shtml freenode chat servers] in the channel #beocat. This is useful ''especially'' if you have a quick question - you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' and/or '''kylehutson''' in your message, and it should grab our attention. Available from a web browser [[Special:WebChat|here.]]&lt;br /&gt;
&lt;br /&gt;
For in-person help, we offer a weekly open support session as listed in our calendar below. Alternatively, we can often schedule a time to meet with you individually. Just send us an e-mail and provide the details we asked for above.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H4&amp;gt;&lt;br /&gt;
Again, when you email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu] please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/H4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We now have [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet to us to find answers to quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
Here are some recent tweets:&lt;br /&gt;
&amp;lt;ShoogleTweet limit=&amp;quot;6&amp;quot;&amp;gt;KSUBeocat&amp;lt;/ShoogleTweet&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access? ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributions to Beocat will prioritize your access to Beocat.&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]]&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]]&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar&lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com&lt;br /&gt;
|color=711616&lt;br /&gt;
|view=AGENDA&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Policy&amp;diff=481</id>
		<title>Policy</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Policy&amp;diff=481"/>
		<updated>2019-05-21T18:08:01Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: /* Acknowledging Use of Beocat Resources and/or Personnel in Publications */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== K-State Information Technology Usage Policy ==&lt;br /&gt;
As Beocat is a K-State resource, the following usage policy also applies: http://www.k-state.edu/policies/ppm/3420.html&lt;br /&gt;
&lt;br /&gt;
Please pay close attention to .050-2 and .050-4, as violations of these are especially egregious.&lt;br /&gt;
== Classified, PII/HIPAA, and/or export controlled data ==&lt;br /&gt;
Beocat is not equipped to handle [[wikipedia:Classified_information|Classified]], [[wikipedia:Personally_identifiable_information|PII]], [[wikipedia:Health_Insurance_Portability_and_Accountability_Act|HIPAA]] or [[wikipedia:Export_Administration_Regulations|export controlled]] data.&lt;br /&gt;
== CUI Data ==&lt;br /&gt;
For those who need to store and/or compute on [[wikipedia:Controlled Unclassified Information|CUI Data]], we may be able to work something out. You must speak with us before storing any such data on Beocat.&lt;br /&gt;
== Maintenance ==&lt;br /&gt;
Beocat reserves the right to a 24-hour maintenance period every other week. However, this maintenance is not always necessary. Maintenance intentions and reservations will always be announced on the mailing list 2 weeks before an actual maintenance period takes effect.&lt;br /&gt;
&lt;br /&gt;
== Head node computational tasks ==&lt;br /&gt;
The head node serves as a shell server and development environment for Beocat users. We wish to keep this machine running responsively to make work easier. We do not have a problem with running simple post-processing work on the head node directly. However, if your process seems too computation- or memory-intensive, it may have its priority severely reduced or may be killed completely. If in doubt, ask.&lt;br /&gt;
&lt;br /&gt;
Due to abuses of the head node, there are now strict limits in place. If a process uses more than 4GB of RSS memory or 6GB of virtual memory, it will be killed automatically. RSS memory is limited to 12GB across all users. CPU usage is allocated with a fair-share algorithm; all users have equivalent access to CPU time.&lt;br /&gt;
== Backups ==&lt;br /&gt;
For those of you using our hosted virtual machines, no backups of said machines or data are made.&lt;br /&gt;
&lt;br /&gt;
At this point in time, due to the size of our main storage, we are unable to provide backups of any data.&lt;br /&gt;
&lt;br /&gt;
== Home Directory Quota ==&lt;br /&gt;
Each home directory has a quota of 1TB. If you use more than 1TB in your home directory, we will notify you and provide a window for resolving the issue. If no action is taken, we will move the data elsewhere.&lt;br /&gt;
&lt;br /&gt;
== Bulk Usage ==&lt;br /&gt;
We have no quota for usage within our /bulk filesystem. To keep growth from getting out of control, files that have not been read from or written to in 2 years will be automatically removed (unless prior arrangements are made with Beocat Staff).&lt;br /&gt;
&lt;br /&gt;
== Account deactivation ==&lt;br /&gt;
If your account meets any of the following criteria:&lt;br /&gt;
* inactive for 2 years&lt;br /&gt;
* invalid e-mail address on file&lt;br /&gt;
* unsubscribed from our mailing list&lt;br /&gt;
we will mark the account for archival and remove your ability to log in. If you should need the account again, please fill out our [https://account.beocat.cis.ksu.edu/user account request form.]&lt;br /&gt;
&lt;br /&gt;
== Acknowledging Use of Beocat Resources and/or Personnel in Publications ==&lt;br /&gt;
Click [[PapersAndGrants|here]] for a list of publications that used Beocat resources and/or personnel.&lt;br /&gt;
&lt;br /&gt;
# A publication that is based in whole or in part on computations performed using Beocat systems, including but not limited to hardware, storage, networking and/or software, should incorporate the following text into the Acknowledgements section of the publication:&lt;br /&gt;
#* [Some of] The computing for this project was performed on the Beocat Research Cluster at Kansas State University, which is funded in part by NSF grants CNS-1006860, EPS-1006860, EPS-0919443, ACI-1440548, CHE-1726332, and NIH P20GM113109.&lt;br /&gt;
# If any Beocat staff member(s) assisted with the work in any way, then for each Beocat staff member that was involved in the work:&lt;br /&gt;
## If the publication includes a substantial amount of text about the work that the Beocat staff member contributed to, and if the Beocat staff member did a substantial amount of development or optimization of software, and/or they contributed significantly to the writing of the publication, then that staff member should be included as a co-author on that publication, with author order to be negotiated among the authors. &lt;br /&gt;
##; NOTE : This requirement can be waived for tenure track (but not yet tenured) faculty if the faculty member has a compelling tenure-related interest in, for example, producing single-author publications.&lt;br /&gt;
## If the conditions above don't apply, then the Beocat staff member should be acknowledged by name and job title in the Acknowledgements section of the paper. &lt;br /&gt;
##; For example : Beocat Director Daniel Andresen and Beocat Systems Administrator Adam Tygart provided valuable technical expertise. &lt;br /&gt;
# A citation for your publication should be added to our [[PapersAndGrants|papers and grants page]].&lt;br /&gt;
&lt;br /&gt;
== IRB Statement ==&lt;br /&gt;
=== INFORMATION ===&lt;br /&gt;
If you are a Beocat user, whenever you submit a job, delete a job, or otherwise interact with the&lt;br /&gt;
scheduler, automatic information about this is logged and will be used in this study. This will include&lt;br /&gt;
information about the job including requested resources (memory, processors, duration, modules, etc.).&lt;br /&gt;
We may send you a followup request for more information if, for example, you delete a job. Your&lt;br /&gt;
participation is optional.&lt;br /&gt;
=== RISKS ===&lt;br /&gt;
There are no anticipated risks with participation in this study other than the time responding to a&lt;br /&gt;
followup information request.&lt;br /&gt;
=== BENEFITS ===&lt;br /&gt;
Your participation in our studies will help us learn how to optimize the performance of Beocat and other&lt;br /&gt;
HPC resources, which will help our users and our overall science and education efforts.&lt;br /&gt;
=== CONFIDENTIALITY ===&lt;br /&gt;
All information gathered in this study will be kept confidential. All information about your jobs will not&lt;br /&gt;
use real names or eIDs, and any publications will aggregate overall information.&lt;br /&gt;
=== CONTACT ===&lt;br /&gt;
If you have any questions at any time about the study or procedures, please contact Dr. Daniel Andresen&lt;br /&gt;
at Kansas State University, Department of Computer Science: Phone: (785) 532-7914 or Email:&lt;br /&gt;
dan@ksu.edu.&lt;br /&gt;
If you feel you have not been treated according to the description in this page, or your rights as a&lt;br /&gt;
participant in research have been violated during the course of this study, you may contact the office for&lt;br /&gt;
the Kansas State University Committee on Research Involving Human Subjects, 203 Fairchild Hall, Kansas&lt;br /&gt;
State University, Manhattan, KS 66506. (785) 532-3224.&lt;br /&gt;
=== PARTICIPATION ===&lt;br /&gt;
&lt;br /&gt;
Your participation in this study is strictly voluntary; you may refuse to participate in any followup&lt;br /&gt;
surveys or withdraw your information from the study without penalty. If you decide to participate, you&lt;br /&gt;
may withdraw from the study at any time without penalty. To remove your data from use in the study,&lt;br /&gt;
contact Dr. Daniel Andresen as described above.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=448</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=448"/>
		<updated>2019-02-26T00:05:28Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Fixed initialization steps for interactive example&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Drinking from the Firehose ==&lt;br /&gt;
For a complete list of all installed modules, see [[ModuleList]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains, and versions of each, to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; gcccuda:    GNU Compiler Collection (GCC) based compiler toolchain, along with CUDA toolkit.&lt;br /&gt;
; gmvapich2:    GNU Compiler Collection (GCC) based compiler toolchain, including MVAPICH2 for MPI support.&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; gompic:    GNU Compiler Collection (GCC) based compiler toolchain along with CUDA toolkit, including OpenMPI for MPI support with CUDA features enabled.&lt;br /&gt;
; goolfc:    GCC based compiler toolchain '''with CUDA support''', and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with the 'module list' command:&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
For software we provide, the toolchain used to compile it is always part of the &amp;quot;version&amp;quot; of the module that you load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, you may see inconsistent behavior, such as link errors or crashes at runtime.&lt;br /&gt;
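Toolchain hygiene can be sketched as follows; the module name and version below are examples drawn from this page, and 'module purge' is the standard environment-modules command for unloading everything:

```shell
# Start from a clean slate before switching toolchains, so modules from
# one toolchain are never layered on top of another.
module purge                  # unload all currently loaded modules
module load foss/2017beocatb  # load a single toolchain (example version)
module list                   # confirm only one toolchain's modules appear
```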
== Most Commonly Used Software ==&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide many versions. You are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module spider OpenMPI':&lt;br /&gt;
&lt;br /&gt;
* OpenMPI/2.0.2-GCC-6.3.0-2.27&lt;br /&gt;
* OpenMPI/2.0.2-iccifort-2017.1.132-GCC-6.3.0-2.27&lt;br /&gt;
* OpenMPI/2.1.1-GCC-6.4.0-2.28&lt;br /&gt;
* OpenMPI/2.1.1-GCC-7.2.0-2.29&lt;br /&gt;
* OpenMPI/2.1.1-gcccuda-2017b&lt;br /&gt;
* OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
* OpenMPI/2.1.1-iccifort-2018.0.128-GCC-7.2.0-2.29&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
We currently provide (module -r spider '^R$'):&lt;br /&gt;
* R/3.4.0-foss-2017beocatb-X11-20170314&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We install a small number of R packages by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing, you can test the package before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
&lt;br /&gt;
# Now let's do some actual work. This starts R and runs the file myscript.R&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
We currently provide (module spider Java):&lt;br /&gt;
* Java/1.8.0_131&lt;br /&gt;
* Java/1.8.0_144&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
We currently provide (module spider Python):&lt;br /&gt;
* Python/2.7.13-foss-2017beocatb&lt;br /&gt;
* Python/2.7.13-GCCcore-7.2.0-bare&lt;br /&gt;
* Python/2.7.13-iomkl-2017a&lt;br /&gt;
* Python/2.7.13-iomkl-2017beocatb&lt;br /&gt;
* Python/3.6.3-foss-2017b&lt;br /&gt;
* Python/3.6.3-foss-2017beocatb&lt;br /&gt;
* Python/3.6.3-iomkl-2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you need packages that we do not have installed, you should use [https://virtualenv.pypa.io/en/stable/userguide/ virtualenv] to set up a virtual Python environment in your home directory. This will let you install Python packages as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python&lt;br /&gt;
module load Python/3.6.3-iomkl-2017beocatb&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(This loads Python for your current session only. It will not persist after you log off, so you must rerun this command every time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that &amp;lt;code&amp;gt;virtualenv --help&amp;lt;/code&amp;gt; has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
virtualenv test&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(This activates the virtual environment for your current session only. It will not persist after you log off, so you must rerun this command every time you log on.)&lt;br /&gt;
* You can now install the Python packages you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
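The steps above can be condensed into a single sketch. This version uses python3's built-in venv module, which behaves like the virtualenv tool for this purpose; the 'demo' name and path are just examples:

```shell
# Create, activate, use, and leave a virtual environment in one pass.
# (--without-pip keeps the sketch fast; omit it if you need pip inside.)
python3 -m venv --without-pip "$HOME/virtualenvs/demo"
source "$HOME/virtualenvs/demo/bin/activate"
python -c 'import sys; print(sys.prefix)'   # prints a path inside the env
deactivate                                  # restore your normal shell
```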
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment 'test':&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/3.6.3-iomkl-2017beocatb&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large-scale data processing.&lt;br /&gt;
It can be used with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node and 1 core with 10 GB of memory for 24 hours and, if the&lt;br /&gt;
resources are available, will drop you into a bash shell on that node.&lt;br /&gt;
&lt;br /&gt;
  srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&lt;br /&gt;
We have some sample python based Spark code you can try out that came from the &lt;br /&gt;
exercises and homework from the PSC Spark workshop.  &lt;br /&gt;
&lt;br /&gt;
  mkdir spark-test&lt;br /&gt;
  cd spark-test&lt;br /&gt;
  cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare .&lt;br /&gt;
&lt;br /&gt;
The sample code requires 'nltk' and 'numpy' packages, so the first time you run it, you need to create the virtualenv and install these packages.&lt;br /&gt;
&lt;br /&gt;
  module load Python&lt;br /&gt;
  mkdir ~/virtualenvs&lt;br /&gt;
  cd ~/virtualenvs&lt;br /&gt;
  virtualenv spark-test&lt;br /&gt;
  source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
  pip install nltk&lt;br /&gt;
  pip install numpy&lt;br /&gt;
&lt;br /&gt;
On any subsequent runs, you can then just enter that virtualenv without running all of the above commands:&lt;br /&gt;
&lt;br /&gt;
  module load Python&lt;br /&gt;
  source ~/virtualenvs/spark-test/bin/activate&lt;br /&gt;
 &lt;br /&gt;
Then load the Spark module (Python should already be loaded from above), change to the sample directory, fire up pyspark, and run the sample code.&lt;br /&gt;
&lt;br /&gt;
  module load Spark&lt;br /&gt;
  cd ~/spark-test/Shakespeare&lt;br /&gt;
  pyspark&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=shakespeare&lt;br /&gt;
  #SBATCH --mem=10G&lt;br /&gt;
  #SBATCH --time=01:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=1&lt;br /&gt;
  &lt;br /&gt;
  # Load Spark and Python (version 3 here)&lt;br /&gt;
  module load Spark&lt;br /&gt;
  module load Python&lt;br /&gt;
  &lt;br /&gt;
  spark-submit shakespeare.py&lt;br /&gt;
&lt;br /&gt;
When you run interactively, pyspark initializes your Spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt; for you.&lt;br /&gt;
When you submit jobs through the Slurm queue instead, you will need to create it manually,&lt;br /&gt;
as in the sample Python code below.&lt;br /&gt;
&lt;br /&gt;
  # If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
  try:&lt;br /&gt;
     sc&lt;br /&gt;
  except NameError:&lt;br /&gt;
     from pyspark import SparkConf, SparkContext&lt;br /&gt;
     conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
     sc = SparkContext(conf = conf)&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl tracks the stable releases of perl. Unfortunately, there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
If you need a newer version (or threads), just load one we provide in our modules (module spider Perl):&lt;br /&gt;
* Perl/5.26.0-foss-2017beocatb&lt;br /&gt;
* Perl/5.26.0-iompi-2017beocatb&lt;br /&gt;
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now let's do some actual work.&lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MATLAB code ===&lt;br /&gt;
&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
The 64-bit version of Octave can be loaded using the command above.  Octave can then be used&lt;br /&gt;
to work with MATLAB code on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler.  Octave is designed to run MATLAB code, but it has limitations and does not support&lt;br /&gt;
everything that MATLAB itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module purge&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MATLAB compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single-user license&amp;lt;/B&amp;gt; for the MATLAB compiler and the most common toolboxes,&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox,&lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we only have a &amp;lt;B&amp;gt;single-user license&amp;lt;/B&amp;gt;, you will be expected to develop your MATLAB code&lt;br /&gt;
with Octave or elsewhere, such as on a laptop or departmental server.  Once you're ready to do large runs, you&lt;br /&gt;
move your code to Beocat, compile the MATLAB code into an executable, and submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MATLAB compiler, load the MATLAB module to compile code and&lt;br /&gt;
load the mcr module to run the resulting MATLAB executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks out the mcc compiler for a minimum of 30 minutes, so if another user is compiling code,&lt;br /&gt;
you unfortunately may need to wait up to 30 minutes to compile your own code.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module purge&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files - compiled C and C++ code with MATLAB bindings - you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because code often requires adding several directories to the MATLAB path, as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and document the steps needed to compile your MATLAB code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single-user license&amp;lt;/B&amp;gt; for MATLAB, so the model is to develop and debug your MATLAB code&lt;br /&gt;
elsewhere or using Octave on Beocat; then you can compile the MATLAB code into an executable and run it without&lt;br /&gt;
limits on Beocat.&lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
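The usual pattern for autotools-style software looks like the sketch below; 'sometool' and the paths are hypothetical, and real packages may use cmake or another build system instead:

```shell
# Install into your home directory instead of system-wide (no sudo needed).
mkdir -p "$HOME/software"
# tar xzf sometool-1.0.tar.gz && cd sometool-1.0   # unpack the source
# ./configure --prefix="$HOME/software"            # target your home dir
# make && make install
# Then put the installed binaries on your PATH:
export PATH="$HOME/software/bin:$PATH"
```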
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=447</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=447"/>
		<updated>2019-02-25T23:37:11Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Typos and prereqs for spark&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Drinking from the Firehose ==&lt;br /&gt;
For a complete list of all installed modules, see [[ModuleList]]&lt;br /&gt;
&lt;br /&gt;
== Toolchains ==&lt;br /&gt;
A toolchain is a set of compilers, libraries and applications that are needed to build software. Some software functions better when using specific toolchains.&lt;br /&gt;
&lt;br /&gt;
We provide a good number of toolchains, and versions of each, to make sure your applications will compile and/or run correctly.&lt;br /&gt;
&lt;br /&gt;
These toolchains include (you can run 'module keyword toolchain'):&lt;br /&gt;
; foss:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; gcccuda:    GNU Compiler Collection (GCC) based compiler toolchain, along with CUDA toolkit.&lt;br /&gt;
; gmvapich2:    GNU Compiler Collection (GCC) based compiler toolchain, including MVAPICH2 for MPI support.&lt;br /&gt;
; gompi:    GNU Compiler Collection (GCC) based compiler toolchain, including OpenMPI for MPI support.&lt;br /&gt;
; gompic:    GNU Compiler Collection (GCC) based compiler toolchain along with CUDA toolkit, including OpenMPI for MPI support with CUDA features enabled.&lt;br /&gt;
; goolfc:    GCC based compiler toolchain '''with CUDA support''', and including OpenMPI for MPI support, OpenBLAS (BLAS and LAPACK support), FFTW and ScaLAPACK.&lt;br /&gt;
; iomkl:    Intel Cluster Toolchain Compiler Edition provides Intel C/C++ and Fortran compilers, Intel MKL &amp;amp; OpenMPI.&lt;br /&gt;
&lt;br /&gt;
You can run 'module spider $toolchain' to see the versions we have:&lt;br /&gt;
 $ module spider iomkl&lt;br /&gt;
* iomkl/2017a&lt;br /&gt;
* iomkl/2017b&lt;br /&gt;
* iomkl/2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you load one of those (module load iomkl/2017b), you can see the other modules and versions of software that it loaded with the 'module list' command:&lt;br /&gt;
 $ module list&lt;br /&gt;
 Currently Loaded Modules:&lt;br /&gt;
   1) icc/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   2) binutils/2.28-GCCcore-6.4.0&lt;br /&gt;
   3) ifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   4) iccifort/2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   5) GCCcore/6.4.0&lt;br /&gt;
   6) numactl/2.0.11-GCCcore-6.4.0&lt;br /&gt;
   7) hwloc/1.11.7-GCCcore-6.4.0&lt;br /&gt;
   8) OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
   9) iompi/2017b&lt;br /&gt;
  10) imkl/2017.3.196-iompi-2017b&lt;br /&gt;
  11) iomkl/2017b&lt;br /&gt;
&lt;br /&gt;
As you can see, toolchains can depend on each other. For instance, the iomkl toolchain depends on iompi, which depends on iccifort, which depends on icc and ifort, which depend on GCCcore. Hence it is very important that the correct versions of all related software are loaded.&lt;br /&gt;
&lt;br /&gt;
For software we provide, the toolchain used to compile it is always part of the &amp;quot;version&amp;quot; of the module that you load.&lt;br /&gt;
&lt;br /&gt;
If you mix toolchains, you may see inconsistent behavior, such as link errors or crashes at runtime.&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
We provide many versions. You are most likely better off directly loading a toolchain or application to make sure you get the right version, but you can see the versions we have with 'module spider OpenMPI':&lt;br /&gt;
&lt;br /&gt;
* OpenMPI/2.0.2-GCC-6.3.0-2.27&lt;br /&gt;
* OpenMPI/2.0.2-iccifort-2017.1.132-GCC-6.3.0-2.27&lt;br /&gt;
* OpenMPI/2.1.1-GCC-6.4.0-2.28&lt;br /&gt;
* OpenMPI/2.1.1-GCC-7.2.0-2.29&lt;br /&gt;
* OpenMPI/2.1.1-gcccuda-2017b&lt;br /&gt;
* OpenMPI/2.1.1-iccifort-2017.4.196-GCC-6.4.0-2.28&lt;br /&gt;
* OpenMPI/2.1.1-iccifort-2018.0.128-GCC-7.2.0-2.29&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
We currently provide (module -r spider '^R$'):&lt;br /&gt;
* R/3.4.0-foss-2017beocatb-X11-20170314&lt;br /&gt;
&lt;br /&gt;
==== Packages ====&lt;br /&gt;
We install a small number of R packages by default; these are generally packages that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own R Packages ====&lt;br /&gt;
To install your own package, log in to Beocat and start R interactively:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load R&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing, you can test the package before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;R&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;sbatch myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.sbatch&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
&lt;br /&gt;
# Now let's do some actual work. This starts R and runs the file myscript.R&lt;br /&gt;
module load R&lt;br /&gt;
R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sbatch submit-R.sbatch&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
We currently provide (module spider Java):&lt;br /&gt;
* Java/1.8.0_131&lt;br /&gt;
* Java/1.8.0_144&lt;br /&gt;
&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
We currently provide (module spider Python):&lt;br /&gt;
* Python/2.7.13-foss-2017beocatb&lt;br /&gt;
* Python/2.7.13-GCCcore-7.2.0-bare&lt;br /&gt;
* Python/2.7.13-iomkl-2017a&lt;br /&gt;
* Python/2.7.13-iomkl-2017beocatb&lt;br /&gt;
* Python/3.6.3-foss-2017b&lt;br /&gt;
* Python/3.6.3-foss-2017beocatb&lt;br /&gt;
* Python/3.6.3-iomkl-2017beocatb&lt;br /&gt;
&lt;br /&gt;
If you need modules that we do not have installed, you should use [https://virtualenv.pypa.io/en/stable/userguide/ virtualenv] to set up a virtual Python environment in your home directory. This lets you install Python packages as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Load Python&lt;br /&gt;
module load Python/3.6.3-iomkl-2017beocatb&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(This loads Python for your current session only. Modules are not loaded automatically at login, so you must rerun this command each time you log on.)&lt;br /&gt;
* Create a location for your virtual environments (optional, but helps keep things organized)&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ~/virtualenvs&lt;br /&gt;
cd ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a default virtual environment called 'test'. Note that &amp;lt;code&amp;gt;virtualenv --help&amp;lt;/code&amp;gt; has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
virtualenv test&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments (the virtual environment name should be in the output):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
ls ~/virtualenvs&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Activate one of them&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
(This activates the virtual environment for your current session only. It is not reactivated automatically at login, so you must rerun this command each time you log on.)&lt;br /&gt;
* You can now install the Python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
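To confirm that installs are going into the active environment rather than the module's own Python, you can check which interpreter is first on your PATH; a quick sketch (assuming an environment has been activated):&lt;br /&gt;

```shell
# Show which python the shell will run; inside an activated virtualenv
# this points into the environment's bin/ directory.
command -v python || command -v python3
```

If the path shown is not inside ~/virtualenvs/&amp;lt;name&amp;gt;/bin, the environment is not active in the current shell.&lt;br /&gt;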
&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment 'test':&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
module load Python/3.6.3-iomkl-2017beocatb&lt;br /&gt;
source ~/virtualenvs/test/bin/activate&lt;br /&gt;
export PYTHONDONTWRITEBYTECODE=1&lt;br /&gt;
python ~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://spark.apache.org/ Spark] ===&lt;br /&gt;
&lt;br /&gt;
Spark is a framework for large-scale data processing.&lt;br /&gt;
It can be used with Python, R, Scala, Java, and SQL.&lt;br /&gt;
Spark can be run on Beocat interactively or through the Slurm queue.&lt;br /&gt;
&lt;br /&gt;
To run interactively, you must first request a node or nodes from the Slurm queue.&lt;br /&gt;
The line below requests 1 node, 1 core, and 10 GB of memory for 24 hours and, when&lt;br /&gt;
the resources are available, will drop you into a bash shell on that node.&lt;br /&gt;
&lt;br /&gt;
  srun -J srun -N 1 -n 1 -t 24:00:00 --mem=10G --pty bash&lt;br /&gt;
&lt;br /&gt;
We have some sample Python-based Spark code you can try out that comes from the &lt;br /&gt;
exercises and homework of the PSC Spark workshop.  &lt;br /&gt;
&lt;br /&gt;
  mkdir spark-test&lt;br /&gt;
  cd spark-test&lt;br /&gt;
  cp -rp /homes/daveturner/projects/PSC-BigData-Workshop/Shakespeare .&lt;br /&gt;
&lt;br /&gt;
Then load the Spark and Python modules, change to the sample directory, fire up pyspark, and run the sample code.&lt;br /&gt;
&lt;br /&gt;
  module load Spark Python&lt;br /&gt;
  cd Shakespeare&lt;br /&gt;
  pyspark&lt;br /&gt;
  &amp;gt;&amp;gt;&amp;gt; exec(open(&amp;quot;shakespeare.py&amp;quot;).read())&lt;br /&gt;
&lt;br /&gt;
You can work interactively from the pyspark prompt (&amp;gt;&amp;gt;&amp;gt;) in addition to running scripts as above.&lt;br /&gt;
&lt;br /&gt;
The Shakespeare directory also contains a sample sbatch submit script that will run the &lt;br /&gt;
same shakespeare.py code through the Slurm batch queue.  &lt;br /&gt;
&lt;br /&gt;
  #!/bin/bash -l&lt;br /&gt;
  #SBATCH --job-name=shakespeare&lt;br /&gt;
  #SBATCH --mem=10G&lt;br /&gt;
  #SBATCH --time=01:00:00&lt;br /&gt;
  #SBATCH --nodes=1&lt;br /&gt;
  #SBATCH --ntasks-per-node=1&lt;br /&gt;
  &lt;br /&gt;
  # Load Spark and Python (version 3 here)&lt;br /&gt;
  module load Spark&lt;br /&gt;
  module load Python&lt;br /&gt;
  &lt;br /&gt;
  spark-submit shakespeare.py&lt;br /&gt;
&lt;br /&gt;
When you run interactively, pyspark initializes your Spark context &amp;lt;B&amp;gt;sc&amp;lt;/B&amp;gt; for you.&lt;br /&gt;
When you submit jobs through the Slurm queue instead, you will need to create it&lt;br /&gt;
manually, as in the sample Python code below.&lt;br /&gt;
&lt;br /&gt;
  # If there is no Spark Context (not running interactive from pyspark), create it&lt;br /&gt;
  try:&lt;br /&gt;
     sc&lt;br /&gt;
  except NameError:&lt;br /&gt;
     from pyspark import SparkConf, SparkContext&lt;br /&gt;
     conf = SparkConf().setMaster(&amp;quot;local&amp;quot;).setAppName(&amp;quot;App&amp;quot;)&lt;br /&gt;
     sc = SparkContext(conf = conf)&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl tracks the stable releases of perl. Unfortunately, there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
&lt;br /&gt;
If you need a newer version (or threads), just load one we provide in our modules (module spider Perl):&lt;br /&gt;
* Perl/5.26.0-foss-2017beocatb&lt;br /&gt;
* Perl/5.26.0-iompi-2017beocatb&lt;br /&gt;
&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;sbatch myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSlurm#Running_from_a_sbatch_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --mem-per-cpu=1G&lt;br /&gt;
# Now we tell Slurm how long we expect our work to take: 15 minutes (D-H:MM:SS)&lt;br /&gt;
#SBATCH --time=0-0:15:00&lt;br /&gt;
# Now let's do some actual work. &lt;br /&gt;
module load Perl&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Octave for MatLab codes ===&lt;br /&gt;
&lt;br /&gt;
  module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
The 64-bit version of Octave can be loaded using the command above.  Octave can then be used&lt;br /&gt;
to work with MATLAB code on the head node and to submit jobs to the compute nodes through the&lt;br /&gt;
sbatch scheduler.  Octave is designed to run MATLAB code, but it has limitations and does not&lt;br /&gt;
support everything that MATLAB itself does.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=octave&lt;br /&gt;
#SBATCH --output=octave.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module purge&lt;br /&gt;
module load Octave/4.2.1-foss-2017beocatb-enable64&lt;br /&gt;
&lt;br /&gt;
octave &amp;lt; matlab_code.m&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== MatLab compiler ===&lt;br /&gt;
&lt;br /&gt;
Beocat also has a &amp;lt;B&amp;gt;single-user license&amp;lt;/B&amp;gt; for the MatLab compiler and the most common toolboxes&lt;br /&gt;
including the Parallel Computing Toolbox, Optimization Toolbox, Statistics and Machine Learning Toolbox,&lt;br /&gt;
Image Processing Toolbox, Curve Fitting Toolbox, Neural Network Toolbox, Symbolic Math Toolbox, &lt;br /&gt;
Global Optimization Toolbox, and the Bioinformatics Toolbox.&lt;br /&gt;
&lt;br /&gt;
Since we only have a &amp;lt;B&amp;gt;single-user license&amp;lt;/B&amp;gt;, you will be expected to develop your MATLAB code&lt;br /&gt;
with Octave or elsewhere on a laptop or departmental server.  Once you're ready to do large runs, you&lt;br /&gt;
move your code to Beocat, compile the MATLAB code into an executable, and submit as many jobs as&lt;br /&gt;
you want to the scheduler.  To use the MATLAB compiler, load the MATLAB module to compile code and&lt;br /&gt;
the mcr module to run the resulting MATLAB executable.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have addpath() commands in your code, you will need to wrap them in an &amp;quot;if ~isdeployed&amp;quot; block and tell the&lt;br /&gt;
compiler to include that path via the -I flag.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;MATLAB&amp;quot;&amp;gt;&lt;br /&gt;
% wrap addpath() calls like so:&lt;br /&gt;
if ~isdeployed&lt;br /&gt;
    addpath('./another/folder/with/code/')&lt;br /&gt;
end&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
NOTE:  The license manager checks out the mcc compiler for a minimum of 30 minutes, so if another user compiles a code,&lt;br /&gt;
you may unfortunately need to wait up to 30 minutes to compile your own code.&lt;br /&gt;
&lt;br /&gt;
Compiling with additional paths:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
module load MATLAB&lt;br /&gt;
mcc -m matlab_main_code.m -I ./another/folder/with/code/ -o matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Any directories added with addpath() will need to be added to the list of compile options as -I arguments.  You&lt;br /&gt;
can have multiple -I arguments in your compile command.&lt;br /&gt;
&lt;br /&gt;
Here is an example job submission script.  Modify time, memory, tasks-per-node, and job name as you see fit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=matlab&lt;br /&gt;
#SBATCH --output=matlab.o%j&lt;br /&gt;
#SBATCH --time=1:00:00&lt;br /&gt;
#SBATCH --mem=4G&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=1&lt;br /&gt;
&lt;br /&gt;
module purge&lt;br /&gt;
module load mcr&lt;br /&gt;
&lt;br /&gt;
./matlab_executable_name&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For those who make use of mex files - compiled C and C++ code with MATLAB bindings - you will need to add these&lt;br /&gt;
files to the compiled archive via the -a flag.  See the behavior of this flag in the [https://www.mathworks.com/help/compiler/mcc.html compiler documentation].  You can either target specific .mex files or entire directories.&lt;br /&gt;
&lt;br /&gt;
Because codes often require adding several directories to the MATLAB path as well as mex files from several locations,&lt;br /&gt;
we recommend writing a script to preserve and document the steps needed to compile your MATLAB code.  Here is an&lt;br /&gt;
abbreviated example from a current user:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
&lt;br /&gt;
module load MATLAB&lt;br /&gt;
&lt;br /&gt;
cd matlabPyrTools/MEX/&lt;br /&gt;
&lt;br /&gt;
# compile mex files&lt;br /&gt;
mex upConv.c convolve.c wrap.c edges.c&lt;br /&gt;
mex corrDn.c convolve.c wrap.c edges.c&lt;br /&gt;
mex histo.c&lt;br /&gt;
mex innerProd.c&lt;br /&gt;
&lt;br /&gt;
cd ../..&lt;br /&gt;
&lt;br /&gt;
mcc -m mongrel_creation.m \&lt;br /&gt;
  -I ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -I ./matlabPyrTools/ \&lt;br /&gt;
  -I ./FastICA/ \&lt;br /&gt;
  -a ./matlabPyrTools/MEX/ \&lt;br /&gt;
  -a ./texturesynth/ \&lt;br /&gt;
  -o mongrel_creation_binary&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, we only have a &amp;lt;B&amp;gt;single-user license&amp;lt;/B&amp;gt; for MATLAB, so the model is to develop and debug your MATLAB code&lt;br /&gt;
elsewhere or using Octave on Beocat; then you can compile the MATLAB code into an executable and run it without&lt;br /&gt;
limits on Beocat.  &lt;br /&gt;
&lt;br /&gt;
For more info on the mcc compiler see:  https://www.mathworks.com/help/compiler/mcc.html&lt;br /&gt;
&lt;br /&gt;
=== COMSOL ===&lt;br /&gt;
Beocat has no license for COMSOL. If you want to use it, you must provide your own.&lt;br /&gt;
&lt;br /&gt;
 module spider COMSOL&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
  COMSOL: COMSOL/5.3&lt;br /&gt;
 ----------------------------------------------------------------------------&lt;br /&gt;
    Description:&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling&lt;br /&gt;
      and simulating scientific and engineering problems&lt;br /&gt;
 &lt;br /&gt;
    This module can be loaded directly: module load COMSOL/5.3&lt;br /&gt;
 &lt;br /&gt;
    Help:&lt;br /&gt;
      &lt;br /&gt;
      Description&lt;br /&gt;
      ===========&lt;br /&gt;
      COMSOL Multiphysics software, an interactive environment for modeling and &lt;br /&gt;
 simulating scientific and engineering problems&lt;br /&gt;
      You must provide your own license.&lt;br /&gt;
      export LM_LICENSE_FILE=/the/path/to/your/license/file&lt;br /&gt;
      *OR*&lt;br /&gt;
      export LM_LICENSE_FILE=$LICENSE_SERVER_PORT@$LICENSE_SERVER_HOSTNAME&lt;br /&gt;
      e.g. export LM_LICENSE_FILE=1719@some.flexlm.server.ksu.edu&lt;br /&gt;
      &lt;br /&gt;
      More information&lt;br /&gt;
      ================&lt;br /&gt;
       - Homepage: https://www.comsol.com/&lt;br /&gt;
==== Graphical COMSOL ====&lt;br /&gt;
Running COMSOL in graphical mode on a cluster is generally a bad idea. If you choose to run it in graphical mode on a compute node, you will need to do something like the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Connect to the cluster with X11 forwarding (ssh -Y or mobaxterm)&lt;br /&gt;
# load the comsol module on the headnode&lt;br /&gt;
module load COMSOL&lt;br /&gt;
# export your comsol license as mentioned above, and tell the scheduler to run the software&lt;br /&gt;
srun --nodes=1 --time=1:00:00 --mem=1G --pty --x11 comsol -3drend sw&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .NET Core ===&lt;br /&gt;
==== Load .NET ====&lt;br /&gt;
 mozes@[eunomia] ~ $ module load dotNET-Core-SDK&lt;br /&gt;
==== create an application ====&lt;br /&gt;
Following instructions from [https://docs.microsoft.com/en-us/dotnet/core/tutorials/using-with-xplat-cli here], we'll create a simple 'Hello World' application&lt;br /&gt;
 mozes@[eunomia] ~ $ mkdir Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~ $ cd Hello&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ export DOTNET_SKIP_FIRST_TIME_EXPERIENCE=true&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet new console&lt;br /&gt;
 The template &amp;quot;Console Application&amp;quot; was created successfully.&lt;br /&gt;
 &lt;br /&gt;
 Processing post-creation actions...&lt;br /&gt;
 Running 'dotnet restore' on /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Restoring packages for /homes/mozes/Hello/Hello.csproj...&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.props.&lt;br /&gt;
  Generating MSBuild file /homes/mozes/Hello/obj/Hello.csproj.nuget.g.targets.&lt;br /&gt;
  Restore completed in 358.43 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
 &lt;br /&gt;
 Restore succeeded.&lt;br /&gt;
&lt;br /&gt;
==== Edit your program ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ vi Program.cs&lt;br /&gt;
==== Run your .NET application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet run&lt;br /&gt;
 Hello World!&lt;br /&gt;
==== Build and run the built application ====&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet build&lt;br /&gt;
 Microsoft (R) Build Engine version 15.8.169+g1ccb72aefa for .NET Core&lt;br /&gt;
 Copyright (C) Microsoft Corporation. All rights reserved.&lt;br /&gt;
 &lt;br /&gt;
  Restore completed in 106.12 ms for /homes/mozes/Hello/Hello.csproj.&lt;br /&gt;
  Hello -&amp;gt; /homes/mozes/Hello/bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 &lt;br /&gt;
 Build succeeded.&lt;br /&gt;
    0 Warning(s)&lt;br /&gt;
    0 Error(s)&lt;br /&gt;
 &lt;br /&gt;
 Time Elapsed 00:00:02.86&lt;br /&gt;
&lt;br /&gt;
 mozes@[eunomia] ~/Hello $ dotnet bin/Debug/netcoreapp2.1/Hello.dll&lt;br /&gt;
 Hello World!&lt;br /&gt;
&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
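As a rough sketch of the usual pattern (the package name and autotools steps below are hypothetical; always follow your software's own install instructions), a home-directory install typically means choosing a prefix you own and putting its bin directory on your PATH:&lt;br /&gt;

```shell
# Create a personal install prefix.
mkdir -p "$HOME/local"

# Typical from-source steps for an autotools-based package (hypothetical name):
#   tar xzf mysoftware-1.0.tar.gz
#   cd mysoftware-1.0
#   ./configure --prefix="$HOME/local"
#   make && make install

# Put the install location on your PATH (add this line to ~/.bashrc to persist).
export PATH="$HOME/local/bin:$PATH"
echo "$PATH" | grep -q "$HOME/local/bin" && echo "PATH updated"
```

The key idea is the --prefix (or equivalent) option: it redirects everything the installer would have written to system locations into a directory you own, so no 'sudo' is needed.&lt;br /&gt;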
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=387</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=387"/>
		<updated>2018-08-15T21:44:31Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Add Twitter&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|High-Performance Computing (HPC)]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research, which is a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and his or her collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster-computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of CentOS Linux servers coordinated by the [https://slurm.schedmd.com/ Slurm] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A comparatively small [[Hadoop]] cluster&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''beocat#beocat''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use Slurm for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SlurmBasics]] page for an introduction on how to submit your first job. If you are already familiar with Slurm, we also have an [[AdvancedSlurm]] page covering fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H4&amp;gt;Get an account at  [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/]&amp;lt;BR&amp;gt;&lt;br /&gt;
Read about  [[Installed software]] and languages&amp;lt;BR&amp;gt;&lt;br /&gt;
Learn about Slurm at [[SlurmBasics]] and [[AdvancedSlurm]]&amp;lt;BR&amp;gt;&lt;br /&gt;
&amp;lt;/H4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This ensures your support request is entered into our tracker and gets your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode, which can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [http://freenode.net/using_the_network.shtml freenode chat servers] in the channel #beocat. This is useful ''especially'' if you have a quick question; you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' and/or '''kylehutson''' in your message, and it should grab our attention. IRC is available from a web browser [[Special:WebChat|here.]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H4&amp;gt;&lt;br /&gt;
Again, when you email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu] please give us the job ID number, the path and script name for the job, and a full description of the problem.  It may also be useful to include the output to 'module list'.&lt;br /&gt;
&amp;lt;/H4&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Twitter ==&lt;br /&gt;
We are now on [https://twitter.com/KSUBeocat Twitter]. Follow us to find out the latest from Beocat, or tweet at us with quick questions. This won't replace the mailing list for major announcements, but will be used for more minor notices.&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access? ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributions to Beocat will prioritize your access to Beocat.&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar&lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com&lt;br /&gt;
|color=711616&lt;br /&gt;
|view=AGENDA&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=SlurmBasics&amp;diff=381</id>
		<title>SlurmBasics</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=SlurmBasics&amp;diff=381"/>
		<updated>2018-05-10T18:36:39Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Fixed a typo in example&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== The CentOS/Slurm nodes ==&lt;br /&gt;
&lt;br /&gt;
We converted Beocat from Gentoo Linux to CentOS Linux on December 26, 2017.  Any applications or libraries from the old system must be recompiled.  We also converted Beocat to use the Slurm scheduler instead of SGE, so you will need to convert your old qsub scripts to sbatch scripts.  We have developed tools to make this process as easy as possible.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;H3&amp;gt;Using Modules&amp;lt;/H3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you're using a common code that others may also be using, we may already have it compiled in a module.  You can list the available modules and load an application as in the example below for VASP.&lt;br /&gt;
&lt;br /&gt;
eos&amp;gt;  &amp;lt;B&amp;gt;module avail&amp;lt;/B&amp;gt;&amp;lt;BR&amp;gt;&lt;br /&gt;
eos&amp;gt;  &amp;lt;B&amp;gt;module load VASP&amp;lt;/B&amp;gt;&amp;lt;BR&amp;gt;&lt;br /&gt;
eos&amp;gt;  &amp;lt;B&amp;gt;module list&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When a module gets loaded, all the necessary libraries are also loaded and the paths to the libraries and executables are set up automatically.  Loading VASP, for example, also loads the OpenMPI library needed to run it and adds the path to the MPI commands and VASP executables.  To see how the path is set up, try executing &amp;lt;B&amp;gt;&amp;lt;I&amp;gt;which vasp_std&amp;lt;/I&amp;gt;&amp;lt;/B&amp;gt;.  The module system also allows you to easily switch between different versions of applications, libraries, or languages.&lt;br /&gt;
&lt;br /&gt;
If you are using a custom code or one that is not installed as a module, you'll need to recompile it yourself.  This process is easier under CentOS, as some of the work just involves loading the necessary set of modules.  The first step is to decide whether to use the Intel compiler toolchain or the GNU toolchain, each of which includes the compilers and other math libraries.  The module commands for each are below, and you can load one automatically when you log in by adding the corresponding module load statement to your .bashrc file.  See &amp;lt;B&amp;gt;/homes/daveturner/.bashrc&amp;lt;/B&amp;gt; as an example of where I put the module load statements.&lt;br /&gt;
&lt;br /&gt;
To load the Intel compiler tool chain including the Intel Math Kernel Library:&amp;lt;BR&amp;gt;&lt;br /&gt;
eos&amp;gt;  &amp;lt;B&amp;gt;module load iomkl&amp;lt;/B&amp;gt;&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To load the GNU compiler tool chain including OpenBLAS, FFTW, and ScalaPack load foss (free open source software):&amp;lt;BR&amp;gt;&lt;br /&gt;
eos&amp;gt;  &amp;lt;B&amp;gt;module load foss&amp;lt;/B&amp;gt;&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Modules provide an easy way to set up the compilers and libraries you may need to compile your code.  Beyond that there are many different ways to compile codes so you'll just need to follow the directions.  If you need help you can always email us at &amp;lt;B&amp;gt;beocat@cs.ksu.edu&amp;lt;/B&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H3&amp;gt;Converting your qsub script for sbatch using &amp;lt;I&amp;gt;kstat.convert&amp;lt;/I&amp;gt;&amp;lt;/H3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you already have a qsub script, I have created a new perl program called kstat.convert that will automatically convert your qsub script over to an sbatch script.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;kstat.convert --sge qsub_script.sh --slurm slurm_script.sh&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below is an example of a simple qsub script and the resulting sbatch script after conversion.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#$ -j y&lt;br /&gt;
#$ -cwd&lt;br /&gt;
#$ -N netpipe&lt;br /&gt;
#$ -P KSU-CIS-HPC&lt;br /&gt;
&lt;br /&gt;
#$ -l mem=4G&lt;br /&gt;
#$ -l h_rt=100:00:00&lt;br /&gt;
#$ -pe single 32&lt;br /&gt;
&lt;br /&gt;
#$ -M daveturner@ksu.edu&lt;br /&gt;
#$ -m ab&lt;br /&gt;
&lt;br /&gt;
mpirun -np $NSLOTS NPmpi -o np.out&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash -l&lt;br /&gt;
#SBATCH --job-name=netpipe&lt;br /&gt;
&lt;br /&gt;
#SBATCH --mem-per-cpu=4G   # Memory per core, use --mem= for memory per node&lt;br /&gt;
#SBATCH --time=4-04:00:00   # Use the form DD-HH:MM:SS&lt;br /&gt;
#SBATCH --nodes=1&lt;br /&gt;
#SBATCH --ntasks-per-node=32&lt;br /&gt;
&lt;br /&gt;
#SBATCH --mail-user=daveturner@ksu.edu&lt;br /&gt;
#SBATCH --mail-type=ALL   # same as =BEGIN,FAIL,END&lt;br /&gt;
&lt;br /&gt;
mpirun -np $SLURM_NPROCS NPmpi -o np.out&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The sbatch file uses &amp;lt;B&amp;gt;#SBATCH&amp;lt;/B&amp;gt; to identify command options for the scheduler where the qsub file uses &amp;lt;B&amp;gt;#$&amp;lt;/B&amp;gt;.  Most options are similar but simply use a different syntax.  The memory can still be defined on a per core basis as with SGE, or you can use &amp;lt;B&amp;gt;--mem=128G&amp;lt;/B&amp;gt; to specify the total memory per node if you'd prefer.  The &amp;lt;B&amp;gt;--nodes=&amp;lt;/B&amp;gt; and &amp;lt;B&amp;gt;--ntasks-per-node=&amp;lt;/B&amp;gt; provide an easy way to request the core configuration you want.  If your code can be distributed across multiple nodes and you don't care what the arrangement is, you can instead just specify the number of cores using &amp;lt;B&amp;gt;--ntasks=&amp;lt;/B&amp;gt;.  For more in depth documentation on converting from SGE to Slurm follow the links below:&lt;br /&gt;
&lt;br /&gt;
https://srcc.stanford.edu/sge-slurm-conversion&amp;lt;BR&amp;gt;&lt;br /&gt;
https://slurm.schedmd.com/sbatch.html&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H3&amp;gt;Submitting jobs to Slurm&amp;lt;/H3&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once your qsub script has been converted to an sbatch script and you have an application compiled for CentOS, you can submit the job using the &amp;lt;B&amp;gt;sbatch&amp;lt;/B&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
eos&amp;gt; &amp;lt;B&amp;gt;sbatch sbatch_script.sh&amp;lt;/B&amp;gt;&amp;lt;BR&amp;gt;&lt;br /&gt;
eos&amp;gt; &amp;lt;B&amp;gt;kstat  --me&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This will submit the script and show you a list of your jobs that are running and the jobs you have in the queue.  By default the output for each job will go into a &amp;lt;B&amp;gt;slurm-###.out&amp;lt;/B&amp;gt; file where ### is the job ID number.  If you need to kill a job, you can use the &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt; command with the job ID number.&lt;br /&gt;
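Since each submission reports the new job ID, a common shell idiom is to capture it in a variable for later use with &amp;lt;B&amp;gt;kstat -j&amp;lt;/B&amp;gt; or &amp;lt;B&amp;gt;scancel&amp;lt;/B&amp;gt;. A minimal sketch; the sbatch output line is simulated here so the example runs anywhere, and on Beocat you would instead use &amp;lt;B&amp;gt;output=$(sbatch sbatch_script.sh)&amp;lt;/B&amp;gt;:&lt;br /&gt;

```shell
#!/bin/bash
# Simulated sbatch acceptance message; on the cluster, capture the real
# one with: output=$(sbatch sbatch_script.sh)
output="Submitted batch job 1483446"
jobid=${output##* }          # keep only the last word, the job ID
echo "job id: $jobid"
echo "to cancel: scancel $jobid"
```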
&lt;br /&gt;
== Submitting your first job ==&lt;br /&gt;
To submit a job to run under Slurm, we use the &amp;lt;B&amp;gt;&amp;lt;I&amp;gt;sbatch&amp;lt;/I&amp;gt;&amp;lt;/B&amp;gt; (submit batch) command.  The scheduler finds the optimum place for your job to run. With over 300 nodes and 7500 cores to schedule, as well as differing priorities, hardware, and individual resources, the scheduler's job is not trivial and it can take some time for a job to start even when there are empty nodes available.&lt;br /&gt;
&lt;br /&gt;
There are a few things you'll need to know before running sbatch.&lt;br /&gt;
* How many cores you need. Note that unless your program is written to use multiple cores (called &amp;quot;threading&amp;quot;), asking for more cores will not speed up your job. This is a common misconception. '''Beocat will not magically make your program use multiple cores!''' For this reason the default is 1 core.&lt;br /&gt;
* How much time you need. New Beocat users often neglect to specify a time requirement; the default is one hour, and we then get asked why the job died after one hour. We usually point them to the [[FAQ]].&lt;br /&gt;
* How much memory you need. The default is 1 GB. If your job uses significantly more memory than you request, it will be killed.&lt;br /&gt;
* Any advanced options. See the [[AdvancedSlurm]] page for these requests. For our basic examples here, we will ignore these.&lt;br /&gt;
&lt;br /&gt;
So let's now create a small script to test our ability to submit jobs. Create the following file (either by copying it to Beocat or by editing a text file directly) and name it &amp;lt;code&amp;gt;myhost.sh&amp;lt;/code&amp;gt;. Both of these methods are documented on our [[LinuxBasics]] page.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/sh&lt;br /&gt;
srun hostname&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Be sure to make it executable&lt;br /&gt;
 chmod u+x myhost.sh&lt;br /&gt;
&lt;br /&gt;
So, now let's submit it as a job and see what happens. Here I'm going to use five options:&lt;br /&gt;
* &amp;lt;code&amp;gt;--mem-per-cpu=&amp;lt;/code&amp;gt; tells how much memory I need. In my example, I'm using our system minimum of 512 MB, which is more than enough. Note that your memory request is '''per core''', which doesn't make much difference for this example, but will as you submit more complex jobs.&lt;br /&gt;
* &amp;lt;code&amp;gt;--time=&amp;lt;/code&amp;gt; tells how much runtime I need. This can be in the form of &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot;, or &amp;quot;days-hours:minutes:seconds&amp;quot;. This is a very short job, so 1 minute should be plenty. The time limit can't be changed after the job has started, so please make sure you request a sufficient amount.&lt;br /&gt;
* &amp;lt;code&amp;gt;--cpus-per-task=1&amp;lt;/code&amp;gt; tells Slurm that I need only a single core per task. The [[AdvancedSlurm]] page has much more on the &amp;quot;cpus-per-task&amp;quot; switch.&lt;br /&gt;
* &amp;lt;code&amp;gt;--ntasks=1&amp;lt;/code&amp;gt; tells Slurm that I only need to run 1 task. The [[AdvancedSlurm]] page has much more on the &amp;quot;ntasks&amp;quot; switch.&lt;br /&gt;
* &amp;lt;code&amp;gt;--nodes=1&amp;lt;/code&amp;gt; tells Slurm that this must be run on one machine. The [[AdvancedSlurm]] page has much more on the &amp;quot;nodes&amp;quot; switch.&lt;br /&gt;
* As a more complex example, &amp;lt;code&amp;gt;--nodes=4 --ntasks-per-node=16 --constraint=elves&amp;lt;/code&amp;gt; would request 4 nodes with 16 cores on each, restricted to the Elves.&lt;br /&gt;
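The &amp;lt;code&amp;gt;--time=&amp;lt;/code&amp;gt; formats are easy to misread, so it can help to check how many seconds a request actually buys you. A small sketch; &amp;lt;code&amp;gt;to_seconds&amp;lt;/code&amp;gt; is a hypothetical helper for the DD-HH:MM:SS form, not a Slurm command:&lt;br /&gt;

```shell
#!/bin/bash
# Hypothetical helper (not part of Slurm): convert a --time= value in
# DD-HH:MM:SS form into total seconds, to sanity-check a request.
to_seconds () {
  local days=${1%%-*} rest=${1#*-}
  local h=${rest%%:*} m s
  rest=${rest#*:}; m=${rest%%:*}; s=${rest#*:}
  # 10# forces base-10 so leading zeros (e.g. "04") are not read as octal
  echo $(( (10#$days * 24 + 10#$h) * 3600 + 10#$m * 60 + 10#$s ))
}
to_seconds 4-04:00:00   # the 4-day-4-hour request from the example above
```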
&lt;br /&gt;
 % '''ls'''&lt;br /&gt;
 myhost.sh&lt;br /&gt;
 % '''sbatch --time=1 --mem-per-cpu=512M --cpus-per-task=1 --ntasks=1 --nodes=1 ./myhost.sh'''&lt;br /&gt;
 salloc: Granted job allocation 1483446&lt;br /&gt;
&lt;br /&gt;
Since this is such a small job, it is likely to be scheduled almost immediately, so a minute or so later, I now see&lt;br /&gt;
 % '''ls'''&lt;br /&gt;
 myhost.sh&lt;br /&gt;
 slurm-1483446.out&lt;br /&gt;
&lt;br /&gt;
 % '''cat slurm-1483446.out'''&lt;br /&gt;
 mage03&lt;br /&gt;
&lt;br /&gt;
== Monitoring Your Job ==&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;B&amp;gt;kstat&amp;lt;/B&amp;gt; perl script has been developed at K-State to provide you with all the available information about your jobs on Beocat.  &amp;lt;B&amp;gt;kstat --help&amp;lt;/B&amp;gt; will give you a full description of how to use it.&lt;br /&gt;
The Slurm version of kstat is very similar to the SGE version, with two exceptions: the actual memory usage of each job is not always available, in which case the&lt;br /&gt;
memory requested is reported instead, and the memory usage shown for each node is not always accurate, since Slurm includes disk cache.  We are continuing to look&lt;br /&gt;
for better ways to get the memory usage for each job, but at the moment you may need to use [http://ganglia.beocat.ksu.edu/ Ganglia] and look at the&lt;br /&gt;
memory graph for the node you are running on to get an accurate idea of the memory being used by your application.&lt;br /&gt;
&lt;br /&gt;
Eos&amp;gt;  kstat --help&lt;br /&gt;
&lt;br /&gt;
 USAGE: kstat [-q] [-c] [-g] [-l] [-u user] [-p NaMD] [-j 1234567] [--part partition]&lt;br /&gt;
       kstat alone dumps all info except for the core summaries&lt;br /&gt;
       choose -q -c for only specific info on queued or core summaries.&lt;br /&gt;
       then specify any searchables for the user, program name, or job id&lt;br /&gt;
 &lt;br /&gt;
 kstat                 info on running and queued jobs&lt;br /&gt;
 kstat -q              info on the queued jobs only&lt;br /&gt;
 kstat -c              core usage for each user&lt;br /&gt;
 kstat -g              gpu nodes only&lt;br /&gt;
 kstat -l -h           long list - prints full node list&lt;br /&gt;
 kstat -u daveturner   job info for one user only&lt;br /&gt;
 kstat --me            job info for my jobs only&lt;br /&gt;
 kstat -j 1234567      info on a given job id&lt;br /&gt;
 kstat --nocolor       do not use any color&lt;br /&gt;
 &lt;br /&gt;
 --------------------------------------------------------------------------&lt;br /&gt;
   Multi-node jobs are highlighted in Magenta&lt;br /&gt;
      The switch and nodes/switch are on the right&lt;br /&gt;
      highlighted in Yellow when nodes are spread across multiple switches&lt;br /&gt;
   Shared jobs are highlighted in Cyan&lt;br /&gt;
   Memory requested is reported along with the total used when available&lt;br /&gt;
      Total RSS / Total VMSize / Total requested&lt;br /&gt;
   Runtime is colorized with yellow then red for jobs nearing their time limit&lt;br /&gt;
   Time in the queue is colorized yellow then red for jobs waiting long times&lt;br /&gt;
 --------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
kstat can be used to give you a summary of your jobs that are running and in the queue:&amp;lt;BR&amp;gt;&lt;br /&gt;
&amp;lt;B&amp;gt;Eos&amp;gt;  kstat --me&amp;lt;/B&amp;gt;&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;&lt;br /&gt;
&amp;lt;font color=Brown&amp;gt;Hero43 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=Blue&amp;gt;24 of 24 cores &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt;Load 23.4 / 24 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=Red&amp;gt;495.3 / 512 GB used&amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;font color=lightgreen&amp;gt;daveturner &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt;unafold &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1234567 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=cyan&amp;gt;1 core &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=green&amp;gt;running &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 4gb req &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 0 d  5 h 35 m &amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;font color=green&amp;gt;daveturner &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt;octopus &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1234568 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=cyan&amp;gt;16 core &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=green&amp;gt;running &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt; 128gb req &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 8 d 15 h 42 m &amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;font color=green&amp;gt; ##################################   BeoCat Queue    ################################### &amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&amp;lt;font color=green&amp;gt;daveturner &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt;NetPIPE &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; 1234569 &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=cyan&amp;gt;2 core &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=red&amp;gt; PD &amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 2h &amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 4gb req &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;/font&amp;gt;&lt;br /&gt;
&amp;lt;font color=black&amp;gt; 0 d 1 h 2 m &amp;lt;/font&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;kstat&amp;lt;/b&amp;gt; produces a separate line for each host.  Use &amp;lt;b&amp;gt;kstat -h&amp;lt;/b&amp;gt; to see information on all hosts without the jobs.&lt;br /&gt;
For the example above we are listing our jobs and the hosts they are on.&lt;br /&gt;
&lt;br /&gt;
Core usage - yellow for empty, red for empty on owned nodes, cyan for partially used, blue for all cores used.&amp;lt;BR&amp;gt;&lt;br /&gt;
Load level - yellow or yellow background indicates the node is being inefficiently used.  Red just means more threads than cores.&amp;lt;br&amp;gt;&lt;br /&gt;
Memory usage - yellow or red means most memory is used.&amp;lt;BR&amp;gt;&lt;br /&gt;
If the node is owned the group name will be in orange on the right.  Killable jobs of 24 hours or less can still be run on those nodes.&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Each job line will contain the username, program name, job ID, number of cores, the status which may be colored red for killable jobs, &lt;br /&gt;
the maximum memory used or memory requested, and the amount of time the job has run.  &lt;br /&gt;
Jobs in the queue may contain information on the requested memory and run time, priority access, constraints, and&lt;br /&gt;
how long the job has been in the queue.&lt;br /&gt;
In this case, I have 2 jobs running on Hero43.  &amp;lt;i&amp;gt;unafold&amp;lt;/i&amp;gt; is using 1 core while &amp;lt;i&amp;gt;octopus&amp;lt;/i&amp;gt; is using 16 cores.  Slurm did not provide&lt;br /&gt;
any information on the actual memory use, so the memory request is reported.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;Detailed information about a single job&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
kstat can provide a great deal of information on a particular job, including a very rough estimate of when it will run.  This estimate is a worst-case scenario, as it will&lt;br /&gt;
be revised as other jobs finish early.  This is a good way to check for job submission problems before contacting us.  kstat colorizes the more important&lt;br /&gt;
information to make it easier to identify.&lt;br /&gt;
&lt;br /&gt;
Eos&amp;gt;  kstat -j 157054&lt;br /&gt;
 &lt;br /&gt;
 ##################################   Beocat Queue    ###################################&lt;br /&gt;
  daveturner  netpipe     157054   64 cores  PD       dwarves fabric  CS HPC     8gb req   0 d  0 h  0 m&lt;br /&gt;
 &lt;br /&gt;
 JobId 157054  Job Name  netpipe&lt;br /&gt;
   UserId=daveturner GroupId=daveturner_users(2117) MCS_label=N/A&lt;br /&gt;
   Priority=11112 Nice=0 Account=ksu-cis-hpc QOS=normal&lt;br /&gt;
   Status=PENDING Reason=Resources Dependency=(null)&lt;br /&gt;
   Requeue=1 Restarts=0 BatchFlag=1 Reboot=0 ExitCode=0:0&lt;br /&gt;
   RunTime=00:00:00 TimeLimit=00:40:00 TimeMin=N/A&lt;br /&gt;
   SubmitTime=2018-02-02T18:18:31 EligibleTime=2018-02-02T18:18:31&lt;br /&gt;
   Estimated Start Time is 2018-02-03T06:17:49 EndTime=2018-02-03T06:57:49 Deadline=N/A&lt;br /&gt;
   PreemptTime=None SuspendTime=None SecsPreSuspend=0&lt;br /&gt;
   Partitions killable.q,ksu-cis-hpc.q AllocNode:Sid=eos:1761&lt;br /&gt;
   ReqNodeList=(null) ExcNodeList=(null)&lt;br /&gt;
   NodeList=(null) SchedNodeList=dwarf[01-02]&lt;br /&gt;
   NumNodes=2-2 NumCPUs=64 NumTasks=64 CPUs/Task=1 ReqB:S:C:T=0:0:*:*&lt;br /&gt;
   TRES 2 nodes 64 cores 8192  mem gres/fabric 2&lt;br /&gt;
   Socks/Node=* NtasksPerN:B:S:C=32:0:*:* CoreSpec=*&lt;br /&gt;
   MinCPUsNode=32 MinMemoryNode=4G MinTmpDiskNode=0&lt;br /&gt;
   Constraint=dwarves DelayBoot=00:00:00&lt;br /&gt;
   Gres=fabric Reservation=(null)&lt;br /&gt;
   OverSubscribe=OK Contiguous=0 Licenses=(null) Network=(null)&lt;br /&gt;
   Slurm script  /homes/daveturner/perf/NetPIPE-5.x/sb.np&lt;br /&gt;
   WorkDir=/homes/daveturner/perf/NetPIPE-5.x&lt;br /&gt;
   StdErr=/homes/daveturner/perf/NetPIPE-5.x/0.o157054&lt;br /&gt;
   StdIn=/dev/null&lt;br /&gt;
   StdOut=/homes/daveturner/perf/NetPIPE-5.x/0.o157054&lt;br /&gt;
   Switches=1@00:05:00&lt;br /&gt;
 &lt;br /&gt;
 #!/bin/bash -l&lt;br /&gt;
 #SBATCH --job-name=netpipe&lt;br /&gt;
 #SBATCH -o 0.o%j&lt;br /&gt;
 #SBATCH --time=0:40:00&lt;br /&gt;
 #SBATCH --mem=4G&lt;br /&gt;
 #SBATCH --switches=1&lt;br /&gt;
 #SBATCH --nodes=2&lt;br /&gt;
 #SBATCH --constraint=dwarves&lt;br /&gt;
 #SBATCH --ntasks-per-node=32&lt;br /&gt;
 #SBATCH --gres=fabric:roce:1&lt;br /&gt;
 &lt;br /&gt;
 host=`echo $SLURM_JOB_NODELIST | sed s/[^a-z0-9]/\ /g | cut -f 1 -d ' '`&lt;br /&gt;
 nprocs=$SLURM_NTASKS&lt;br /&gt;
 openmpi_hostfile.pl $SLURM_JOB_NODELIST 1 hf.$host&lt;br /&gt;
 opts=&amp;quot;--printhostnames --quick --pert 3&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;*******************************************************************&amp;quot;&lt;br /&gt;
 echo &amp;quot;Running on $SLURM_NNODES nodes $nprocs cores on nodes $SLURM_JOB_NODELIST&amp;quot;&lt;br /&gt;
 echo &amp;quot;*******************************************************************&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 mpirun -np 2 --hostfile hf.$host NPmpi $opts -o np.${host}.mpi&lt;br /&gt;
 mpirun -np 2 --hostfile hf.$host NPmpi $opts -o np.${host}.mpi.bi --async --bidir&lt;br /&gt;
 mpirun -np $nprocs NPmpi $opts -o np.${host}.mpi$nprocs --async --bidir&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;Completed jobs and memory usage&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
kstat -d #&lt;br /&gt;
&lt;br /&gt;
This will provide information on the jobs you have currently running and those that have completed&lt;br /&gt;
in the last '#' days.  This is currently the only reliable way to get the memory used per node for your job.&lt;br /&gt;
This also provides information on whether the job completed normally, was canceled with &amp;lt;I&amp;gt;scancel&amp;lt;/I&amp;gt;, &lt;br /&gt;
timed out, or was killed because it exceeded its memory request.&lt;br /&gt;
&lt;br /&gt;
Eos&amp;gt;  kstat -d 10&lt;br /&gt;
&lt;br /&gt;
 ###########################  sacct -u daveturner  for 10 days  ###########################&lt;br /&gt;
                                      max gb used on a node /   gb requested per node&lt;br /&gt;
  193037   ADF         dwarf43           1 n  32 c   30.46gb/100gb    05:15:34  COMPLETED&lt;br /&gt;
  193289   ADF         dwarf33           1 n  32 c   26.42gb/100gb    00:50:43  CANCELLED&lt;br /&gt;
  195171   ADF         dwarf44           1 n  32 c   56.81gb/120gb    14:43:35  COMPLETED&lt;br /&gt;
  209518   matlab      dwarf36           1 n   1 c    0.00gb/  4gb    00:00:02  FAILED&lt;br /&gt;
&lt;br /&gt;
&amp;lt;B&amp;gt;Summary of core usage&amp;lt;/B&amp;gt;&lt;br /&gt;
&lt;br /&gt;
kstat can also provide a listing of the core usage and cores requested for each user.&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Eos&amp;gt;  kstat -c&lt;br /&gt;
 &lt;br /&gt;
 ##############################   Core usage    ###############################&lt;br /&gt;
   antariksh       1512 cores   %25.1 used     41528 cores queued&lt;br /&gt;
   bahadori         432 cores   % 7.2 used        80 cores queued&lt;br /&gt;
   eegoetz            0 cores   % 0.0 used         2 cores queued&lt;br /&gt;
   fahrialkan        24 cores   % 0.4 used        32 cores queued&lt;br /&gt;
   gowri             66 cores   % 1.1 used        32 cores queued&lt;br /&gt;
   jeffcomer        160 cores   % 2.7 used         0 cores queued&lt;br /&gt;
   ldcoates12        80 cores   % 1.3 used       112 cores queued&lt;br /&gt;
   lukesteg         464 cores   % 7.7 used         0 cores queued&lt;br /&gt;
   mike5454        1060 cores   %17.6 used       852 cores queued&lt;br /&gt;
   nilusha          344 cores   % 5.7 used         0 cores queued&lt;br /&gt;
   nnshan2014       136 cores   % 2.3 used         0 cores queued&lt;br /&gt;
   ploetz           264 cores   % 4.4 used        60 cores queued&lt;br /&gt;
   sadish           812 cores   %13.5 used         0 cores queued&lt;br /&gt;
   sandung           72 cores   % 1.2 used        56 cores queued&lt;br /&gt;
   zhiguang          80 cores   % 1.3 used       688 cores queued&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you want to read more, continue on to our [[AdvancedSlurm]] page.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=373</id>
		<title>AdvancedSlurm</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=AdvancedSlurm&amp;diff=373"/>
		<updated>2018-03-13T16:27:19Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Clarified how to delete a job&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SlurmBasics]] page, we have a couple other requestable resources:&lt;br /&gt;
 Valid gres options are:&lt;br /&gt;
 gpu[[:type]:count]&lt;br /&gt;
 fabric[[:type]:count]&lt;br /&gt;
Generally, if you don't know whether you need a particular resource, you should use the default. The list of valid gres options can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;srun --gres=help&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Fabric ===&lt;br /&gt;
We currently offer 3 &amp;quot;fabrics&amp;quot; as request-able resources in Slurm. The &amp;quot;count&amp;quot; specified is the line-rate (in Gigabits-per-second) of the connection on the node.&lt;br /&gt;
==== Infiniband ====&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. InfiniBand does absolutely no good if running on a single machine. InfiniBand is a high-speed host-to-host communication fabric. It is (most-often) used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could run just fine, except that the submitter requested InfiniBand, and all the nodes with InfiniBand were currently busy. In fact, some of our fastest nodes do not have InfiniBand, so by requesting it when you don't need it, you are actually slowing down your job. To request Infiniband, add &amp;lt;tt&amp;gt;--gres=fabric:ib:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
==== ROCE ====&lt;br /&gt;
ROCE, like InfiniBand, is a high-speed host-to-host communication layer, again used most often with MPI. Most of our nodes are ROCE enabled, but this will let you guarantee the nodes allocated to your job will be able to communicate with ROCE. To request ROCE, add &amp;lt;tt&amp;gt;--gres=fabric:roce:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
&lt;br /&gt;
==== Ethernet ====&lt;br /&gt;
Ethernet is another communication fabric. All of our nodes are connected by ethernet; this option simply allows you to specify the interconnect speed. Speeds are selected in units of Gbps, with all nodes supporting 1Gbps or above. The currently available speeds for ethernet are: &amp;lt;tt&amp;gt;1, 10, 40, and 100&amp;lt;/tt&amp;gt;. To select nodes with 40Gbps and above, you could specify &amp;lt;tt&amp;gt;--gres=fabric:eth:40&amp;lt;/tt&amp;gt; on your sbatch command-line.  Since ethernet is used to connect to the file server, this can be used to select nodes that have fast access for applications doing heavy IO.  The Dwarves and Heroes have 40 Gbps ethernet and we measure single-stream performance as high as 20 Gbps, but if your application&lt;br /&gt;
requires heavy IO then you'd want to avoid the Moles which are connected to the file server with only 1 Gbps ethernet.&lt;br /&gt;
&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. We have a very small number of nodes which have GPUs installed. To request one of these GPUs on one of these nodes, add &amp;lt;tt&amp;gt;--gres=gpu:1&amp;lt;/tt&amp;gt; to your sbatch command-line.&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
Intranode jobs which run on many cores in the same node are easier to code and can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or Java's threads. Many times, your program will need to know how many cores you want it to use. Many will use all available cores if not told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the sbatch directives '&amp;lt;tt&amp;gt;--cpus-per-task=n&amp;lt;/tt&amp;gt;' or '&amp;lt;tt&amp;gt;--nodes=1 --ntasks-per-node=n&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $SLURM_CPUS_ON_NODE to tell how many cores you've been allocated.&lt;br /&gt;
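For OpenMP codes, the usual pattern is to pass the allocated core count through the &amp;lt;tt&amp;gt;OMP_NUM_THREADS&amp;lt;/tt&amp;gt; environment variable. A minimal sketch; the binary name &amp;lt;tt&amp;gt;my_openmp_prog&amp;lt;/tt&amp;gt; is hypothetical:&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8

# Inside a job, Slurm sets SLURM_CPUS_ON_NODE; outside a job it is
# unset, so fall back to 1 core for local testing.
export OMP_NUM_THREADS=${SLURM_CPUS_ON_NODE:-1}
echo "running with $OMP_NUM_THREADS thread(s)"
# ./my_openmp_prog   # hypothetical OpenMP binary
```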
&lt;br /&gt;
=== Internode (MPI) jobs ===&lt;br /&gt;
Communicating between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI, but MPI also allows an application to run on multiple cores within a node. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;--cpus-per-task=''n''&amp;lt;/tt&amp;gt;', you would use '&amp;lt;tt&amp;gt;--nodes=''n'' --ntasks-per-node=''m''&amp;lt;/tt&amp;gt;' ''or'' '&amp;lt;tt&amp;gt;--ntasks=''o''&amp;lt;/tt&amp;gt;' for your sbatch request, where ''n'' is the number of nodes you want, ''m'' is the number of cores per node you need, and ''o'' is the total number of cores you need.&lt;br /&gt;
&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--nodes=6 --ntasks-per-node=4&amp;lt;/tt&amp;gt; will give you 4 cores on each of 6 nodes for a total of 24 cores.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=40&amp;lt;/tt&amp;gt; will give you 40 cores spread across any number of nodes.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--ntasks=100&amp;lt;/tt&amp;gt; will give you 100 cores on any number of nodes.&lt;br /&gt;
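Inside the job script, Slurm exports the task count, so the mpirun line does not need to hard-code it. A sketch of the first example above; the MPI binary &amp;lt;tt&amp;gt;my_mpi_prog&amp;lt;/tt&amp;gt; is hypothetical:&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --nodes=6
#SBATCH --ntasks-per-node=4
#SBATCH --time=1:00:00
#SBATCH --mem-per-cpu=1G

# Slurm sets SLURM_NTASKS to 24 for this request; fall back to 24
# when testing outside a job.
np=${SLURM_NTASKS:-24}
echo "would run: mpirun -np $np my_mpi_prog"
```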
&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
Memory requests are easiest when they are specified '''per core'''. For instance, if you specified '&amp;lt;tt&amp;gt;--ntasks=20 --mem-per-cpu=20G&amp;lt;/tt&amp;gt;', your job would have access to 400GB of memory total.&lt;br /&gt;
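The arithmetic is worth spelling out, since the request is multiplied by the number of cores:&lt;br /&gt;

```shell
#!/bin/bash
# Total memory available to a multi-core job = ntasks * mem-per-cpu.
ntasks=20
mem_per_cpu_gb=20
total_gb=$(( ntasks * mem_per_cpu_gb ))
echo "${total_gb}GB total"   # 20 tasks x 20GB per core = 400GB
```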
== Other Handy Slurm Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs, aside from resource requests, is to have Slurm email you when a job changes its status. This may require two directives to sbatch:  &amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt;.&lt;br /&gt;
==== --mail-type ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-type&amp;lt;/tt&amp;gt; is used to tell Slurm to notify you about certain conditions. Options are comma separated and include the following&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Option!!Explanation&lt;br /&gt;
|-&lt;br /&gt;
| NONE || This disables event-based mail&lt;br /&gt;
|-&lt;br /&gt;
| BEGIN || Sends a notification when the job begins&lt;br /&gt;
|-&lt;br /&gt;
| END || Sends a notification when the job ends&lt;br /&gt;
|-&lt;br /&gt;
| FAIL || Sends a notification when the job fails.&lt;br /&gt;
|-&lt;br /&gt;
| REQUEUE || Sends a notification if the job is put back into the queue from a running state&lt;br /&gt;
|-&lt;br /&gt;
| STAGE_OUT || Burst buffer stage out and teardown completed&lt;br /&gt;
|-&lt;br /&gt;
| ALL || Equivalent to BEGIN,END,FAIL,REQUEUE,STAGE_OUT&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT || Notifies if the job ran out of time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_90 || Notifies when the job has used 90% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_80 || Notifies when the job has used 80% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| TIME_LIMIT_50 || Notifies when the job has used 50% of its allocated time&lt;br /&gt;
|-&lt;br /&gt;
| ARRAY_TASKS || Modifies the BEGIN, END, and FAIL options to apply to each array task (instead of notifying for the entire job)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==== --mail-user ====&lt;br /&gt;
&amp;lt;tt&amp;gt;--mail-user&amp;lt;/tt&amp;gt; is optional. It is only needed if you intend to send these job status updates to a different e-mail address than what you provided in the [https://acount.beocat.ksu.edu/user Account Request Page]. It is specified with the following arguments to sbatch: &amp;lt;tt&amp;gt;--mail-user=someone@somecompany.com&amp;lt;/tt&amp;gt;&lt;br /&gt;
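Putting the two directives together, a minimal sketch of a script that mails you when it ends, fails, or reaches 90% of its time limit (the address is a placeholder, as above):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --job-name=mailtest
#SBATCH --time=5
#SBATCH --mail-type=END,FAIL,TIME_LIMIT_90
#SBATCH --mail-user=someone@somecompany.com   # placeholder address

# The #SBATCH lines above are comments to the shell, so this script
# also runs unchanged outside of Slurm.
msg="job finished"
echo "$msg"
```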
&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-J ''JobName''&amp;lt;/tt&amp;gt;' sbatch directive.&lt;br /&gt;
&lt;br /&gt;
=== Separating Output Streams ===&lt;br /&gt;
Normally, Slurm will create one output file, containing both STDERR and STDOUT. If you want both of these to be separated into two files, you can use the sbatch directives '&amp;lt;tt&amp;gt;--output&amp;lt;/tt&amp;gt;' and '&amp;lt;tt&amp;gt;--error&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! option !! default !! example&lt;br /&gt;
|-&lt;br /&gt;
| --output || slurm-%j.out || slurm-206.out&lt;br /&gt;
|-&lt;br /&gt;
| --error || slurm-%j.out || slurm-206.out&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;tt&amp;gt;%j&amp;lt;/tt&amp;gt; above indicates that it should be replaced with the job id.&lt;br /&gt;
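For example, a sketch of a script that writes the two streams to separate, job-numbered files (the file names here are illustrative):&lt;br /&gt;

```shell
#!/bin/bash
#SBATCH --output=myjob-%j.out   # STDOUT only
#SBATCH --error=myjob-%j.err    # STDERR only

stdout_msg="normal output"
echo "$stdout_msg"              # lands in myjob-<jobid>.out
echo "an error message" >&2     # lands in myjob-<jobid>.err
```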
&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, sbatch runs your job from the directory you submitted it from, so programs that assume the current working directory will normally behave as expected. If you need the job to start in a different directory, use the '&amp;lt;tt&amp;gt;--chdir&amp;lt;/tt&amp;gt;' directive.&lt;br /&gt;
=== Running in a specific class of machine ===&lt;br /&gt;
If you want to run on a specific class of machines, e.g., the Dwarves, you can add the flag &amp;quot;--constraint=dwarves&amp;quot; to select any of those machines.&lt;br /&gt;
&lt;br /&gt;
=== Processor Constraints ===&lt;br /&gt;
Because Beocat is a heterogeneous cluster (we have machines from many years in the cluster), not all of our processors support every new and fancy feature. You might have some applications that require some newer processor features, so we provide a mechanism to request those.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;--constraint&amp;lt;/tt&amp;gt; tells the cluster to apply constraints to the types of nodes that the job can run on. For instance, we know of several applications that must be run on chips that have &amp;quot;AVX&amp;quot; processor extensions. To do that, you would specify &amp;lt;tt&amp;gt;--constraint=avx&amp;lt;/tt&amp;gt; on your ''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt;'' '''or''' ''&amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;'' command lines.&lt;br /&gt;
Using &amp;lt;tt&amp;gt;--constraint=AVX&amp;lt;/tt&amp;gt; will prohibit your job from running on the Mages, while &amp;lt;tt&amp;gt;--constraint=AVX2&amp;lt;/tt&amp;gt; will eliminate the Elves as well as the Mages.&lt;br /&gt;
&lt;br /&gt;
=== Slurm Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to set up your scripts correctly. Here is a listing of environment variables that Slurm makes available to you. Of course, the values of these variables will differ based on many factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
CUDA_VISIBLE_DEVICES=NoDevFiles&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
GPU_DEVICE_ORDINAL=NoDevFiles&lt;br /&gt;
HOSTNAME=dwarf37&lt;br /&gt;
SLURM_CHECKPOINT_IMAGE_DIR=/var/slurm/checkpoint&lt;br /&gt;
SLURM_CLUSTER_NAME=beocat&lt;br /&gt;
SLURM_CPUS_ON_NODE=1&lt;br /&gt;
SLURM_DISTRIBUTION=cyclic&lt;br /&gt;
SLURMD_NODENAME=dwarf37&lt;br /&gt;
SLURM_GTIDS=0&lt;br /&gt;
SLURM_JOB_CPUS_PER_NODE=1&lt;br /&gt;
SLURM_JOB_GID=163587&lt;br /&gt;
SLURM_JOB_ID=202&lt;br /&gt;
SLURM_JOBID=202&lt;br /&gt;
SLURM_JOB_NAME=slurm_simple.sh&lt;br /&gt;
SLURM_JOB_NODELIST=dwarf37&lt;br /&gt;
SLURM_JOB_NUM_NODES=1&lt;br /&gt;
SLURM_JOB_PARTITION=batch.q,killable.q&lt;br /&gt;
SLURM_JOB_QOS=normal&lt;br /&gt;
SLURM_JOB_UID=163587&lt;br /&gt;
SLURM_JOB_USER=mozes&lt;br /&gt;
SLURM_LAUNCH_NODE_IPADDR=10.5.16.37&lt;br /&gt;
SLURM_LOCALID=0&lt;br /&gt;
SLURM_MEM_PER_NODE=1024&lt;br /&gt;
SLURM_NNODES=1&lt;br /&gt;
SLURM_NODEID=0&lt;br /&gt;
SLURM_NODELIST=dwarf37&lt;br /&gt;
SLURM_NPROCS=1&lt;br /&gt;
SLURM_NTASKS=1&lt;br /&gt;
SLURM_PRIO_PROCESS=0&lt;br /&gt;
SLURM_PROCID=0&lt;br /&gt;
SLURM_SRUN_COMM_HOST=10.5.16.37&lt;br /&gt;
SLURM_SRUN_COMM_PORT=37975&lt;br /&gt;
SLURM_STEP_ID=0&lt;br /&gt;
SLURM_STEPID=0&lt;br /&gt;
SLURM_STEP_LAUNCHER_PORT=37975&lt;br /&gt;
SLURM_STEP_NODELIST=dwarf37&lt;br /&gt;
SLURM_STEP_NUM_NODES=1&lt;br /&gt;
SLURM_STEP_NUM_TASKS=1&lt;br /&gt;
SLURM_STEP_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_SUBMIT_DIR=/homes/mozes&lt;br /&gt;
SLURM_SUBMIT_HOST=dwarf37&lt;br /&gt;
SLURM_TASK_PID=23408&lt;br /&gt;
SLURM_TASKS_PER_NODE=1&lt;br /&gt;
SLURM_TOPOLOGY_ADDR=due1121-prod-core-40g-a1,due1121-prod-core-40g-c1.due1121-prod-sw-100g-a9.dwarf37&lt;br /&gt;
SLURM_TOPOLOGY_ADDR_PATTERN=switch.switch.node&lt;br /&gt;
SLURM_UMASK=0022&lt;br /&gt;
SRUN_DEBUG=3&lt;br /&gt;
TERM=screen-256color&lt;br /&gt;
TMPDIR=/tmp&lt;br /&gt;
USER=mozes&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is useful to know what hosts you have access to during a job; check SLURM_JOB_NODELIST for that. There are many useful environment variables here; we will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly used variables are $SLURM_CPUS_ON_NODE, $HOSTNAME, and $SLURM_JOB_ID.&lt;br /&gt;
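&lt;br /&gt;
As a sketch, a job script might use these variables to size a threaded application (&amp;lt;tt&amp;gt;myapp&amp;lt;/tt&amp;gt; and its &amp;lt;tt&amp;gt;--threads&amp;lt;/tt&amp;gt; flag are hypothetical):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# Report where we are running and with how many cores&lt;br /&gt;
echo &amp;quot;Job $SLURM_JOB_ID running on $HOSTNAME with $SLURM_CPUS_ON_NODE cores&amp;quot;&lt;br /&gt;
&lt;br /&gt;
# Use exactly the cores Slurm allocated on this node&lt;br /&gt;
myapp --threads=$SLURM_CPUS_ON_NODE&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;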
&lt;br /&gt;
== Running from a sbatch Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'sbatch --mem-per-cpu=2G --time=10:00 --cpus-per-task=8 -J MyJobTitle MyScript.sh'. How are you supposed to remember all of these options every time? The answer is to create a 'submit script', which records all of them for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample sbatch script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of a line means the line is ignored.&lt;br /&gt;
## However, in the case of sbatch, lines beginning with #SBATCH are commands&lt;br /&gt;
## for sbatch itself, so I have taken the convention here of starting *every*&lt;br /&gt;
## line with a '#'. Just delete the first '#' if you want to use that line,&lt;br /&gt;
## and then modify it to your own purposes. The only exception here is the&lt;br /&gt;
## first line, which *must* be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##SBATCH --mem-per-cpu=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime in DD-HH:MM:SS form. Default is 1 hour (1:00:00)&lt;br /&gt;
##SBATCH --time=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it.&lt;br /&gt;
##SBATCH --gres=fabric:ib:1&lt;br /&gt;
&lt;br /&gt;
## GPU directive. If You don't know what this is, you probably don't need it&lt;br /&gt;
##SBATCH --gres=gpu:1&lt;br /&gt;
&lt;br /&gt;
## number of cores/nodes:&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from emailing us at beocat@cs.ksu.edu to see how we can assist in&lt;br /&gt;
## getting your job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
##SBATCH --cpus-per-task=1&lt;br /&gt;
##SBATCH --cpus-per-task=12&lt;br /&gt;
##SBATCH --nodes=2 --tasks-per-node=1&lt;br /&gt;
##SBATCH --tasks=20&lt;br /&gt;
&lt;br /&gt;
## Constraints for this job. Maybe you need to run on the elves&lt;br /&gt;
##SBATCH --constraint=elves&lt;br /&gt;
## or perhaps you just need avx processor extensions&lt;br /&gt;
##SBATCH --constraint=avx&lt;br /&gt;
&lt;br /&gt;
## Output file name. Default is slurm-%j.out where %j is the job id.&lt;br /&gt;
##SBATCH --output=MyJobTitle.o%j&lt;br /&gt;
&lt;br /&gt;
## Split the errors into a separate file. Default is the same as output&lt;br /&gt;
##SBATCH --error=MyJobTitle.e%j&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##SBATCH -J MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&lt;br /&gt;
## Send email when certain criteria are met.&lt;br /&gt;
## Valid type values are NONE, BEGIN, END, FAIL, REQUEUE, ALL (equivalent to&lt;br /&gt;
## BEGIN, END, FAIL, REQUEUE,  and  STAGE_OUT),  STAGE_OUT  (burst buffer stage&lt;br /&gt;
## out and teardown completed), TIME_LIMIT, TIME_LIMIT_90 (reached 90 percent&lt;br /&gt;
## of time limit), TIME_LIMIT_80 (reached 80 percent of time limit),&lt;br /&gt;
## TIME_LIMIT_50 (reached 50 percent of time limit) and ARRAY_TASKS (send&lt;br /&gt;
## emails for each array task). Multiple type values may be specified in a&lt;br /&gt;
## comma separated list. Unless the  ARRAY_TASKS  option  is specified, mail&lt;br /&gt;
## notifications on job BEGIN, END and FAIL apply to a job array as a whole&lt;br /&gt;
## rather than generating individual email messages for each task in the job&lt;br /&gt;
## array.&lt;br /&gt;
##SBATCH --mail-type=ALL&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
## Default is to send the mail to the e-mail address entered on the account&lt;br /&gt;
## request form.&lt;br /&gt;
##SBATCH --mail-user=myemail@ksu.edu&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== File Access ==&lt;br /&gt;
Beocat has a variety of options for storing and accessing your files.  &lt;br /&gt;
Every user has a home directory for general use which is limited in size, has decent file access performance,&lt;br /&gt;
and will soon be backed up nightly.  Larger files should be stored in the /bulk subdirectories which have the same decent performance&lt;br /&gt;
but are not backed up.  The /scratch file system will soon be implemented on a Lustre file system that will provide very fast&lt;br /&gt;
temporary file access.  When fast IO is critical to application performance, the local disk on each node or a&lt;br /&gt;
RAM disk is the best option.&lt;br /&gt;
&lt;br /&gt;
===Home directory===&lt;br /&gt;
&lt;br /&gt;
Every user has a &amp;lt;tt&amp;gt;/homes/''username''&amp;lt;/tt&amp;gt; directory that they drop into when they log into Beocat.  &lt;br /&gt;
The home directory is for general use and provides decent performance for most file IO.  &lt;br /&gt;
Disk space in each home directory is limited to 1 TB, so larger files should be kept in the /bulk&lt;br /&gt;
directory, and there is a limit of 100,000 files in each subdirectory in your account.&lt;br /&gt;
This file system is fully redundant, so 3 specific hard disks would need to fail before any data was lost.&lt;br /&gt;
All files will soon be backed up nightly to a separate file server in Nichols Hall, so if you do accidentally &lt;br /&gt;
delete something it can be recovered.&lt;br /&gt;
&lt;br /&gt;
===Bulk directory===&lt;br /&gt;
&lt;br /&gt;
Each user also has a &amp;lt;tt&amp;gt;/bulk/''username''&amp;lt;/tt&amp;gt; directory where large files should be stored.&lt;br /&gt;
File access is the same speed as for the home directories, and the same limit of 100,000 files&lt;br /&gt;
per subdirectory applies.  There is no limit to the disk space you can use in your bulk directory,&lt;br /&gt;
but the files there will not be backed up.  They are still redundantly stored so you don't need to&lt;br /&gt;
worry about losing data to hardware failures, just don't delete something by accident. Unused files will be automatically removed after two years.&lt;br /&gt;
If you need to back up large files in the bulk directory, talk to Dan Andresen (dan@ksu.edu) about&lt;br /&gt;
purchasing some hard disks for archival storage.&lt;br /&gt;
&lt;br /&gt;
===Scratch file system===&lt;br /&gt;
&lt;br /&gt;
The /scratch file system will soon be using the Lustre software which is much faster than the&lt;br /&gt;
speed of the file access on /homes or /bulk.  In order to use scratch, you first need to make a&lt;br /&gt;
directory for yourself.  Scratch offers greater speed, no limit to the size of files nor the number&lt;br /&gt;
of files in each subdirectory.  It is meant as temporary space for prepositioning files and accessing them&lt;br /&gt;
during runs.  Once runs are completed, any files that need to be kept should be moved to your home&lt;br /&gt;
or bulk directories since files on the scratch file system get purged after 30 days.  Lustre is faster than&lt;br /&gt;
the home and bulk file systems in part because it does not redundantly store files by striping them&lt;br /&gt;
across multiple disks, so if a hard disk fails data will be lost.  When we get scratch set up to use Lustre&lt;br /&gt;
we will post the difference in file access rates.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
mkdir /scratch/$USER&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Local disk===&lt;br /&gt;
&lt;br /&gt;
If you are running on a single node, it may also be faster to access your files from the local disk&lt;br /&gt;
on that node.  Each job creates a subdirectory /tmp/job# where '#' is the job ID number on the&lt;br /&gt;
local disk of each node the job uses.  This can be accessed simply by writing to /tmp rather than&lt;br /&gt;
needing to use /tmp/job#.  &lt;br /&gt;
&lt;br /&gt;
You may need to copy files to the&lt;br /&gt;
local disk at the start of your script, or point your application's output directory&lt;br /&gt;
at the local disk. Copy any files you want to keep off the local disk before&lt;br /&gt;
the job finishes, since Slurm will remove all files in your job's directory on /tmp when the job&lt;br /&gt;
completes or aborts.  When we get the scratch file system working with Lustre, it may&lt;br /&gt;
end up being faster than accessing local disk, so we will post the access rates for each.  Use 'kstat -l -h'&lt;br /&gt;
to see how much /tmp space is available on each node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files to the tmp directory if needed&lt;br /&gt;
cp $input_files /tmp&lt;br /&gt;
&lt;br /&gt;
# Make an 'out' directory to pass to the app if needed&lt;br /&gt;
mkdir /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Example of running an app and passing the tmp directory in/out&lt;br /&gt;
app -input_directory /tmp -output_directory /tmp/out&lt;br /&gt;
&lt;br /&gt;
# Copy the 'out' directory back to the current working directory after the run&lt;br /&gt;
cp -rp /tmp/out .&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===RAM disk===&lt;br /&gt;
&lt;br /&gt;
If you need ultrafast access to files, you can use a RAM disk which is a file system set up in the &lt;br /&gt;
memory of the compute node you are running on.  The RAM disk is limited to the requested memory on that node, so you should account for this usage when you request &lt;br /&gt;
memory for your job. Below is an example of how to use the RAM disk.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Copy input files over if necessary&lt;br /&gt;
cp $any_input_files /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Run the application, possibly giving it the path to the RAM disk to use for output files&lt;br /&gt;
app -output_directory /dev/shm/&lt;br /&gt;
&lt;br /&gt;
# Copy files from the RAM disk to the current working directory and clean it up&lt;br /&gt;
cp /dev/shm/* .&lt;br /&gt;
rm -rf /dev/shm/*&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===When you leave KSU===&lt;br /&gt;
&lt;br /&gt;
If you are done with your account and leaving KSU, please clean up your directory, move any files&lt;br /&gt;
to your supervisor's account that need to be kept after you leave, and notify us so that we can disable your&lt;br /&gt;
account.  The easiest way to move your files to your supervisor's account is for them to set up&lt;br /&gt;
a subdirectory for you with the appropriate write permissions.  The example below shows moving &lt;br /&gt;
just a user's 'data' subdirectory to their supervisor.  The 'nohup' command is used so that the move will &lt;br /&gt;
continue even if your session gets disconnected.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# Supervisor:&lt;br /&gt;
mkdir /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
chmod ugo+w /bulk/$USER/$STUDENT_USERNAME&lt;br /&gt;
&lt;br /&gt;
# Student:&lt;br /&gt;
nohup mv /homes/$USER/data /bulk/$SUPERVISOR_USERNAME/$USER &amp;amp;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==File Sharing==&lt;br /&gt;
&lt;br /&gt;
This section will cover methods of sharing files with other users within Beocat and on remote systems.&lt;br /&gt;
&lt;br /&gt;
===Securing your home directory===&lt;br /&gt;
&lt;br /&gt;
By default your home directory is accessible to other users on Beocat for reading but not writing.  If you do not want others to have any&lt;br /&gt;
access to files in your home directory, you can set the permissions to restrict access to just yourself.&lt;br /&gt;
&lt;br /&gt;
 chmod go-rwx /homes/your_user_name&lt;br /&gt;
&lt;br /&gt;
This removes read, write, and execute permission for everyone but yourself.  Be aware that it may make it more difficult for us to help you out when&lt;br /&gt;
you run into problems.&lt;br /&gt;
&lt;br /&gt;
===Sharing files within your group===&lt;br /&gt;
&lt;br /&gt;
By default all your files and directories have a 'group' (shown by 'ls -l') that is your user name followed by _users.&lt;br /&gt;
In my case they have the group daveturner_users.&lt;br /&gt;
If your working group owns any nodes on Beocat, then you have a group name that can be used to securely share&lt;br /&gt;
files with others within your group.  Below is an example of creating a directory called 'share', changing its group&lt;br /&gt;
to ksu-cis-hpc (my group is ksu-cis-hpc, so I submit jobs with --partition=ksu-cis-hpc.q), then changing the permissions to restrict access to &lt;br /&gt;
just that group.&lt;br /&gt;
&lt;br /&gt;
 mkdir share&lt;br /&gt;
 chgrp ksu-cis-hpc share&lt;br /&gt;
 chmod g+rx share&lt;br /&gt;
 chmod o-rwx share&lt;br /&gt;
&lt;br /&gt;
This will give people in your group the ability to read files in the 'share' directory.  If you also want&lt;br /&gt;
them to be able to write or modify files in that directory then use 'chmod g+rwx' instead.&lt;br /&gt;
&lt;br /&gt;
If you want to know what groups you belong to use the line below.&lt;br /&gt;
&lt;br /&gt;
 groups&lt;br /&gt;
&lt;br /&gt;
If your group does not own any nodes, you can still request a group name and manage the participants yourself.&lt;br /&gt;
&lt;br /&gt;
===Openly sharing files on the web===&lt;br /&gt;
&lt;br /&gt;
If you create a 'public_html' directory in your home directory, then any files put there will be shared &lt;br /&gt;
openly on the web.  There is no way to restrict who has access to those files.&lt;br /&gt;
&lt;br /&gt;
 cd&lt;br /&gt;
 mkdir public_html&lt;br /&gt;
&lt;br /&gt;
Then access the data from a web browser using the URL:&lt;br /&gt;
&lt;br /&gt;
http://people.beocat.ksu.edu/~your_user_name&lt;br /&gt;
&lt;br /&gt;
This will show a list of the files you have in your public_html subdirectory.&lt;br /&gt;
&lt;br /&gt;
===Globus===&lt;br /&gt;
&lt;br /&gt;
Kyle will put some Globus stuff here&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of Slurm's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to sbatch.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  --array=n[-m[:s]]&lt;br /&gt;
     Submits a so-called Array Job, i.e. an array of identical tasks differentiated only by an index number and treated by Slurm&lt;br /&gt;
     almost like a series of jobs. The option argument to --array specifies the number of array job tasks and the index numbers that will be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SLURM_ARRAY_TASK_ID. The option&lt;br /&gt;
     arguments n and m will be available through the environment variables SLURM_ARRAY_TASK_MIN and SLURM_ARRAY_TASK_MAX.&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or a range with a step size.&lt;br /&gt;
     Hence, the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SLURM_ARRAY_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array jobs are commonly used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks follow a slightly different naming convention (which can be controlled in the same way as mentioned above).&lt;br /&gt;
 &lt;br /&gt;
     slurm-%A_%a.out&lt;br /&gt;
&lt;br /&gt;
     %A is the SLURM_ARRAY_JOB_ID, and %a is the SLURM_ARRAY_TASK_ID&lt;br /&gt;
&lt;br /&gt;
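An example of applying those same tokens to your own output file name (the job name and file pattern are illustrative):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-4&lt;br /&gt;
# One output file per task, e.g. MyArrayJob_25_3.out&lt;br /&gt;
#SBATCH --output=MyArrayJob_%A_%a.out&lt;br /&gt;
echo &amp;quot;task $SLURM_ARRAY_TASK_ID of job $SLURM_ARRAY_JOB_ID&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;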
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses, one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1 I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable, and submit each script. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=50-200:50&lt;br /&gt;
RUNSIZE=$SLURM_ARRAY_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and Slurm understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to take each line of my dataset and generate a new submit script and submit the job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     sbatch ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as sbatch has to verify each job as it is submitted. This can be done easily with an array job, as long as you know the number of lines in the dataset. That number can be obtained with 'wc -l dataset.txt'; in this case let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --array=1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SLURM_ARRAY_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses command substitution (the backticks) to have the sed command print only line number $SLURM_ARRAY_TASK_ID of the file dataset.txt.&lt;br /&gt;
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so sbatch doesn't have to verify as many.&lt;br /&gt;
&lt;br /&gt;
To give you an idea of the time saved: submitting one job takes 1-2 seconds, so submitting 5000 jobs takes 5,000-10,000 seconds, or roughly 1.5-3 hours.&lt;br /&gt;
&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'srun'. srun uses the exact same command-line arguments as sbatch, but you need to add the following arguments at the end: &amp;lt;tt&amp;gt;--pty bash&amp;lt;/tt&amp;gt;. If no node is available with your resource requirements, srun will tell you something like the following:&lt;br /&gt;
 srun --pty bash&lt;br /&gt;
 srun: Force Terminated job 217&lt;br /&gt;
 srun: error: CPU count per node can not be satisfied&lt;br /&gt;
 srun: error: Unable to allocate resources: Requested node configuration is not available&lt;br /&gt;
Note that, like sbatch, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
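&lt;br /&gt;
For example, to request an interactive shell with 4 cores and 1 GB of memory per core for one hour (the resource values here are just illustrative):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
srun --cpus-per-task=4 --mem-per-cpu=1G --time=1:00:00 --pty bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;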
&lt;br /&gt;
== Connecting to an existing job ==&lt;br /&gt;
You can connect to an existing job using &amp;lt;B&amp;gt;srun&amp;lt;/B&amp;gt; in the same way that the &amp;lt;B&amp;gt;MonitorNode&amp;lt;/B&amp;gt; command&lt;br /&gt;
allowed us to in the old cluster.  This is essentially like using ssh to get into the node where your job is running which&lt;br /&gt;
can be very useful in allowing you to look at files in /tmp/job# or in running &amp;lt;B&amp;gt;htop&amp;lt;/B&amp;gt; to view the &lt;br /&gt;
activity level for your job.&lt;br /&gt;
&lt;br /&gt;
 srun --jobid=# --pty bash                        where '#' is the job ID number&lt;br /&gt;
&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not support modifying job parameters once the job has been submitted. It can be done, but there are numerous catches, and all of the variations can be a bit problematic; it is normally easier to simply delete the job (using '''scancel ''jobid''''') and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
&lt;br /&gt;
As it is unsupported, this is an exercise left to the reader. A starting point is &amp;lt;tt&amp;gt;man scontrol&amp;lt;/tt&amp;gt;.&lt;br /&gt;
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;-p killable.q&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;sbatch&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;srun&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
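&lt;br /&gt;
A quick sketch of both forms (the script name is hypothetical):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=bash&amp;gt;&lt;br /&gt;
# On the command line:&lt;br /&gt;
sbatch -p killable.q MyScript.sh&lt;br /&gt;
&lt;br /&gt;
# Or inside the submit script (partitions can also be a comma-separated list):&lt;br /&gt;
#SBATCH --partition=killable.q&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;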
&lt;br /&gt;
''Note: This is a submit-time only request; it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;scancel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
Some users are members of projects that have contributed nodes to Beocat. If that applies to you, you will need to include your project's &amp;quot;partition&amp;quot; in your job submission to be able to use those nodes.&lt;br /&gt;
&lt;br /&gt;
To determine the partitions you have access to, run &amp;lt;tt&amp;gt;sinfo -hso '%P'&amp;lt;/tt&amp;gt;&lt;br /&gt;
That will return a list that looks something like this:&lt;br /&gt;
 killable.q&lt;br /&gt;
 batch.q&lt;br /&gt;
 some-other-partition.q&lt;br /&gt;
&lt;br /&gt;
You can then alter your &amp;lt;tt&amp;gt;#SBATCH&amp;lt;/tt&amp;gt; lines to include your new partition:&lt;br /&gt;
 #SBATCH --partition=some-other-partition.q,batch.q&lt;br /&gt;
or&lt;br /&gt;
 #SBATCH --partition=some-other-partition.q,batch.q,killable.q&lt;br /&gt;
You can include 'killable.q' if you would like; the reasons for doing so are explained [[AdvancedSlurm#Killable_jobs|here]].&lt;br /&gt;
&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The sacct tool will read Slurm's accounting database and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== sacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
sacct -j 1122334455 -l&lt;br /&gt;
# if you don't know the job id, you can look at your jobs started since some day:&lt;br /&gt;
sacct -S 2017-01-01&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
{{Scrolling table/top}}&lt;br /&gt;
{{Scrolling table/mid}}&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|218||218||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||12||00:00:00||FAILED||2:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=12,mem=1G,node=1||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.batch||218.batch||batch||||137940K||dwarf37||0||137940K||1576K||dwarf37||0||1576K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||1.36G||0||0||0||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|218.0||218.0||qqqqstat||||204212K||dwarf37||0||204212K||1420K||dwarf37||0||1420K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||12||00:00:00||FAILED||2:0||196.52M||Unknown||Unknown||Unknown||1Gn||0||0||dwarf37||65534||0||0.00M||dwarf37||0||0.00M||||||||cpu=12,mem=1G,node=1&lt;br /&gt;
{{Scrolling table/end}}&lt;br /&gt;
If you look at the columns showing Elapsed and State, you can see that they show 00:00:00 and FAILED respectively. This means that the job started and then promptly ended. This points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
{{Scrolling table/top}}&lt;br /&gt;
{{Scrolling table/mid}}&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|220||220||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:01:27||TIMEOUT||0:0||||Unknown||Unknown||Unknown||1Gn||||||||||||||||||||||||cpu=1,mem=1G,node=1||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.batch||220.batch||batch||||370716K||dwarf37||0||370716K||7060K||dwarf37||0||7060K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:28||CANCELLED||0:15||1.23G||0||0||0||1Gn||0||0.16M||dwarf37||0||0.16M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
|-&lt;br /&gt;
|220.0||220.0||sleep||||204212K||dwarf37||0||107916K||1000K||dwarf37||0||620K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:01:27||CANCELLED||0:15||1.54G||Unknown||Unknown||Unknown||1Gn||0||0.05M||dwarf37||0||0.05M||0.00M||dwarf37||0||0.00M||||||||cpu=1,mem=1G,node=1&lt;br /&gt;
{{Scrolling table/end}}&lt;br /&gt;
If you look at the column showing State, we can see some pointers to the issue. The job ran out of time (TIMEOUT) and then was killed (CANCELLED).&lt;br /&gt;
{{Scrolling table/top}}&lt;br /&gt;
{{Scrolling table/mid}}&lt;br /&gt;
!JobID!!JobIDRaw!!JobName!!Partition!!MaxVMSize!!MaxVMSizeNode!!MaxVMSizeTask!!AveVMSize!!MaxRSS!!MaxRSSNode!!MaxRSSTask!!AveRSS!!MaxPages!!MaxPagesNode!!MaxPagesTask!!AvePages!!MinCPU!!MinCPUNode!!MinCPUTask!!AveCPU!!NTasks!!AllocCPUS!!Elapsed!!State!!ExitCode!!AveCPUFreq!!ReqCPUFreqMin!!ReqCPUFreqMax!!ReqCPUFreqGov!!ReqMem!!ConsumedEnergy!!MaxDiskRead!!MaxDiskReadNode!!MaxDiskReadTask!!AveDiskRead!!MaxDiskWrite!!MaxDiskWriteNode!!MaxDiskWriteTask!!AveDiskWrite!!AllocGRES!!ReqGRES!!ReqTRES!!AllocTRES&lt;br /&gt;
|-&lt;br /&gt;
|221||221||slurm_simple.sh||batch.q||||||||||||||||||||||||||||||||||||1||00:00:00||CANCELLED by 0||0:0||||Unknown||Unknown||Unknown||1Mn||||||||||||||||||||||||cpu=1,mem=1M,node=1||cpu=1,mem=1M,node=1&lt;br /&gt;
|-&lt;br /&gt;
|221.batch||221.batch||batch||||137940K||dwarf37||0||137940K||1144K||dwarf37||0||1144K||0||dwarf37||0||0||00:00:00||dwarf37||0||00:00:00||1||1||00:00:01||CANCELLED||0:15||2.62G||0||0||0||1Mn||0||0||dwarf37||65534||0||0||dwarf37||65534||0||||||||cpu=1,mem=1M,node=1&lt;br /&gt;
{{Scrolling table/end}}&lt;br /&gt;
Looking at the State column, we see the job was &amp;quot;CANCELLED by 0&amp;quot;. The AllocTRES column shows the allocated resources: 1MB of memory was granted. Comparing that with the &amp;quot;MaxRSS&amp;quot; column, we see the job tried to use more memory than it was granted, so it was &amp;quot;CANCELLED&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=CUDA&amp;diff=234</id>
		<title>CUDA</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=CUDA&amp;diff=234"/>
		<updated>2017-10-16T19:25:21Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Changed info to deal with updates from Paladins to Dwarves&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== CUDA Overview ==&lt;br /&gt;
[[wikipedia:CUDA|CUDA]] is a feature set for programming nVidia [[wikipedia:Graphics_processing_unit|GPUs]]. We have 7 CUDA-enabled nodes. dwarf22, dwarf23, dwarf24, dwarf25, and dwarf35 each have two [https://www.nvidia.com/en-us/geforce/products/10series/geforce-gtx-1080-ti/ nVidia 1080 Ti graphics cards]. dwarf38 and dwarf39 each have a single [https://www.geforce.com/hardware/desktop-gpus/geforce-gtx-980-ti/specifications nVidia 980 Ti graphics card]. The former set of nodes is available only for &amp;quot;killable&amp;quot; jobs to those outside the research group that purchased them. The latter are available to anybody; however, you should send an email to beocat@cs.ksu.edu with a request to be added to the GPU priority group.&lt;br /&gt;
&lt;br /&gt;
Note that both of these graphics cards are consumer-grade rather than the typical GPUs used in most high-performance computing centers. For single-precision computations, these cards are comparable to the high-end cards (at a fraction of the price); however, double-precision computations are much slower.&lt;br /&gt;
&lt;br /&gt;
== Training videos ==&lt;br /&gt;
CUDA Programming Model Overview: [http://www.youtube.com/watch?v=aveYOlBSe-Y http://www.youtube.com/watch?v=aveYOlBSe-Y]&lt;br /&gt;
&amp;lt;HTML5video type=&amp;quot;youtube&amp;quot; width=&amp;quot;800&amp;quot; height=&amp;quot;480&amp;quot; autoplay=&amp;quot;false&amp;quot;&amp;gt;aveYOlBSe-Y&amp;lt;/HTML5video&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CUDA Programming Basics Part I (Host functions): [http://www.youtube.com/watch?v=79VARRFwQgY http://www.youtube.com/watch?v=79VARRFwQgY]&lt;br /&gt;
&amp;lt;HTML5video type=&amp;quot;youtube&amp;quot; width=&amp;quot;800&amp;quot; height=&amp;quot;480&amp;quot; autoplay=&amp;quot;false&amp;quot;&amp;gt;79VARRFwQgY&amp;lt;/HTML5video&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CUDA Programming Basics Part II (Device functions): [http://www.youtube.com/watch?v=G5-iI1ogDW4 http://www.youtube.com/watch?v=G5-iI1ogDW4]&lt;br /&gt;
&amp;lt;HTML5video type=&amp;quot;youtube&amp;quot; width=&amp;quot;800&amp;quot; height=&amp;quot;480&amp;quot; autoplay=&amp;quot;false&amp;quot;&amp;gt;G5-iI1ogDW4&amp;lt;/HTML5video&amp;gt;&lt;br /&gt;
== Compiling CUDA Applications ==&lt;br /&gt;
nvcc is the compiler for CUDA applications. When compiling your applications manually, keep the following in mind:&lt;br /&gt;
&lt;br /&gt;
* The CUDA development headers are located here: /opt/cuda/sdk/common/inc&lt;br /&gt;
* The CUDA architecture is: sm_30&lt;br /&gt;
* The CUDA SDK is currently not available on the headnode. (compile on the nodes with CUDA, either in your jobscript or via &amp;lt;tt&amp;gt;qrsh -l cuda=TRUE&amp;lt;/tt&amp;gt;)&lt;br /&gt;
* '''Do not run your CUDA applications on the headnode. We cannot guarantee they will run, and they will give you terrible results if they do run.'''&lt;br /&gt;
&lt;br /&gt;
Putting it all together you can compile CUDA applications as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nvcc -I  /opt/cuda/sdk/common/inc -arch sm_30 &amp;lt;source&amp;gt;.cu -o &amp;lt;output&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
== Example ==&lt;br /&gt;
=== Create your Application ===&lt;br /&gt;
Copy the following application to Beocat as vecadd.cu:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
//  Kernel definition, see also section 4.2.3 of Nvidia Cuda Programming Guide&lt;br /&gt;
__global__  void vecAdd(float* A, float* B, float* C)&lt;br /&gt;
{&lt;br /&gt;
            // threadIdx.x is a built-in variable  provided by CUDA at runtime&lt;br /&gt;
            int i = threadIdx.x;&lt;br /&gt;
       A[i]=0;&lt;br /&gt;
       B[i]=i;&lt;br /&gt;
       C[i] = A[i] + B[i];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
#include  &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#define  SIZE 10&lt;br /&gt;
int  main()&lt;br /&gt;
{&lt;br /&gt;
   int N=SIZE;&lt;br /&gt;
   float A[SIZE], B[SIZE], C[SIZE];&lt;br /&gt;
   float *devPtrA;&lt;br /&gt;
   float *devPtrB;&lt;br /&gt;
   float *devPtrC;&lt;br /&gt;
   int memsize= SIZE * sizeof(float);&lt;br /&gt;
&lt;br /&gt;
   cudaMalloc((void**)&amp;amp;devPtrA, memsize);&lt;br /&gt;
   cudaMalloc((void**)&amp;amp;devPtrB, memsize);&lt;br /&gt;
   cudaMalloc((void**)&amp;amp;devPtrC, memsize);&lt;br /&gt;
   cudaMemcpy(devPtrA, A, memsize,  cudaMemcpyHostToDevice);&lt;br /&gt;
   cudaMemcpy(devPtrB, B, memsize,  cudaMemcpyHostToDevice);&lt;br /&gt;
   // __global__ functions are called:  Func&amp;lt;&amp;lt;&amp;lt; Dg, Db, Ns  &amp;gt;&amp;gt;&amp;gt;(parameter);&lt;br /&gt;
   vecAdd&amp;lt;&amp;lt;&amp;lt;1, N&amp;gt;&amp;gt;&amp;gt;(devPtrA,  devPtrB, devPtrC);&lt;br /&gt;
   cudaMemcpy(C, devPtrC, memsize,  cudaMemcpyDeviceToHost);&lt;br /&gt;
&lt;br /&gt;
   for (int i=0; i&amp;lt;SIZE; i++)&lt;br /&gt;
        printf(&amp;quot;C[%d]=%f\n&amp;quot;,i,C[i]);&lt;br /&gt;
&lt;br /&gt;
  cudaFree(devPtrA);&lt;br /&gt;
  cudaFree(devPtrB);&lt;br /&gt;
  cudaFree(devPtrC);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== Gain Access to a CUDA-capable Node ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
qrsh -l cuda=TRUE&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== Compile Your Application ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nvcc -I /opt/cuda/sdk/common/inc -arch sm_30 vecadd.cu -o vecadd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will create a program with the name 'vecadd' (specified by the '-o' flag).&lt;br /&gt;
=== Run Your Application ===&lt;br /&gt;
Run the program as you usually would, namely&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./vecadd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't want to run the program interactively (for instance, because it is a large job), you can submit it via qsub; just be sure to add the '&amp;lt;tt&amp;gt;-l cuda=true&amp;lt;/tt&amp;gt;' directive.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=203</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=203"/>
		<updated>2016-12-23T16:31:38Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added &amp;quot;Institute for Computational Research&amp;quot; to the &amp;quot;What is&amp;quot; section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|HPC]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the Institute for Computational Research, a function of the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available at no cost to any educational researcher in the state of Kansas (and their collaborators). Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster-computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people, is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of Linux servers coordinated by the [https://arc.liv.ac.uk/trac/SGE SGE] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored at [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A comparatively small [[Hadoop]] cluster&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''beocat#beocat''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to log in.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use SGE for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SGEBasics]] page for an introduction on how to submit your first job. If you are already familiar with SGE, we also have an [[AdvancedSGE]] page covering fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started, our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This ensures your support request gets entered into our tracker and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details (e.g., job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [http://freenode.net/using_the_network.shtml freenode chat servers] in the channel #beocat. This is useful ''especially'' if you have a quick question; you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' and/or '''kylehutson''' in your message, and it should grab our attention. IRC is available from a web browser [[Special:WebChat|here]].&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access? ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing resources to Beocat can give you priority access.&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar&lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com&lt;br /&gt;
|color=711616&lt;br /&gt;
|view=AGENDA&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Tips_and_Tricks&amp;diff=202</id>
		<title>Tips and Tricks</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Tips_and_Tricks&amp;diff=202"/>
		<updated>2016-12-01T19:53:42Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Update to correct Ganglia link&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Beocat has a number of tools to make your work easier, some of which you may not know about. This is a simple list of these programs and some basic usage scenarios.&lt;br /&gt;
&lt;br /&gt;
== Submitting your job to run the fastest ==&lt;br /&gt;
=== Size your jobs to use the fastest nodes ===&lt;br /&gt;
==== Specify the proper number of cores ====&lt;br /&gt;
Neither Beocat nor any other computer or cluster can make your job run on more than one core at a time if your program isn't designed to take advantage of this. Many people think, &amp;quot;I can run this on 40 cores and it will run 40 times faster.&amp;quot; This isn't true.&lt;br /&gt;
&lt;br /&gt;
While we have many programs that are designed to take advantage of multiple cores, do not assume this is the case.&lt;br /&gt;
&lt;br /&gt;
==== Optimize your jobs for speed, not for number of cores ====&lt;br /&gt;
It seems that many people pick an arbitrarily large number of cores for their jobs; 20 seems to be a common one. However, some of our fastest nodes have 16 cores. It's quite likely that if your job will fit on an Elf (16 cores, 8 GB RAM/core, 64 GB RAM total), it will run faster with 16 cores than by specifying more cores and having it run on slower nodes.&lt;br /&gt;
&lt;br /&gt;
==== Don't request resources you don't need ====&lt;br /&gt;
The most common culprit here is people specifying that they need InfiniBand when the job runs on a single node. This constrains the scheduler such that a perfectly good node for your job may sit idle while your job is still waiting.&lt;br /&gt;
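For illustration, here is a sketch of a single-node SGE job script that requests only what it needs; the &amp;lt;tt&amp;gt;h_rt&amp;lt;/tt&amp;gt; limit and 'single' parallel environment are standard requests, but the values and the program name are hypothetical:&lt;br /&gt;

```shell
#!/bin/bash
#$ -pe single 16      # all 16 cores on one node -- no InfiniBand request needed
#$ -l h_rt=2:00:00    # only the wall-clock time the job actually needs
# run the application (hypothetical name)
./my_program
```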
&lt;br /&gt;
== Programs that make using Beocat easier ==&lt;br /&gt;
=== [[wikipedia:nmon|nmon]] ===&lt;br /&gt;
The name is short for &amp;quot;Nigel's Monitor&amp;quot;; it's a program written by Nigel Griffiths from IBM.&lt;br /&gt;
=== [http://www.ibm.com/developerworks/aix/library/au-nmon_analyser/ nmon analyser] ===&lt;br /&gt;
A tool for producing graphs and spreadsheets from output generated by nmon.&lt;br /&gt;
=== [http://hisham.hm/htop/ htop] ===&lt;br /&gt;
A prettier, easier-to-use top. Shows CPU and memory usage in an easy-to-digest format.&lt;br /&gt;
=== [http://www.gnu.org/software/screen/ screen] ===&lt;br /&gt;
A terminal multiplexer that allows you to run many terminal programs at once without mixing them up. It also allows you to disconnect and reconnect sessions. There is a good explanation of how to use screen at [http://www.mattcutts.com/blog/a-quick-tutorial-on-screen/ http://www.mattcutts.com/blog/a-quick-tutorial-on-screen/].&lt;br /&gt;
=== Ganglia ===&lt;br /&gt;
The web-based load monitoring tool for the cluster: [http://ganglia.beocat.ksu.edu http://ganglia.beocat.ksu.edu]. From there, you can see how busy Beocat is.&lt;br /&gt;
=== [http://dag.wieers.com/home-made/dstat/ dstat] ===&lt;br /&gt;
A very detailed performance analyzer.&lt;br /&gt;
&lt;br /&gt;
== Increasing file write performance ==&lt;br /&gt;
Credit for this goes to [http://moo.nac.uci.edu/~hjm/bduc/BDUC_USER_HOWTO.html#writeperfongl http://moo.nac.uci.edu/~hjm/bduc/BDUC_USER_HOWTO.html#writeperfongl]&lt;br /&gt;
&lt;br /&gt;
=== Use gzip ===&lt;br /&gt;
If you have written your own code or are using an app that writes zillions of tiny chunks of data to STDOUT, and you are storing the results on Beocat, you should consider passing the output through gzip to consolidate the writes into a continuous stream. If you don't do this, each write will be considered a separate IO event and write performance will suffer.&lt;br /&gt;
&lt;br /&gt;
If, however, STDOUT is passed through gzip, the wall-clock runtime can decrease even below the usual runtime, and you end up with an output file that is already compressed to about 1/5 the usual size.&lt;br /&gt;
&lt;br /&gt;
Here's how to do it:&lt;br /&gt;
&lt;br /&gt;
 someapp --opt1 --opt2 --input=/path/to/input_file | gzip &amp;gt; /path/to/output_file&lt;br /&gt;
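As a self-contained sketch of the same pattern (using seq as a stand-in for the real application, with an illustrative output path), you can also read the compressed results back without ever writing an uncompressed copy:&lt;br /&gt;

```shell
# a stand-in "app" that writes many tiny lines to STDOUT;
# piping through gzip consolidates them into one compressed stream
seq 1 1000 | gzip > /tmp/output.gz

# read the results back, decompressing to STDOUT instead of to disk
gzip -dc /tmp/output.gz | tail -n 1
```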
=== Use named pipes ===&lt;br /&gt;
Named pipes are special files that don't actually write to the filesystem and can be used to communicate between processes. Since these pipes live in memory rather than going directly to disk, they can be used to buffer writes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create the named pipe&lt;br /&gt;
mkfifo /path/to/MyNamedPipe&lt;br /&gt;
&lt;br /&gt;
# Write some data to it&lt;br /&gt;
MyProgram --infile=/path/to/InputData1 --outfile=/path/to/MyNamedPipe &amp;amp;&lt;br /&gt;
MyOtherProgram &amp;lt; /path/to/InputData2 &amp;gt; /path/to/MyNamedPipe&lt;br /&gt;
&lt;br /&gt;
# Extract the output&lt;br /&gt;
cat &amp;lt; /path/to/MyNamedPipe &amp;gt; $HOME/MyOutput&lt;br /&gt;
## OR, we could compress the output&lt;br /&gt;
gzip &amp;lt; /path/to/MyNamedPipe &amp;gt; $HOME/MyOutput.gz&lt;br /&gt;
&lt;br /&gt;
# Delete the named pipe like you would a file&lt;br /&gt;
rm /path/to/MyNamedPipe&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
One cautionary word: unlike normal files, named pipes cannot be used between machines, only among processes running on the same machine. So, if you're running an MPI job that will run completely on one node, you could set up a named pipe, do all your writes to that pipe, and then flush it at the end; but if you're running a multi-node MPI job and your named pipe is on a shared filesystem (like $HOME), each process will need to flush its named pipe to a regular file before the job quits.&lt;br /&gt;
=== Use one big file instead of many small ones ===&lt;br /&gt;
This may seem to be a non-issue, but it's a performance problem we've seen on Beocat many times. I love the term coined by UCI at the link above: they call many small files &amp;quot;Zillions Of Tiny files (ZOTfiles)&amp;quot;. Using files like this is an inefficient use of our shared resources. A tiny file by itself is no more inefficient than a huge one; if you have only 100 bytes to store, store them in a single file. However, the problems start compounding when there are many of them. Because of the way data is stored on disk, 10 MB stored in ZOTfiles of 100 bytes each can easily take up not 10 MB but more than 400 MB - 40 times more space. Worse, data stored in this manner makes many operations very slow: instead of looking up 1 directory entry, the OS has to look up 100,000. This means 100,000 times more disk head movement, with a concomitant decrease in performance and disk lifetime. We have had Beocat users with several million files of less than 1 kB each; just creating a directory listing with ls would take nearly half an hour. Not only is that inefficient for you, but it also degrades the performance of everybody using that filesystem, and degrades our backups as well.&lt;br /&gt;
&lt;br /&gt;
Please use large files instead of ZOTfiles any chance you can!&lt;br /&gt;
&lt;br /&gt;
As a defense against too much abuse of tiny files, there is a limit of 100,000 entries in any directory in our shared filesystem space.&lt;br /&gt;
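If your workflow already produces many tiny files, one remedy is to consolidate them into a single archive and work from that. A minimal sketch (the paths and file names are illustrative):&lt;br /&gt;

```shell
# create a handful of tiny files to stand in for real ZOTfiles
mkdir -p /tmp/zotfiles
for i in 1 2 3 4 5; do
    echo "record $i" > /tmp/zotfiles/part$i.txt
done

# pack them into one archive -- a single directory entry on the
# shared filesystem instead of thousands of separate entries
tar -czf /tmp/dataset.tar.gz -C /tmp zotfiles

# list the archive members later without touching the originals
tar -tzf /tmp/dataset.tar.gz
```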
&lt;br /&gt;
== Programming for Performance ==&lt;br /&gt;
=== BLAS ===&lt;br /&gt;
BLAS (Basic Linear Algebra Subroutines) is a standard set of linear algebra subroutines. The standard was set so that software could be written against a standardized library interface, and optimized libraries could be &amp;quot;plug-and-play.&amp;quot; There are lots of implementations of the BLAS libraries, with the most common ones being [http://software.intel.com/en-us/intel-mkl/ Intel's MKL] and [http://developer.amd.com/tools/cpu/acml/pages/default.aspx AMD's ACML].&lt;br /&gt;
&lt;br /&gt;
==== Beocat BLAS Libraries ====&lt;br /&gt;
Since BLAS is a modular standard, we have installed a few (free) BLAS libraries.&lt;br /&gt;
&lt;br /&gt;
* The BLAS reference library: An unoptimized reference library&lt;br /&gt;
* [http://developer.amd.com/tools/cpu/acml/pages/default.aspx AMD's ACML]: Optimized BLAS library for AMD systems&lt;br /&gt;
* [http://www.openblas.net OpenBLAS]: Optimized BLAS library for some AMD, and most Intel, systems&lt;br /&gt;
&lt;br /&gt;
The default BLAS library is OpenBLAS.&lt;br /&gt;
&lt;br /&gt;
==== Using a different BLAS library ====&lt;br /&gt;
If you want or need to use a different BLAS library, list the available libraries with 'ls -1 /etc/env.d/alternatives/blas' (ignore _current and _current_list):&lt;br /&gt;
&lt;br /&gt;
 $ ls -1 /etc/env.d/alternatives/blas&lt;br /&gt;
 _current&lt;br /&gt;
 _current_list&lt;br /&gt;
 acml-gfortran64&lt;br /&gt;
 acml-gfortran64-openmp&lt;br /&gt;
 acml-ifort64&lt;br /&gt;
 acml-ifort64-openmp&lt;br /&gt;
 mkl32-dynamic&lt;br /&gt;
 mkl32-dynamic-openmp&lt;br /&gt;
 mkl32-gfortran&lt;br /&gt;
 mkl32-gfortran-openmp&lt;br /&gt;
 mkl32-intel&lt;br /&gt;
 mkl32-intel-openmp&lt;br /&gt;
 mkl64-dynamic&lt;br /&gt;
 mkl64-dynamic-openmp&lt;br /&gt;
 mkl64-gfortran&lt;br /&gt;
 mkl64-gfortran-openmp&lt;br /&gt;
 mkl64-int64-dynamic&lt;br /&gt;
 mkl64-int64-dynamic-openmp&lt;br /&gt;
 mkl64-int64-gfortran&lt;br /&gt;
 mkl64-int64-gfortran-openmp&lt;br /&gt;
 mkl64-int64-intel&lt;br /&gt;
 mkl64-int64-intel-openmp&lt;br /&gt;
 mkl64-intel&lt;br /&gt;
 mkl64-intel-openmp&lt;br /&gt;
 openblas-openmp&lt;br /&gt;
 reference&lt;br /&gt;
To change your default BLAS version you need to determine which shell you are using:&lt;br /&gt;
&lt;br /&gt;
===== CSH or TCSH =====&lt;br /&gt;
If your tool simply uses pkg-config to find the right BLAS, you can just run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
setenv PKG_CONFIG_PATH /etc/env.d/alternatives/blas/openblas-openmp/usr/lib64/pkgconfig&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here, openblas-openmp is replaced with the name of your preferred BLAS. You can put that line in your job script, or in your ~/.cshrc file.&lt;br /&gt;
&lt;br /&gt;
If your tool instead needs actual library names and compiler options, then after you have run the above, you can run these commands to get the right arguments/library names for your compiler:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pkg-config --cflags blas&lt;br /&gt;
pkg-config --libs blas&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
===== SH, BASH, or ZSH =====&lt;br /&gt;
If your tool simply uses pkg-config to find the right BLAS, you can just run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PKG_CONFIG_PATH=/etc/env.d/alternatives/blas/openblas-openmp/usr/lib64/pkgconfig&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here, openblas-openmp is replaced with the name of your preferred BLAS. You can put that line in your job script, or in your ~/.bashrc or ~/.zshrc file.&lt;br /&gt;
&lt;br /&gt;
If your tool instead needs actual library names and compiler options, then after you have run the above, you can run these commands to get the right arguments/library names for your compiler:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pkg-config --cflags blas&lt;br /&gt;
pkg-config --libs blas&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== LAPACK ===&lt;br /&gt;
LAPACK (Linear Algebra PACKage) is a standard set of linear algebra subroutines. Like BLAS, these are very optimized, but LAPACK handles a different set of functions. The standard was set so that software could be written against a standardized library interface, and optimized libraries could be &amp;quot;plug-and-play.&amp;quot; There are lots of implementations of the LAPACK libraries, with the most common ones being [http://software.intel.com/en-us/intel-mkl/ Intel's MKL] and [http://developer.amd.com/tools/cpu/acml/pages/default.aspx AMD's ACML].&lt;br /&gt;
&lt;br /&gt;
==== Beocat LAPACK Libraries ====&lt;br /&gt;
Since LAPACK is a modular standard, we have installed a few (free) LAPACK libraries.&lt;br /&gt;
* [http://www.netlib.org/lapack/ The LAPACK reference library]: An unoptimized reference library&lt;br /&gt;
* [http://developer.amd.com/tools/cpu/acml/pages/default.aspx AMD's ACML]: Optimized LAPACK library for AMD systems&lt;br /&gt;
&lt;br /&gt;
The default LAPACK library is ACML.&lt;br /&gt;
&lt;br /&gt;
==== Using a different LAPACK library ====&lt;br /&gt;
If you want or need to use a different LAPACK library, list the available libraries with 'ls -1 /etc/env.d/alternatives/lapack' (ignore _current and _current_list):&lt;br /&gt;
&lt;br /&gt;
 $ ls -1 /etc/env.d/alternatives/lapack&lt;br /&gt;
 _current&lt;br /&gt;
 _current_list&lt;br /&gt;
 acml-gfortran64&lt;br /&gt;
 acml-gfortran64-openmp&lt;br /&gt;
 acml-ifort64&lt;br /&gt;
 acml-ifort64-openmp&lt;br /&gt;
 mkl32-dynamic&lt;br /&gt;
 mkl32-dynamic-openmp&lt;br /&gt;
 mkl32-gfortran&lt;br /&gt;
 mkl32-gfortran-openmp&lt;br /&gt;
 mkl32-intel&lt;br /&gt;
 mkl32-intel-openmp&lt;br /&gt;
 mkl64-dynamic&lt;br /&gt;
 mkl64-dynamic-openmp&lt;br /&gt;
 mkl64-gfortran&lt;br /&gt;
 mkl64-gfortran-openmp&lt;br /&gt;
 mkl64-int64-dynamic&lt;br /&gt;
 mkl64-int64-dynamic-openmp&lt;br /&gt;
 mkl64-int64-gfortran&lt;br /&gt;
 mkl64-int64-gfortran-openmp&lt;br /&gt;
 mkl64-int64-intel&lt;br /&gt;
 mkl64-int64-intel-openmp&lt;br /&gt;
 mkl64-intel&lt;br /&gt;
 mkl64-intel-openmp&lt;br /&gt;
 reference&lt;br /&gt;
To change your default LAPACK version you need to determine which shell you are using:&lt;br /&gt;
&lt;br /&gt;
===== CSH or TCSH =====&lt;br /&gt;
If your tool simply uses pkg-config to find the right LAPACK, you can just run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
setenv PKG_CONFIG_PATH /etc/env.d/alternatives/lapack/acml-ifort64/usr/lib64/pkgconfig&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here, acml-ifort64 is replaced with the name of your preferred LAPACK. You can put that line in your job script, or in your ~/.cshrc file.&lt;br /&gt;
&lt;br /&gt;
If your tool instead needs actual library names and compiler options, then after you have run the above, you can run these commands to get the right arguments/library names for your compiler:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pkg-config --cflags lapack&lt;br /&gt;
pkg-config --libs lapack&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
===== SH, BASH, or ZSH =====&lt;br /&gt;
If your tool simply uses pkg-config to find the right LAPACK, you can just run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export PKG_CONFIG_PATH=/etc/env.d/alternatives/lapack/acml-ifort64/usr/lib64/pkgconfig&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here, acml-ifort64 is replaced with the name of your preferred LAPACK. You can put that line in your job script, or in your ~/.bashrc or ~/.zshrc file.&lt;br /&gt;
&lt;br /&gt;
If your tool instead needs actual library names and compiler options, then after you have run the above, you can run these commands to get the right arguments/library names for your compiler:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pkg-config --cflags lapack&lt;br /&gt;
pkg-config --libs lapack&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://openmp.org/wp/ OpenMP] ===&lt;br /&gt;
OpenMP is a set of directives for C, C++, and Fortran which greatly simplifies parallelizing applications on a single node. There is a good tutorial for OpenMP at [https://computing.llnl.gov/tutorials/openMP/ https://computing.llnl.gov/tutorials/openMP/]&lt;br /&gt;
To compile an OpenMP-enabled program, you need to tell GCC that OpenMP is available; this is done like so:&lt;br /&gt;
 gcc -fopenmp myOpenMPprogram.c&lt;br /&gt;
By default OpenMP will use all available cores for its computation, which is a problem for shared resources like Beocat.&lt;br /&gt;
&lt;br /&gt;
To make use of only the cores assigned to you, first make sure you have requested the 'single' parallel environment; then, in your job script, you will need something like the following (before the application you are trying to run):&lt;br /&gt;
&lt;br /&gt;
==== bash, sh, zsh ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export OMP_NUM_THREADS=${NSLOTS}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== csh or tcsh ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
setenv OMP_NUM_THREADS ${NSLOTS}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Beocat:About&amp;diff=201</id>
		<title>Beocat:About</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Beocat:About&amp;diff=201"/>
		<updated>2016-12-01T19:52:59Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Update CIS references to CS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://support.beocat.ksu.edu Beocat] is a medium-sized compute cluster in the [http://www.cs.ksu.edu Computer Science Department] of the [http://www.engg.ksu.edu College of Engineering] at [http://www.ksu.edu Kansas State University]. The project is overseen by [[User:dan|Dr. Andresen]] and managed by [[User:mozes|Adam Tygart]] and [[User:kylehutson|Kyle Hutson]].&lt;br /&gt;
&lt;br /&gt;
We support researchers from all over Kansas, and their collaborators (wherever they may be).&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OLD_DEPRECATED_AdvancedSGE&amp;diff=200</id>
		<title>OLD DEPRECATED AdvancedSGE</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OLD_DEPRECATED_AdvancedSGE&amp;diff=200"/>
		<updated>2016-12-01T19:51:25Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Updated references&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SGEBasics]] page, we have several other requestable resources. Generally, if you don't know whether you need a particular resource, you should use the default. The full list can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;qconf -sc | awk '{ if ($5 != &amp;quot;NO&amp;quot;) { print }}'&amp;lt;/tt&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
!name&lt;br /&gt;
!shortcut&lt;br /&gt;
!type&lt;br /&gt;
!relop&lt;br /&gt;
!requestable&lt;br /&gt;
!consumable&lt;br /&gt;
!default&lt;br /&gt;
!urgency&lt;br /&gt;
|-&lt;br /&gt;
|arch&lt;br /&gt;
|a&lt;br /&gt;
|RESTRING&lt;br /&gt;
|==&lt;br /&gt;
|YES&lt;br /&gt;
|NO&lt;br /&gt;
|NONE&lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|avx&lt;br /&gt;
|avx&lt;br /&gt;
|BOOL        &lt;br /&gt;
|==     &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|FALSE    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|calendar            &lt;br /&gt;
|c          &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cpu                 &lt;br /&gt;
|cpu        &lt;br /&gt;
|DOUBLE      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cpu_flags           &lt;br /&gt;
|c_f        &lt;br /&gt;
|STRING      &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cuda                &lt;br /&gt;
|cuda       &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|JOB        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|display_win_gui     &lt;br /&gt;
|dwg        &lt;br /&gt;
|BOOL        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|exclusive           &lt;br /&gt;
|excl       &lt;br /&gt;
|BOOL        &lt;br /&gt;
|EXCL    &lt;br /&gt;
|YES         &lt;br /&gt;
|YES        &lt;br /&gt;
|0        &lt;br /&gt;
|1000&lt;br /&gt;
|-&lt;br /&gt;
|h_core              &lt;br /&gt;
|h_core     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_cpu               &lt;br /&gt;
|h_cpu      &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_data              &lt;br /&gt;
|h_data     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_fsize             &lt;br /&gt;
|h_fsize    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_rss               &lt;br /&gt;
|h_rss      &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_rt                &lt;br /&gt;
|h_rt       &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|FORCED      &lt;br /&gt;
|NO        &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_stack             &lt;br /&gt;
|h_stack    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_vmem              &lt;br /&gt;
|h_vmem     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|hostname            &lt;br /&gt;
|h          &lt;br /&gt;
|HOST        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|infiniband          &lt;br /&gt;
|ib         &lt;br /&gt;
|BOOL        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|FALSE    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_core              &lt;br /&gt;
|core       &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_socket            &lt;br /&gt;
|socket     &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_thread            &lt;br /&gt;
|thread     &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_topology          &lt;br /&gt;
|topo       &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_topology_inuse    &lt;br /&gt;
|utopo      &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_free            &lt;br /&gt;
|mf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_total           &lt;br /&gt;
|mt         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_used            &lt;br /&gt;
|mu         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0       &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|memory              &lt;br /&gt;
|mem        &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|FORCED      &lt;br /&gt;
|YES        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|num_proc            &lt;br /&gt;
|p          &lt;br /&gt;
|INT         &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|qname               &lt;br /&gt;
|q          &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_core              &lt;br /&gt;
|s_core     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_cpu               &lt;br /&gt;
|s_cpu      &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_data              &lt;br /&gt;
|s_data     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_fsize             &lt;br /&gt;
|s_fsize    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_rss               &lt;br /&gt;
|s_rss      &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_rt                &lt;br /&gt;
|s_rt       &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_stack             &lt;br /&gt;
|s_stack    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_vmem              &lt;br /&gt;
|s_vmem     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|slots               &lt;br /&gt;
|s          &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|YES        &lt;br /&gt;
|1        &lt;br /&gt;
|1000&lt;br /&gt;
|-&lt;br /&gt;
|swap_free           &lt;br /&gt;
|sf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_rate           &lt;br /&gt;
|sr         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_rsvd           &lt;br /&gt;
|srsv       &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_total          &lt;br /&gt;
|st         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_used           &lt;br /&gt;
|su         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_free        &lt;br /&gt;
|vf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_total       &lt;br /&gt;
|vt         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_used        &lt;br /&gt;
|vu         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The good news is that most of these are rarely, if ever, used. There are a couple of exceptions, though:&lt;br /&gt;
=== Infiniband ===&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. Infiniband is a high-speed host-to-host communication fabric, used in conjunction with MPI jobs (discussed below); it does absolutely no good in a 'single' parallel environment. Several times we have had jobs that could have run just fine, except that the submitter requested Infiniband and all the nodes with Infiniband were busy. In fact, some of our fastest nodes do not have Infiniband, so by requesting it when you don't need it, you may actually be slowing down your job. To request Infiniband, add &amp;lt;tt&amp;gt;-l ib=true&amp;lt;/tt&amp;gt; to your qsub command-line.&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. We have a very small number of nodes which have GPUs installed. To request one of these nodes, add &amp;lt;tt&amp;gt;-l cuda=true&amp;lt;/tt&amp;gt; to your qsub command-line.&lt;br /&gt;
=== Exclusive ===&lt;br /&gt;
Some programs just don't play nicely with others. They will attempt to use all available memory, or all available cores. The way to be a nice neighbor if your program has this problem is to request exclusive use of a node with &amp;lt;tt&amp;gt;-l excl=true&amp;lt;/tt&amp;gt;. This can also be useful for benchmarking, where you can be sure that no other jobs are interfering with yours.&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
Intranode jobs are easier to code and can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP] or Java's threads. Often your program will need to know how many cores you want it to use; many will use all available cores unless told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the qsub directive '&amp;lt;tt&amp;gt;-pe single ''n''&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $NSLOTS to tell it how many cores you've been allocated.&lt;br /&gt;
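For example, a minimal intranode submit script might look like the following sketch ('my_threaded_app' is a hypothetical placeholder for your own program):&lt;br /&gt;

```bash
#!/bin/bash
# Minimal intranode submit script sketch. The #$ lines are qsub
# directives; 'my_threaded_app' is a hypothetical placeholder.
#$ -pe single 8
#$ -l mem=2G,h_rt=1:00:00

# SGE sets $NSLOTS to the number of cores allocated to this job;
# default to 1 so the script also runs outside the scheduler.
export OMP_NUM_THREADS=${NSLOTS:-1}
echo "Running with ${OMP_NUM_THREADS} thread(s)"
# ./my_threaded_app input.dat
```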
=== Internode (MPI) jobs ===&lt;br /&gt;
&amp;quot;Talking&amp;quot; between nodes is trickier than talking between cores on the same node. The specification for doing so is called the &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;-pe single ''n''&amp;lt;/tt&amp;gt;' for your qsub request, you will use one of the following:&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Parallel Environment !! Description&lt;br /&gt;
|-&lt;br /&gt;
|mpi-fill&lt;br /&gt;
|This environment will use as many slots on each node as it can until it reaches the number of cores you have requested.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-spread&lt;br /&gt;
|This environment will spread itself out over as many nodes as possible until it reaches the number of cores you have requested.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-1&lt;br /&gt;
|This environment will allocate the slots you've requested 1 per node.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-2&lt;br /&gt;
|This environment will allocate the slots you've requested 2 per node. You must request cores as a multiple of 2&lt;br /&gt;
|-&lt;br /&gt;
|mpi-4&lt;br /&gt;
|This environment will allocate the slots you've requested 4 per node. You must request cores as a multiple of 4&lt;br /&gt;
|-&lt;br /&gt;
|mpi-8&lt;br /&gt;
|This environment will allocate the slots you've requested 8 per node. You must request cores as a multiple of 8&lt;br /&gt;
|-&lt;br /&gt;
|mpi-10&lt;br /&gt;
|This environment will allocate the slots you've requested 10 per node. You must request cores as a multiple of 10&lt;br /&gt;
|-&lt;br /&gt;
|mpi-12&lt;br /&gt;
|This environment will allocate the slots you've requested 12 per node. You must request cores as a multiple of 12&lt;br /&gt;
|-&lt;br /&gt;
|mpi-16&lt;br /&gt;
|This environment will allocate the slots you've requested 16 per node. You must request cores as a multiple of 16&lt;br /&gt;
|-&lt;br /&gt;
|mpi-20&lt;br /&gt;
|This environment will allocate the slots you've requested 20 per node. You must request cores as a multiple of 20&lt;br /&gt;
|-&lt;br /&gt;
|mpi-80&lt;br /&gt;
|This environment will allocate the slots you've requested 80 per node. You must request cores as a multiple of 80&lt;br /&gt;
|}&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-4 16&amp;lt;/tt&amp;gt; will give you 4 chunks of 4 cores apiece. They might all happen to be allocated on the same node (16 cores), on 4 different nodes (4 cores each), on 3 nodes (8 cores on one and 4 cores on the other two), or on 2 nodes (8 cores each).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-fill 40&amp;lt;/tt&amp;gt; will give you 40 cores, but will attempt to get them all on the same node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-fill 100&amp;lt;/tt&amp;gt; will give you 100 cores, and place them on as few nodes as possible. In this case it's likely you would get a full mage (80 cores) and either part of another mage (the remaining 20 cores) or one of the 20-core elves.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-spread 40&amp;lt;/tt&amp;gt; will give you 40 cores, and will attempt to place each on a separate node.&lt;br /&gt;
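Putting this together, a minimal MPI submit script might look like this sketch ('my_mpi_app' is a hypothetical program assumed to be built against Beocat's OpenMPI; this is a scheduler-directive sketch, not a definitive template):&lt;br /&gt;

```bash
#!/bin/bash
# Sketch of an MPI submit script; 'my_mpi_app' is a hypothetical program.
#$ -pe mpi-fill 40
#$ -l mem=2G,h_rt=12:00:00
#$ -cwd
#$ -j y

# Under SGE, mpirun picks up the allocated host list from the
# environment, so no -np or hostfile arguments are needed here.
mpirun ./my_mpi_app input.dat
```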
== Requesting memory for multi-core jobs ==&lt;br /&gt;
All memory requests are '''per core'''. One of the more common scenarios is that somebody needs, say, 20 cores and 400 GB of memory, so they make a request like '&amp;lt;tt&amp;gt;-pe single 20 -l mem=400G&amp;lt;/tt&amp;gt;'. This will never run, because what it really requests is 20 cores and 8000 GB of memory (20 * 400). Since we have no nodes with 8000 GB of memory, the job will never start. Instead, divide the 400 GB total memory request by the number of cores (20), so the correct command would be '&amp;lt;tt&amp;gt;-pe single 20 -l mem=20G&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
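The arithmetic can be sketched in a couple of lines of shell (numbers taken from the example above):&lt;br /&gt;

```bash
#!/bin/bash
# Worked example of the per-core memory rule: divide the total memory
# the job needs by the number of cores requested.
cores=20
total_gb=400                        # total memory the job needs, in GB
per_core_gb=$(( total_gb / cores ))
echo "-pe single ${cores} -l mem=${per_core_gb}G"
# prints: -pe single 20 -l mem=20G
```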
== Other Handy SGE Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs not related to resource requests is to have SGE email you when a job changes its status. This takes two directives to qsub: '&amp;lt;tt&amp;gt;-M ''someone@somewhere.com''&amp;lt;/tt&amp;gt;' gives the email address to which to send status updates. '&amp;lt;tt&amp;gt;-m abe&amp;lt;/tt&amp;gt;' is probably the most common directive given for ''when'' to send updates: it will send email messages when a job (a)borts, (b)egins, or (e)nds. Other possibilities are (s)uspended and (n)ever.&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-N ''JobName''&amp;lt;/tt&amp;gt;' qsub directive.&lt;br /&gt;
=== Combining Output Streams ===&lt;br /&gt;
Normally, SGE will create two files for output. One will be .e''jobnumber'' and the other .o''jobnumber''. If you want both of these to be combined into a single file, you can use the qsub directive '&amp;lt;tt&amp;gt;-j y&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, jobs run from your home directory. Many programs assume they are being run from the directory where you submitted the job. You can use the '&amp;lt;tt&amp;gt;-cwd&amp;lt;/tt&amp;gt;' directive to have your job run in the &amp;quot;current working directory&amp;quot; you were in when submitting it.&lt;br /&gt;
=== SGE Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to set up your scripts correctly. Here is a listing of environment variables that SGE makes available to you. Of course, the values of these variables will differ based on many different factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
HOSTNAME=titan1.beocat&lt;br /&gt;
SGE_TASK_STEPSIZE=undefined&lt;br /&gt;
SGE_INFOTEXT_MAX_COLUMN=5000&lt;br /&gt;
SHELL=/usr/local/bin/sh&lt;br /&gt;
NHOSTS=2&lt;br /&gt;
SGE_O_WORKDIR=/homes/mozes&lt;br /&gt;
TMPDIR=/tmp/105.1.batch.q&lt;br /&gt;
SGE_O_HOME=/homes/mozes&lt;br /&gt;
SGE_ARCH=lx24-amd64&lt;br /&gt;
SGE_CELL=default&lt;br /&gt;
RESTARTED=0&lt;br /&gt;
ARC=lx24-amd64&lt;br /&gt;
USER=mozes&lt;br /&gt;
QUEUE=batch.q&lt;br /&gt;
PVM_ARCH=LINUX64&lt;br /&gt;
SGE_TASK_ID=undefined&lt;br /&gt;
SGE_BINARY_PATH=/opt/sge/bin/lx24-amd64&lt;br /&gt;
SGE_STDERR_PATH=/homes/mozes/sge_test.sub.e105&lt;br /&gt;
SGE_STDOUT_PATH=/homes/mozes/sge_test.sub.o105&lt;br /&gt;
SGE_ACCOUNT=sge&lt;br /&gt;
SGE_RSH_COMMAND=builtin&lt;br /&gt;
JOB_SCRIPT=/opt/sge/default/spool/titan1/job_scripts/105&lt;br /&gt;
JOB_NAME=sge_test.sub&lt;br /&gt;
SGE_NOMSG=1&lt;br /&gt;
SGE_ROOT=/opt/sge&lt;br /&gt;
REQNAME=sge_test.sub&lt;br /&gt;
SGE_JOB_SPOOL_DIR=/opt/sge/default/spool/titan1/active_jobs/105.1&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
PE_HOSTFILE=/opt/sge/default/spool/titan1/active_jobs/105.1/pe_hostfile&lt;br /&gt;
SGE_CWD_PATH=/homes/mozes&lt;br /&gt;
NQUEUES=2&lt;br /&gt;
SGE_O_LOGNAME=mozes&lt;br /&gt;
SGE_O_MAIL=/var/mail/mozes&lt;br /&gt;
TMP=/tmp/105.1.batch.q&lt;br /&gt;
JOB_ID=105&lt;br /&gt;
LOGNAME=mozes&lt;br /&gt;
PE=mpi-fill&lt;br /&gt;
SGE_TASK_FIRST=undefined&lt;br /&gt;
SGE_O_HOST=loki&lt;br /&gt;
SGE_O_SHELL=/bin/bash&lt;br /&gt;
SGE_CLUSTER_NAME=beocat&lt;br /&gt;
REQUEST=sge_test.sub&lt;br /&gt;
NSLOTS=32&lt;br /&gt;
SGE_STDIN_PATH=/dev/null&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is nice to know which hosts you have access to during a PE job; you can read the file named in PE_HOSTFILE to find out. If your job has been restarted, it is nice to be able to change what happens rather than redoing all of your work; if this is the case, RESTARTED will equal 1. There are lots of useful environment variables there; I will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly-used variables we see used are $NSLOTS, $HOSTNAME, and $SGE_TASK_ID (used for array jobs, discussed below).&lt;br /&gt;
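As a sketch of how these variables might be used inside a job script (the hostfile contents below are fabricated for illustration; under SGE, $PE_HOSTFILE points to the real allocation):&lt;br /&gt;

```bash
#!/bin/bash
# Sketch: list the hosts allocated to a PE job and react to a restart.
# Outside of SGE these variables are unset, so fake a hostfile here.
if [ -z "${PE_HOSTFILE:-}" ]; then
    PE_HOSTFILE=$(mktemp)
    printf 'node1 16 batch.q@node1 UNDEFINED\nnode2 16 batch.q@node2 UNDEFINED\n' > "$PE_HOSTFILE"
fi

# The first column of each line is a hostname allocated to this job.
awk '{ print $1 }' "$PE_HOSTFILE"

# RESTARTED is 1 if the scheduler restarted this job; a script could
# resume from a checkpoint here instead of starting over.
if [ "${RESTARTED:-0}" -eq 1 ]; then
    echo "Job was restarted - resuming instead of starting over"
fi
```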
== Running from a qsub Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'qsub -l mem=2G,h_rt=10:00 -pe single 8 -N MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which records all of these for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample qsub script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of a line is ignored. However, in&lt;br /&gt;
## the case of qsub, lines beginning with #$ are commands for qsub itself, so&lt;br /&gt;
## I have taken the convention here of starting *every* line with a '#'.&lt;br /&gt;
## Delete the first '#' if you want to use that line, and then modify it to&lt;br /&gt;
## your own purposes. The only exception here is the first line, which *must*&lt;br /&gt;
## be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##$ -l mem=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime. Default is 1 hour (1:00:00)&lt;br /&gt;
##$ -l h_rt=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it. Default is &amp;quot;FALSE&amp;quot;&lt;br /&gt;
##$ -l ib=TRUE&lt;br /&gt;
&lt;br /&gt;
## CUDA directive. If you don't know what this is, you probably don't need it.&lt;br /&gt;
## Default is &amp;quot;FALSE&amp;quot;&lt;br /&gt;
##$ -l cuda=TRUE&lt;br /&gt;
&lt;br /&gt;
## Parallel environment. Syntax is '-pe Environment NumberOfCores' A list of&lt;br /&gt;
## valid environments can be found at&lt;br /&gt;
## https://support.beocat.ksu.edu/BeocatDocs/index.php/AdvancedSGE (section 2). One&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from emailing us at beocat@cs.ksu.edu to see how we can assist in&lt;br /&gt;
## getting your job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
## &amp;quot;single 1&amp;quot;&lt;br /&gt;
##$ -pe single 12&lt;br /&gt;
##$ -pe mpi-1 2&lt;br /&gt;
##$ -pe mpi-fill 20&lt;br /&gt;
##$ -pe mpi-spread 16&lt;br /&gt;
&lt;br /&gt;
## Checkpointing. Options are BLCR or dmtcp. Default is no checkpointing.&lt;br /&gt;
##$ -ckpt dmtcp&lt;br /&gt;
&lt;br /&gt;
## Use the current working directory instead of your home directory&lt;br /&gt;
##$ -cwd&lt;br /&gt;
&lt;br /&gt;
## Merge output and error text streams into a single stream&lt;br /&gt;
##$ -j y&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##$ -N MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&lt;br /&gt;
## Send email when a job is aborted (a), begins (b), and/or ends (e)&lt;br /&gt;
##$ -m abe&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
##$ -M myemail@ksu.edu&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of SGE's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to qsub.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  -t n[-m[:s]]&lt;br /&gt;
     Submits  a  so  called  Array  Job,  i.e. an array of identical tasks being differentiated only by an index number and being treated by  Grid&lt;br /&gt;
     Engine almost like a series of jobs. The option argument to -t specifies the number of array job tasks and the index  number  which  will  be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SGE_TASK_ID. The option arguments&lt;br /&gt;
     n, m and s will be available through the environment variables SGE_TASK_FIRST, SGE_TASK_LAST and  SGE_TASK_STEPSIZE.&lt;br /&gt;
 &lt;br /&gt;
     Following restrictions apply to the values n and m:&lt;br /&gt;
 &lt;br /&gt;
            1 &amp;lt;= n &amp;lt;= 1,000,000&lt;br /&gt;
            1 &amp;lt;= m &amp;lt;= 1,000,000&lt;br /&gt;
            n &amp;lt;= m&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or  a  range  with  a  step  size.&lt;br /&gt;
     Hence,  the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SGE_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array  jobs  are  commonly  used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks will be written into different files with the default location&lt;br /&gt;
 &lt;br /&gt;
     &amp;lt;jobname&amp;gt;.['e'|'o']&amp;lt;job_id&amp;gt;'.'&amp;lt;task_id&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses, one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1 I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable, and submit each script. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#$ -t 50-200:50&lt;br /&gt;
RUNSIZE=$SGE_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and SGE understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
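What the scheduler does with a '-t' range of 50 to 200 in steps of 50 can be simulated locally with a loop, one iteration per array task (a sketch; 'app1' is just echoed here rather than run):&lt;br /&gt;

```bash
#!/bin/bash
# Simulate the four task ids such an SGE array job would generate;
# each loop iteration stands in for one array task, with SGE_TASK_ID
# set the way the scheduler would set it.
for SGE_TASK_ID in $(seq 50 50 200); do
    RUNSIZE=$SGE_TASK_ID
    echo "app1 $RUNSIZE dataset.txt"   # placeholder for the real run
done
```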
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to take each line of my dataset and generate a new submit script and submit the job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     qsub ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as qsub has to verify each job as it is submitted. This can be done easily with array jobs, as long as you know the number of lines in the dataset. That number can be obtained with &amp;lt;tt&amp;gt;wc -l dataset.txt&amp;lt;/tt&amp;gt;; in this case, let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#$ -t 1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SGE_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses command substitution (via backticks) to have sed print only line number $SGE_TASK_ID of the file dataset.txt.&lt;br /&gt;
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so qsub doesn't have to verify as many.&lt;br /&gt;
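The sed trick can be tried standalone on any small file (a throwaway three-line dataset here):&lt;br /&gt;

```bash
#!/bin/bash
# Demonstrate 'sed -n "${N}p"': print only line N of a file.
dataset=$(mktemp)
printf 'alpha\nbeta\ngamma\n' > "$dataset"
SGE_TASK_ID=2                         # set by SGE in a real array job
sed -n "${SGE_TASK_ID}p" "$dataset"   # prints: beta
rm -f "$dataset"
```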
&lt;br /&gt;
To give you an idea of the time saved: submitting 1 job takes 1-2 seconds. By extension, if you are submitting 5000, that is 5,000-10,000 seconds, or roughly 1.5-3 hours.&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave the way we think they should, or need to be run with somebody sitting at the keyboard, typing in response to the output the computer is generating. Beocat has a facility for this, called 'qrsh'. qrsh takes the exact same command-line arguments as qsub. If no node is available with your resource requirements, qrsh will tell you&lt;br /&gt;
 Your &amp;quot;qrsh&amp;quot; request could not be scheduled, try again later.&lt;br /&gt;
Note that, like qsub, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
== Altering Job Requests ==&lt;br /&gt;
We generally do not recommend modifying job parameters once the job has been submitted. It can be done, but there are numerous catches, and the variations can be problematic; it is normally easier to simply delete the job and resubmit it with the right parameters. '''If your job doesn't start after modifying such parameters (after a reasonable amount of time), delete the job and resubmit it.'''&lt;br /&gt;
=== qalter ===&lt;br /&gt;
&amp;lt;tt&amp;gt;qalter&amp;lt;/tt&amp;gt; is the command used to modify parameters of a job after it has been submitted. '''Note: resource requests (memory, runtime, et al.) can only be modified on jobs that have yet to start running.'''&lt;br /&gt;
==== Changing resource requests ====&lt;br /&gt;
Syntax:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
qalter -l $all_resources $jobid&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
When modifying resource requests, you '''must''' specify all of the resources your job needs, not just the one you plan to change. If you just specify h_rt, it will drop the memory request. If you just specify memory, it will drop the h_rt. And so on. This leads to jobs failing to start.&lt;br /&gt;
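For example, suppose a job originally requested &amp;lt;tt&amp;gt;h_rt=24:00:00,memory=4G&amp;lt;/tt&amp;gt; and you want to double the runtime (the job id and values here are hypothetical); you must restate the memory request too:&lt;br /&gt;

```shell
# restate every resource, not just the one being changed
qalter -l h_rt=48:00:00,memory=4G 1122334455
```

Dropping &amp;lt;tt&amp;gt;memory=4G&amp;lt;/tt&amp;gt; from that line would silently remove the job's memory request, and the job would fail to start.&lt;br /&gt;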
==== Changing core requests ====&lt;br /&gt;
Syntax:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
qalter -pe $pe_name $number_of_cores $jobid&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
If you request more cores than are available in the parallel environment that you need, the job may fail to start.&lt;br /&gt;
: e.g. requesting 400 cores in the single environment will fail because we have no machines with 400 cores.&lt;br /&gt;
==== Determining why a job is not running ====&lt;br /&gt;
Syntax:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
qalter -w v $jobid&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will output the scheduler's reasoning as to why the job has not started. Note that lines like:&lt;br /&gt;
 Job 1122334455 cannot run in PE &amp;quot;single&amp;quot; because it only offers 0 slots&lt;br /&gt;
are usually red herrings; they typically just mean that the scheduler cannot meet the resource requests for that job at this moment in time.&lt;br /&gt;
&lt;br /&gt;
Sometimes you will see output like this:&lt;br /&gt;
 Job 1122334455 does not request 'forced' resource &amp;quot;memory&amp;quot; of queue instance batch.q@elf73.beocat&lt;br /&gt;
In this case the user performed a qalter and forgot to specify the memory request. The job will never run in this state.&lt;br /&gt;
&lt;br /&gt;
Other times it will have lots of lines like this:&lt;br /&gt;
 verification: found possible assignment with 1 slots&lt;br /&gt;
This indicates that the job should be scheduled shortly.&lt;br /&gt;
== Killable jobs ==&lt;br /&gt;
There are a growing number of machines within Beocat that are owned by a particular person or group. Normally jobs from users that aren't in the group designated by the owner of these machines cannot use them. This is because we have guaranteed that the nodes will be accessible and available to the owner at any given time. We will allow others to use these nodes if they designate their job as &amp;quot;killable.&amp;quot; If your job is designated as killable, your job will be able to use these nodes, but can (and will) be killed off at any point in time to make way for the designated owner's jobs. Jobs that are marked killable will be re-queued and may restart on another node.&lt;br /&gt;
&lt;br /&gt;
The way you would designate your job as killable is to add &amp;lt;tt&amp;gt;-l killable&amp;lt;/tt&amp;gt; to the '''&amp;lt;tt&amp;gt;qsub&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;qrsh&amp;lt;/tt&amp;gt;''' arguments. This could be either on the command-line or in your script file.&lt;br /&gt;
&lt;br /&gt;
''Note: This is a submit-time only request, it cannot be added by a normal user after the job has been submitted.'' If you would like jobs modified to be '''killable''' after the jobs have been submitted (and it is too much work to &amp;lt;tt&amp;gt;qdel&amp;lt;/tt&amp;gt; the jobs and re-submit), send an e-mail to the administrators detailing the job ids and what you would like done.&lt;br /&gt;
&lt;br /&gt;
== Scheduling Priority ==&lt;br /&gt;
The scheduler uses a complex formula to determine the order in which jobs get scheduled.  In general, jobs run in the order they are submitted to the queue, with the following exceptions.  Jobs for users in a group that owns nodes will immediately get scheduled on those nodes, even if that means bumping existing jobs off.  Users in groups that have contributed funds to Beocat may have higher scheduling priority.  You can check the base scheduling priority of each group using &amp;lt;tt&amp;gt;qconf -sst&amp;lt;/tt&amp;gt;.  If you do not have a group, your jobs are scheduled using BEODEFAULT.  The higher the priority, the faster your job will be moved to the front of the queue.  A fair-share scheduling algorithm adjusts this scheduling priority down as users in a group submit more jobs.  &lt;br /&gt;
&lt;br /&gt;
Since all users not in a group having higher priority get put into BEODEFAULT, the priority is always very low and each job gets scheduled in the order it was submitted.  Groups with a higher priority may jump ahead of the BEODEFAULT jobs, but if these groups are submitting lots of jobs their priority will become low as well.  Groups with the highest priority that are submitting the fewest jobs may see those jobs moved to the front of the queue quickly.&lt;br /&gt;
&lt;br /&gt;
When processing cores become available, the scheduler looks at the head of the queue to find jobs that will fit within the resources available.  Shorter jobs of 12 hours or less get marked as killable and will be run on nodes owned by other groups.  These jobs will jump past longer jobs when resources become available on owned nodes.  Many jobs in the queue may require more memory than is available on some nodes, so smaller memory jobs will be scheduled ahead of larger memory jobs on hosts with more limited memory.  &amp;lt;tt&amp;gt;kstat -q&amp;lt;/tt&amp;gt; will show you the order in the queue and allow you to see jobs marked as &amp;quot;killable&amp;quot; and those that require large memory.&lt;br /&gt;
&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The qacct tool will read SGE's accounting file and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== qacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
qacct -j 1122334455&lt;br /&gt;
# if you don't know the job id, you can look at your jobs over some number of days, in this case the past 14 days:&lt;br /&gt;
qacct -o $USER -d 14 -j&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
 &amp;lt;tt&amp;gt;qname        batch.q             &lt;br /&gt;
 hostname     mage07.beocat       &lt;br /&gt;
 group        some_user_users        &lt;br /&gt;
 owner        some_user              &lt;br /&gt;
 project      BEODEFAULT          &lt;br /&gt;
 department   defaultdepartment   &lt;br /&gt;
 jobname      my_job_script.sh  &lt;br /&gt;
 jobnumber    1122334455          &lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 exit_status  1                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;ru_wallclock 1s&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;ru_utime     0.030s&lt;br /&gt;
 ru_stime     0.030s&lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 arid         undefined&lt;br /&gt;
 category     -u some_user -q batch.q,long.q -l h_rt=604800,mem_free=1024.0M,memory=2G&amp;lt;/tt&amp;gt;&lt;br /&gt;
If you look at the line showing ru_wallclock, you can see that it shows 1s. This means that the job started and then promptly ended, which points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
 &amp;lt;tt&amp;gt;qname        batch.q             &lt;br /&gt;
 hostname     scout59.beocat      &lt;br /&gt;
 group        some_user_users     &lt;br /&gt;
 owner        some_user           &lt;br /&gt;
 project      BEODEFAULT          &lt;br /&gt;
 department   defaultdepartment   &lt;br /&gt;
 jobname      my_job_script.sh           &lt;br /&gt;
 jobnumber    1122334455            &lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...            &lt;br /&gt;
 slots        1                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;failed       37  : qmaster enforced h_rt, h_cpu, or h_vmem limit&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;exit_status  0                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;ru_wallclock 21600s&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;ru_utime     0.130s&lt;br /&gt;
 ru_stime     0.020s&lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 arid         undefined&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;category     -u some_user -q batch.q,long.q -l h_rt=21600,mem_free=512.0M,memory=1G&amp;lt;/tt&amp;gt;&lt;br /&gt;
If you look at the lines showing failed, ru_wallclock, and category, we can see some pointers to the issue.&lt;br /&gt;
The job didn't finish because the scheduler (qmaster) enforced some limit. If you look at the category line, the only limit requested was h_rt, so it was a runtime (wallclock) limit.&lt;br /&gt;
Comparing ru_wallclock with the h_rt request, we can see that the job ran until the h_rt time was hit, at which point the scheduler enforced the limit and killed the job. You will need to resubmit the job and ask for more time next time.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=199</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=199"/>
		<updated>2016-12-01T19:49:05Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Remove section on hostkey failed - no longer relevant&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | headnode.beocat.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html Putty]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /bulk || Shared || 2.1PB shared with /homes and /scratch || cephfs || Slower than /homes; very old files are culled automatically&lt;br /&gt;
|-&lt;br /&gt;
| /homes || Shared || 2.1PB shared with /bulk and /scratch || cephfs || Good enough for most jobs; limited to 1TB per home directory&lt;br /&gt;
|-&lt;br /&gt;
| /scratch || Shared || 2.1PB shared with /bulk and /homes || cephfs || Fast shared tmp space; files not used in 30 days are automatically culled&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || ext4 || Good for I/O intensive jobs&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory&lt;br /&gt;
is your home directory, and it will be fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SGE_JOBID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable&amp;quot; or &amp;quot;nokillable&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (-l killable) jobs are jobs that can be scheduled on these &amp;quot;owned&amp;quot; machines by users outside the owning group. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of said machine comes along and submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it makes use of a checkpointing algorithm it may complete even faster. The trade-off is that some applications need a significant amount of runtime and cannot resume from a partial output, meaning the job may be restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=12:00:00). Some users still find this a hindrance, so we created another flag to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;nokillable&amp;quot; resource ===&lt;br /&gt;
Nokillable (-l nokillable) simply tells the job submission verifier (JSV) not to automatically mark the job killable. If you mark a job both killable and nokillable, killable wins.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes it can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking it killable, as the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state so it can resume a previous session), marking it killable could cause it to take longer to complete.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts are supposed to start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. We have had problems with people submitting jobs with invalid #! lines; when this happens the job fails and we have to clean it up manually, so we enforce this rule. The warning message is there to inform you that the job script should contain such a line, in most cases #!/bin/tcsh or #!/bin/bash, to indicate what program should be used to run the script. When the line is missing from a script, your default shell is used to execute it (in your case /usr/local/bin/tcsh). This works in most cases, but may not be what you want.&lt;br /&gt;
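A minimal script that passes this check might look like the following (the resource values are just examples):&lt;br /&gt;

```shell
#!/bin/bash
#$ -l h_rt=0:10:00,mem=1G
# the first line tells the system to run this script with bash;
# lines starting with "#$" are read by qsub and ignored by bash
msg="job running in $PWD"
echo "$msg"
```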
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says you need a #!/bin/bash or similar line in your job script. In this case the #! line exists, but it doesn't point to an executable file, so the script will not be able to run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SGEBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run qsub for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;-l h_rt=10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
&lt;br /&gt;
== Help my error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In fact, even the administrators cannot change the run-time requirement of a particular job. In extreme circumstances, and depending on the job requirements, we '''may''' be able to intervene manually. This process prevents other users from using the node(s) you are currently on, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Perl doesn't like being called straight from the scheduler. However, there is a fairly easy workaround: create a shell wrapper script that calls perl with your program.&lt;br /&gt;
&lt;br /&gt;
For instance, I can create a script called runperl.sh that looks like this:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 #$ -l h_rt=1:00:00,mem=1G&lt;br /&gt;
 /usr/bin/perl /path/to/my/perl_program.pl&lt;br /&gt;
&lt;br /&gt;
Make this wrapper program executable:&lt;br /&gt;
 chmod 755 runperl.sh&lt;br /&gt;
&lt;br /&gt;
Then submit it with&lt;br /&gt;
 qsub runperl.sh&lt;br /&gt;
&lt;br /&gt;
Of course, the name of this script isn't important, as long as you change the corresponding chmod and qsub commands.&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would rather not see this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;-l infiniband=TRUE&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens to my data when I leave K-State? ==&lt;br /&gt;
First of all, although we use eID credentials, we are not tied in with K-State's central IT policies which apply to employees or students leaving the university. As long as you keep your eID password current, you still have access to Beocat. Once we deem your data to be &amp;quot;stale&amp;quot;, we will archive your data and disable your account. We have no written policy on when we do this, because we only do so as necessity dictates, but generally speaking, data modified within the last two years will not be marked as stale. If your account is disabled for this reason, you will have to apply for a new account and have your data un-archived.&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area. If you already have a project you can do the following:&lt;br /&gt;
&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change the group to the name assigned by the Beocat admins&lt;br /&gt;
** &amp;lt;tt&amp;gt;chgrp -R $group_name $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the directory writeable and sticky for the group&lt;br /&gt;
** &amp;lt;tt&amp;gt;chmod g+ws $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Change your umask to 002 (your file transfer utility probably has a setting for this as well). This step needs to be done by all group members.&lt;br /&gt;
** &amp;lt;tt&amp;gt;umask 002&amp;lt;/tt&amp;gt; needs to go above &amp;lt;tt&amp;gt;&amp;lt;nowiki&amp;gt;if [[ $- != *i* ]] ; then&amp;lt;/nowiki&amp;gt;&amp;lt;/tt&amp;gt; in your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file&lt;br /&gt;
&lt;br /&gt;
* Finally, log out and log back in&lt;br /&gt;
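Putting those steps together (the directory name here is made up, and the sketch uses your own primary group so it can be tried anywhere; on Beocat you would substitute the group name assigned by the admins):&lt;br /&gt;

```shell
#!/bin/bash
# create a directory shared by a group
directory=shared_project
group_name=$(id -gn)   # stand-in for the group assigned by the Beocat admins

mkdir -p "$directory"
chgrp "$group_name" "$directory"
chmod g+ws "$directory"   # group-writable, setgid: new files inherit the group

ls -ld "$directory"
```

The &amp;lt;tt&amp;gt;s&amp;lt;/tt&amp;gt; shown in the group-execute position of the &amp;lt;tt&amp;gt;ls -ld&amp;lt;/tt&amp;gt; output confirms the setgid bit is set, so files created inside the directory keep the shared group.&lt;br /&gt;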
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple enough to use. For example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man qsub&amp;lt;/code&amp;gt;' to bring up the manual for qsub.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=198</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=198"/>
		<updated>2016-12-01T19:45:09Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Change CIS references to CS&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|HPC]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the [http://www.cs.ksu.edu/ Computer Science] department. Beocat is available to any educational researcher in the state of Kansas (and their collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of Linux servers coordinated by the [https://arc.liv.ac.uk/trac/SGE SGE] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.ksu.edu/ http://ganglia.beocat.ksu.edu/].&lt;br /&gt;
* A comparatively small [[Hadoop]] cluster&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.ksu.edu/ https://account.beocat.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''beocat#beocat''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to headnode.beocat.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use SGE for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SGEBasics]] page for an introduction to submitting your first job. If you are already familiar with SGE, we also have an [[AdvancedSGE]] page covering fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cs.ksu.edu beocat@cs.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This will ensure your support request gets entered into our tracker, and will get your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details to your problem (i.e. job ids, commands run, working directory, program versions,.. etc). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [http://freenode.net/using_the_network.shtml freenode chat servers] in the channel #beocat. This is ''especially'' useful if you have a quick question; you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' and/or '''kylehutson''' in your message to grab our attention. IRC is also available from a web browser [[Special:WebChat|here.]]&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing resources will give you priority access to Beocat.&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]]&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]]&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar&lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com&lt;br /&gt;
|color=711616&lt;br /&gt;
|view=AGENDA&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=197</id>
		<title>LinuxBasics</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=LinuxBasics&amp;diff=197"/>
		<updated>2016-12-01T19:42:09Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Change beocat.cis.ksu.edu to headnode.beocat.ksu.edu&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Disclaimer:''' This is a ''very'' large topic, and much too broad to be covered on a single support page. There are many other sites (yes, entire sites) which cover the topic in more detail. We'll link to some of them below. This page is meant to be just the essentials.&lt;br /&gt;
&lt;br /&gt;
== Logging in for the first time ==&lt;br /&gt;
To login to Beocat, you first need an &amp;quot;SSH Client&amp;quot;. [[wikipedia:Secure_Shell|SSH]] (short for &amp;quot;secure shell&amp;quot;) is a protocol that allows secure communication between two computers. We recommend the following.&lt;br /&gt;
* Windows&lt;br /&gt;
** [http://www.chiark.greenend.org.uk/~sgtatham/putty/ PuTTY] is by far the most common SSH client, both for Beocat and in the world.&lt;br /&gt;
** [http://mobaxterm.mobatek.net/ MobaXterm] is a fairly new client with some nice features, such as being able to SCP/SFTP (see below), and running X (which isn't terribly useful on Beocat, but might be if you connect to other Linux hosts).&lt;br /&gt;
** [http://www.cygwin.com/ Cygwin] is for those that would rather be running Linux but are stuck on Windows. It's purely a text interface.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** OS-X has a built-in SSH application called &amp;quot;Terminal&amp;quot;. It's not great, but it will work for most Beocat users.&lt;br /&gt;
** [http://www.iterm2.com/#/section/home iTerm2] is the terminal application we prefer.&lt;br /&gt;
* Others&lt;br /&gt;
** There are [[wikipedia:Comparison_of_SSH_clients|many SSH clients]] for many different platforms available. While we don't have experience with many of these, any should be sufficient for access to Beocat.&lt;br /&gt;
&lt;br /&gt;
You'll need to connect your client (via the SSH protocol, if your client allows multiple protocols) to headnode.beocat.ksu.edu.&lt;br /&gt;
&lt;br /&gt;
For command-line tools, the command to connect is&lt;br /&gt;
 ssh ''username''@headnode.beocat.ksu.edu&lt;br /&gt;
&lt;br /&gt;
Your username is your [http://eid.ksu.edu K-State eID] name and the password is your eID password.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' When you type your password, nothing shows up on the screen, not even asterisks.&lt;br /&gt;
&lt;br /&gt;
You'll know you are successfully logged in when you see a prompt that says&lt;br /&gt;
 (''machinename'':~) ''username''%&lt;br /&gt;
where ''machinename'' is the name of the machine you've logged into (currently either 'athena' or 'minerva') and ''username'' is your eID username.&lt;br /&gt;
&lt;br /&gt;
== Transferring files (SCP or SFTP) ==&lt;br /&gt;
Usually, one of the first things people want to do is to transfer files into or out of Beocat. To do so, you need to use [[wikipedia:Secure_copy|SCP]] (secure copy) or [[wikipedia:SSH_File_Transfer_Protocol|SFTP]] (SSH FTP or Secure FTP). Again, there are multiple programs that do this.&lt;br /&gt;
* Windows&lt;br /&gt;
** Putty (see above) has PSCP and PSFTP programs (both are included if you run the installer). It is a command-line interface (CLI) rather than a graphical user interface (GUI).&lt;br /&gt;
** MobaXterm (see above) has a built-in GUI SFTP client that automatically changes the directories as you change them in your SSH session.&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] (client) has an easy-to-use GUI. Be sure to use 'SFTP' mode rather than 'FTP' mode.&lt;br /&gt;
** [http://winscp.net/eng/index.php WinSCP] is another easy-to-use GUI.&lt;br /&gt;
** Cygwin (see above) has CLI scp and sftp programs.&lt;br /&gt;
* Macintosh&lt;br /&gt;
** [https://filezilla-project.org/ FileZilla] is also available for OS-X.&lt;br /&gt;
** Within terminal or iTerm, you can use the 'scp' or 'sftp' programs.&lt;br /&gt;
* Linux&lt;br /&gt;
** FileZilla also has a GUI Linux version, in addition to the CLI scp and sftp tools.&lt;br /&gt;
&lt;br /&gt;
=== Using a Command-Line Interface (CLI) ===&lt;br /&gt;
You can safely ignore this section if you're using a graphical interface (GUI). We highly recommend using a GUI when first learning how to use Beocat.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;First test case&amp;lt;/u&amp;gt;: transfer a file called myfile.txt in your current folder to your home directory on Beocat. For these examples, I use bold text to show what you type and plain text to show Beocat's response&lt;br /&gt;
&lt;br /&gt;
Using SCP:&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Note the colon at the end of the 'scp' line.&lt;br /&gt;
&lt;br /&gt;
Using SFTP&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/kylehutson/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
SFTP is interactive, so this is a two-step process. First, you connect to Beocat, then you transfer the file. As long as the system gives the &amp;lt;code&amp;gt;sftp&amp;gt; &amp;lt;/code&amp;gt; prompt, you are in the sftp program, and you will remain there until you type 'exit'.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Second test case:&amp;lt;/u&amp;gt; transfer a file called myfile.txt in your current folder to a directory named 'mydirectory' under your home directory on Beocat.&lt;br /&gt;
&lt;br /&gt;
Here we run into one of the problems with scp: there is no easy way to create 'mydirectory' if it doesn't already exist. In that case, you must log in via ssh (as seen above) and create the directory using the 'mkdir' command (see Basic Linux Commands below).&lt;br /&gt;
&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 &lt;br /&gt;
An alternative version: if the colon is immediately followed by a slash, the path is taken from the root of the filesystem rather than relative to your home directory. So, given that your home directory on Beocat is /homes/''username'', we could instead type&lt;br /&gt;
 '''scp myfile.txt ''username''@headnode.beocat.ksu.edu:/homes/''username''/mydirectory'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''mkdir mydirectory'''&lt;br /&gt;
 [Note, if this directory already exists, you will get the response &amp;quot;Couldn't create directory: Failure&amp;quot;]&lt;br /&gt;
 sftp&amp;gt; '''cd mydirectory'''&lt;br /&gt;
 sftp&amp;gt; '''put myfile.txt'''&lt;br /&gt;
 Uploading myfile.txt to /homes/''username''/mydirectory/myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''quit'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Third test case:&amp;lt;/u&amp;gt; copy myfile.txt from your home directory on Beocat to your current folder.&lt;br /&gt;
&lt;br /&gt;
Using scp:&lt;br /&gt;
 '''scp ''username''@headnode.beocat.ksu.edu:myfile.txt .'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
&lt;br /&gt;
Using SFTP:&lt;br /&gt;
 '''sftp ''username''@headnode.beocat.ksu.edu'''&lt;br /&gt;
 Password: '''(type your password here, it will not show any response on the screen)'''&lt;br /&gt;
 Connected to headnode.beocat.ksu.edu.&lt;br /&gt;
 sftp&amp;gt; '''get myfile.txt'''&lt;br /&gt;
 Fetching /homes/''username''/myfile.txt to myfile.txt&lt;br /&gt;
 myfile.txt                                                                            100%    0     0.0KB/s   00:00&lt;br /&gt;
 sftp&amp;gt; '''exit'''&lt;br /&gt;
&lt;br /&gt;
== Basic Linux Commands ==&lt;br /&gt;
Again, this guide is very limited, covering mostly directory navigation and basic file commands. [http://www.ee.surrey.ac.uk/Teaching/Unix/ Here] is a pretty decent tutorial if you want to dig deeper. If you want more, entire books have been written on the subject.&lt;br /&gt;
&lt;br /&gt;
=== The Lingo ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!''Term''&lt;br /&gt;
!''Definition''&lt;br /&gt;
|-&lt;br /&gt;
|Directory&lt;br /&gt;
|A &amp;quot;Folder&amp;quot; in Windows or OS-X terms. A location where files or other directories are stored. The current directory is sometimes represented as `.` and the parent directory can be referenced as `..`&lt;br /&gt;
|-&lt;br /&gt;
|Shell&lt;br /&gt;
|The interface or environment under which you can run commands. There is a section below on shells&lt;br /&gt;
|-&lt;br /&gt;
|SSH&lt;br /&gt;
|Secure Shell. A protocol that encrypts data and can give access to another system, usually by a username and password&lt;br /&gt;
|-&lt;br /&gt;
|SCP&lt;br /&gt;
|Secure Copy. Copying to or from a remote system using part of SSH&lt;br /&gt;
|-&lt;br /&gt;
|path&lt;br /&gt;
|The list of directories which are searched when you type the name of a program. There is a section below on this&lt;br /&gt;
|-&lt;br /&gt;
|ownership&lt;br /&gt;
|Every file and directory has a user and a group attached to it, called its owners. These affect permissions.&lt;br /&gt;
|-&lt;br /&gt;
|permissions&lt;br /&gt;
|The ability to read, write, and/or execute a file. Permissions are based on ownership&lt;br /&gt;
|-&lt;br /&gt;
|switches&lt;br /&gt;
|Modifiers to a command-line program, usually in the form of -(letter) or --(word). Several examples are given below, such as the '-a' on the 'ls' command&lt;br /&gt;
|-&lt;br /&gt;
|pipes and redirects&lt;br /&gt;
|Changes the input (often called 'stdin') and/or output (often called stdout) to a program or a file&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Linux Command Line Cheat Sheet ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+File System Navigation&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|pwd&lt;br /&gt;
|&amp;quot;Print working directory&amp;quot;, Where am I now?&lt;br /&gt;
|&amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;/homes/mozes&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls&lt;br /&gt;
|Lists files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -lh&lt;br /&gt;
|Lists files and folders with perms size and ownership&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -lh ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;-rw-r--r--  1 mozes    mozes_users   1    Jul 13  2011 NewFile&lt;br /&gt;
drwxr-xr-x  9 mozes    mozes_users   9.0K Apr 12  2010 NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|ls -a&lt;br /&gt;
|Lists all files and folders&lt;br /&gt;
|&amp;lt;code&amp;gt;ls -a ~/&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;. .. .bashrc .bash_profile .tcshrc NewFile NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cd&lt;br /&gt;
|Changes directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ..&lt;br /&gt;
|Changes to parent directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ..&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd -&lt;br /&gt;
|Changes to the previous directory you were in&lt;br /&gt;
|&amp;lt;code&amp;gt;cd -&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cd ~&lt;br /&gt;
|Changes to your home directory&lt;br /&gt;
|&amp;lt;code&amp;gt;cd ~&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Working with files&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
!''Example Output''&lt;br /&gt;
|-&lt;br /&gt;
|file&lt;br /&gt;
|Identifies the type of object a file is&lt;br /&gt;
|&amp;lt;code&amp;gt;file NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;NewFile: a /usr/bin/python script, ASCII text executable&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cat&lt;br /&gt;
|Prints the contents of one or more files&lt;br /&gt;
|&amp;lt;code&amp;gt;cat NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;This is line one&lt;br /&gt;
This is line two&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp&lt;br /&gt;
|copy a file&lt;br /&gt;
|&amp;lt;code&amp;gt;cp OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|cp -i&lt;br /&gt;
|copy a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|cp -r&lt;br /&gt;
|copy a directory, including contents&lt;br /&gt;
|&amp;lt;code&amp;gt;cp -r OldFolder NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv&lt;br /&gt;
|move, or rename, a file&lt;br /&gt;
|&amp;lt;code&amp;gt;mv OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mv -i&lt;br /&gt;
|move, or rename, a file, ask to overwrite&lt;br /&gt;
|&amp;lt;code&amp;gt;mv -i OldFile NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;overwrite NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm&lt;br /&gt;
|remove a file&lt;br /&gt;
|&amp;lt;code&amp;gt;rm NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rm -i&lt;br /&gt;
|remove a file, ask to be sure (useful with -r)&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -i NewFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;remove NewFile? (y/n [n])&amp;lt;/code&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|rm -r&lt;br /&gt;
|remove a directory and its contents&lt;br /&gt;
|&amp;lt;code&amp;gt;rm -r NewFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|mkdir&lt;br /&gt;
|creates a directory&lt;br /&gt;
|&amp;lt;code&amp;gt;mkdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|rmdir&lt;br /&gt;
|removes an empty directory&lt;br /&gt;
|&amp;lt;code&amp;gt;rmdir TempFolder&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|touch&lt;br /&gt;
|creates an empty file&lt;br /&gt;
|&amp;lt;code&amp;gt;touch TempFile&amp;lt;/code&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
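The file commands above can be combined into a short practice session. The following is a sketch assuming a bash shell; the /tmp/cmddemo directory and all file names are arbitrary examples, not anything Beocat-specific.&lt;br /&gt;

```shell
# Practice session in a scratch directory (all names are arbitrary)
mkdir -p /tmp/cmddemo
cd /tmp/cmddemo
touch NewFile               # create an empty file
cp NewFile CopyOfFile       # copy it
mv CopyOfFile RenamedFile   # rename the copy
rm RenamedFile              # remove the renamed copy
ls                          # only NewFile remains
```

Afterwards, 'rm -r /tmp/cmddemo' cleans up the scratch directory.&lt;br /&gt;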
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|+Finding files and directories with [http://linux.die.net/man/1/find find]&lt;br /&gt;
|-&lt;br /&gt;
!''Command''&lt;br /&gt;
!''What it does''&lt;br /&gt;
!''Example Usage''&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt;&lt;br /&gt;
| finds all files and folders within &amp;lt;directory&amp;gt;&lt;br /&gt;
| find ~/&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '&amp;lt;filename&amp;gt;'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that match &amp;lt;filename&amp;gt;&lt;br /&gt;
| find ~/ -iname 'hello.qsub'&lt;br /&gt;
|-&lt;br /&gt;
| find &amp;lt;directory&amp;gt; -iname '*&amp;lt;partialmatch&amp;gt;*'&lt;br /&gt;
| finds all files and directories within &amp;lt;directory&amp;gt; that partially match &amp;lt;partialmatch&amp;gt;&lt;br /&gt;
| find ~/ -iname '*.qsub*'&lt;br /&gt;
|}&lt;br /&gt;
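To try 'find' safely, you can build a small throwaway directory tree first. This is a sketch assuming a bash shell; the /tmp/finddemo location and file names are made up.&lt;br /&gt;

```shell
# Build a tiny tree, then search it by (case-insensitive) pattern
mkdir -p /tmp/finddemo/sub
touch /tmp/finddemo/hello.qsub /tmp/finddemo/sub/notes.txt
find /tmp/finddemo -iname '*.qsub'   # matches only hello.qsub
```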
&lt;br /&gt;
Other useful commands include &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt;. &amp;lt;code&amp;gt;man&amp;lt;/code&amp;gt; followed by one of the command names above will give you the manual page for that command, full of many other useful options. &amp;lt;code&amp;gt;htop&amp;lt;/code&amp;gt; will give you an overview of the processes currently running on the host you are connected to. &amp;lt;code&amp;gt;less&amp;lt;/code&amp;gt; allows you to page through files and see their contents using &amp;lt;PgUp&amp;gt; and &amp;lt;PgDn&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Editing Text Files ===&lt;br /&gt;
If you're new to Linux, the editor you will probably want to use is 'nano'. It works much the same as 'Notepad' in Windows or 'TextEdit' on OS-X. Note that you cannot use your mouse to change position within the document as you can on your local computer; you must use the arrow keys instead.&lt;br /&gt;
&lt;br /&gt;
So, if I wanted to edit my .bashrc (as shown below), and I was already in my home directory (see above), I would type&lt;br /&gt;
 nano .bashrc&lt;br /&gt;
&lt;br /&gt;
While in nano, there is a list of actions you can take at the bottom of the screen. &amp;lt;Ctrl&amp;gt; is represented by a caret (^), so to exit (labeled ^X at the bottom of the screen), I would type &amp;lt;ctrl&amp;gt;-x. You will then be asked whether you want to save and exit (Y), lose changes and exit (N), or cancel and go back to editing (&amp;lt;ctrl&amp;gt;-c).&lt;br /&gt;
&lt;br /&gt;
If you do a significant amount of text editing in Linux, you'll probably want to switch to a more powerful editor, such as vim. The usage of vim is beyond the scope of this document. It is not at all intuitive to the beginning user, but with a little practice it becomes a much faster way of editing text files. If you're interested in using vim, [http://www.openvim.com/tutorial.html there is a nice tutorial here].&lt;br /&gt;
&lt;br /&gt;
=== Shells ===&lt;br /&gt;
==== What is a Shell? ====&lt;br /&gt;
In this case, I don't believe I can do a better job explaining shells than [[wikipedia:Shell_(computing)|this]].&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
Elsewhere at Kansas State University, the default Shell is set to tcsh. tcsh stands for &amp;quot;TENEX C SHell.&amp;quot; It is considered a replacement for csh and uses many of the same features. If you have experience with either csh or tcsh you'll probably feel right at home. This was the default shell until July of 2013. If you had an account before then, it is probably still tcsh.&lt;br /&gt;
&lt;br /&gt;
But what if you don't want or don't like tcsh? We have other shells available on Beocat as well.&lt;br /&gt;
==== bash ====&lt;br /&gt;
[http://www.gnu.org/software/bash/ Bash] seems to be the de facto standard shell in most Linux installs today. Bash is common and probably what most of you are used to. As of July 2013, bash is our new default shell, and all new users will be set to bash initially.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;bash configuration files:&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This section gets into some minutiae with the way our job scheduler interacts with bash. If you're trying to solve a problem, read on, otherwise you can probably skip this section.&lt;br /&gt;
&lt;br /&gt;
Bash has three user-configurable configuration files: &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;~/.bash_logout&amp;lt;/code&amp;gt;. We'll look at the two more relevant ones, &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Bash normally treats a shell as one of three kinds: '''login''', '''interactive''', or '''none'''.&lt;br /&gt;
&lt;br /&gt;
Normally, shells that are '''login''' read &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;, shells that are '''interactive''' read &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;, and '''none''' shells read neither.&lt;br /&gt;
&lt;br /&gt;
On Beocat the flow is a little more sane because of our default &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt;:&lt;br /&gt;
We set up a fourth category of shell, '''login+interactive'''. '''login''' and '''login+interactive''' shells read in ''both'' your &amp;lt;code&amp;gt;~/.bash_profile&amp;lt;/code&amp;gt; and your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt;, the difference being that the plain '''login''' shell stops reading &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; at the following statement:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
if [[ $- != *i* ]] ; then&lt;br /&gt;
        # Shell is non-interactive.  Be done now!&lt;br /&gt;
        return&lt;br /&gt;
fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
qsub jobs are '''login''', qrsh jobs are '''login+interactive''', and logging into Beocat in a way that lets you enter commands is '''login+interactive'''. There are very few cases in which you will get '''none'''. For any session that isn't '''interactive''', your sourced files must not output anything to the screen, or they can break scp or sftp file transfers.&lt;br /&gt;
&lt;br /&gt;
If the statements in your &amp;lt;code&amp;gt;~/.bashrc&amp;lt;/code&amp;gt; are ''quiet'' and you want them to run in all shells, put them ''before'' the aforementioned if statement. If they are not ''quiet'', i.e. they output ''anything'' to the screen, you must put them ''after'' it.&lt;br /&gt;
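As a sketch, a ~/.bashrc laid out according to this rule might look like the following; the PATH line and the greeting are made-up examples, not Beocat defaults.&lt;br /&gt;

```shell
# Hypothetical ~/.bashrc fragment (illustrative only)
export PATH=$HOME/bin:$PATH     # quiet: safe to run in all shells

if [[ $- != *i* ]] ; then
        # Shell is non-interactive.  Be done now!
        return
fi

echo "Welcome back!"            # prints: interactive shells only
```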
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
[http://zsh.sourceforge.net/ zsh] is an alternative to bash and tcsh. It tends to support more complex features than either of the other two while using a syntax remarkably similar to bash. Unless specifically noted, when we specify '''Change your shell to bash''', &amp;lt;tt&amp;gt;zsh&amp;lt;/tt&amp;gt; should work as well.&lt;br /&gt;
&lt;br /&gt;
==== Changing Shells ====&lt;br /&gt;
Previously, we gave you the option of using a &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; file to modify your shell. This is no longer supported; if you have issues with your shell, paths, or environment variables, we will ask you to delete your &amp;lt;code&amp;gt;~/.login&amp;lt;/code&amp;gt; file and change your shell via the method below.&lt;br /&gt;
&lt;br /&gt;
You can change your shell via &amp;lt;code&amp;gt;chsh&amp;lt;/code&amp;gt; on either of the headnodes (athena/minerva). This does not need to be re-done if you've already changed it to your preferred shell in the past.&lt;br /&gt;
&lt;br /&gt;
Use whichever of the following three lines is appropriate for your preferred shell:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
/usr/local/bin/chsh -s bash &amp;amp;&amp;amp; bash -l&lt;br /&gt;
/usr/local/bin/chsh -s tcsh &amp;amp;&amp;amp; tcsh -l&lt;br /&gt;
/usr/local/bin/chsh -s zsh &amp;amp;&amp;amp; zsh -l&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Changing your PATH ===&lt;br /&gt;
Typically, you don't have to change your PATH, but it is useful to know what your PATH is and what it does. The PATH is the list of directories which are searched when you type the name of a program. Note that by default the current directory is NOT included in the path, so if you wanted to run a program called MyProgram in the current directory, you could NOT simply type 'MyProgram'; you would instead type &amp;lt;code&amp;gt;./MyProgram&amp;lt;/code&amp;gt; (where the '.' represents the current directory).&lt;br /&gt;
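You can see this behavior for yourself with a throwaway script. This sketch assumes a bash shell; 'MyProgram' and /tmp/pathdemo are made-up names.&lt;br /&gt;

```shell
# The current directory is not searched by default
mkdir -p /tmp/pathdemo
cd /tmp/pathdemo
printf '#!/bin/bash\necho hello\n' > MyProgram
chmod u+x MyProgram   # make it executable (see Ownership and Permissions)
./MyProgram           # works; plain 'MyProgram' would not be found
```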
&lt;br /&gt;
To find your PATH, we need to identify which shell you are using. If you do not know, run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
ps | awk '/sh/ {print $4}'&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== tcsh ====&lt;br /&gt;
You'll need to edit a file in your home directory called .tcshrc using a text editor as shown above, replacing /usr/local/bin with the directory that you want added to your PATH.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
setenv PATH /usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== bash ====&lt;br /&gt;
You'll need to edit a file in your home directory called .bashrc using a text editor as shown above, replacing /usr/local/bin with the directory that you want added to your PATH.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=/usr/local/bin:$PATH&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== zsh ====&lt;br /&gt;
You'll need to edit a file in your home directory called .zshrc using a text editor as shown above, replacing /usr/local/bin with the directory that you want added to your PATH.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
export PATH=&amp;quot;/usr/local/bin:$PATH&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Ownership and Permissions ===&lt;br /&gt;
Every file and directory has a user and group associated with it. You can view ownership information by using the '-l' switch on ls. By default on Beocat, files you create have a user ownership of your username (i.e., your eID) and a group ownership of your username_users. So, if I were logged in as 'myusername' and I had a single file in my home directory called MyProgram, the result of typing 'ls -l' would be something like this:&lt;br /&gt;
 total 0&lt;br /&gt;
 -rwxr-x--- 1 myusername myusername_users 79 May 31  2011 MyProgram&lt;br /&gt;
This tells us several things.&lt;br /&gt;
* The first column ('-rwxr-x---') is permissions (covered below)&lt;br /&gt;
* The second column ('1') is the number of links to this file. You can safely ignore this (unless you're both masochistic and interested in filesystem details)&lt;br /&gt;
* The third column ('myusername') shows the user ownership&lt;br /&gt;
* The fourth column ('myusername_users') shows the group ownership&lt;br /&gt;
* The fifth column ('79') gives the size of the file in bytes&lt;br /&gt;
* The next columns ('May 31  2011'), as you have probably guessed, gives the date the file was last changed&lt;br /&gt;
* The final column ('MyProgram') is the name of the file&lt;br /&gt;
&lt;br /&gt;
So why is this interesting to us? Because whenever things ''don't'' work, it's usually because of file ownership or permissions. Looking at these often gives us some useful diagnostic information.&lt;br /&gt;
&lt;br /&gt;
The permissions field shows us who has permissions to do what with this file. It is always 10 characters. The first character (-) is usually either a '-' for a regular file or a 'd' for a directory. The next 9 characters are broken into three groups of three, with each group showing read (r), write (w), and execute (x) permissions for the owner, group, and world, in that order.&lt;br /&gt;
* The first group (rwx) shows permissions for the owner (myusername). The owner here has read, write, and execute permissions&lt;br /&gt;
* The next group (r-x) shows permissions for the group (myusername_users). The group here has read and execute permissions, but cannot write.&lt;br /&gt;
* The last group (---) shows permissions for the rest of the world. The world has no permissions to read, write, or execute.&lt;br /&gt;
&lt;br /&gt;
When you create a shell script with a text editor, and sometimes when you copy programs to Beocat via SCP, the execute flag is not set. The permissions string may look more like (-rw-r--r--). To change this, you need to give yourself permission to execute this program. This is done with the 'chmod' (change mode) command. 'chmod' can have a long and confusing syntax, but since by far the most common problem is to give yourself execute permissions, here is the command to change that:&lt;br /&gt;
 chmod u+x MyProgram&lt;br /&gt;
This changes the permissions so that the user ('u', i.e., the owner) adds ('+') execute permission ('x').&lt;br /&gt;
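You can watch the permission string change with a throwaway file; the /tmp/permdemo directory and the file name below are arbitrary examples.&lt;br /&gt;

```shell
# Watch the execute bit appear in the 'ls -l' permission string
mkdir -p /tmp/permdemo
cd /tmp/permdemo
touch MyProgram        # a new, empty file: no execute bit
ls -l MyProgram        # first column looks like -rw-r--r--
chmod u+x MyProgram    # add execute permission for the owner
ls -l MyProgram        # first column now begins with -rwx
```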
&lt;br /&gt;
For more complex ownership or permissions changes, please feel free to contact the Beocat staff.&lt;br /&gt;
&lt;br /&gt;
=== Manual (man) pages ===&lt;br /&gt;
Most commands have a complex set of switches that will modify the amount or type of information they display. To find out what switches are available, or how a program expects data, you can use the manual pages by typing &amp;quot;man ''command''&amp;quot;. Using one of the most common Linux commands, take a look at the output of 'man ls'. It shows that it has over 50 switches available, ranging from which files to include, to how to display file sizes, to sort order and more. (I'm not pasting it here, because it's over 200 lines long!) To navigate a 'manpage', use the up-arrow and down-arrow keys. Press 'q' to quit.&lt;br /&gt;
&lt;br /&gt;
=== Pipes and Redirects ===&lt;br /&gt;
Typically a Linux program takes data from the keyboard and outputs data to the screen. In Unix and Linux terminology, the keyboard is the default 'stdin' (pronounced &amp;quot;standard in&amp;quot;) and the screen is the default 'stdout' (pronounced &amp;quot;standard out&amp;quot;). Many times, we want to take data from somewhere else (like a file, or the output of another program) and send it to yet another location. These redirectors are:&lt;br /&gt;
{|&lt;br /&gt;
|cmd &amp;gt; filename&lt;br /&gt;
|Redirect output from cmd to filename ||&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;gt;&amp;gt; filename&lt;br /&gt;
|Redirect output from cmd and append to filename&lt;br /&gt;
|-&lt;br /&gt;
|cmd &amp;lt; filename&lt;br /&gt;
|Redirect input to cmd from filename&lt;br /&gt;
|-&lt;br /&gt;
| cmd1 &amp;amp;#124; cmd2&lt;br /&gt;
| Use the output from cmd1 as the input to cmd2&lt;br /&gt;
|}&lt;br /&gt;
Here is a quick example. Let's say I have thousands of files in a directory, and I want a list of those that end in '.sh'.&lt;br /&gt;
'ls' by itself scrolls so far I can't see even a fraction of them. So, I redirect the output to a file&lt;br /&gt;
 ls &amp;gt; ~/filelist.txt&lt;br /&gt;
That gives me all the files in the current folder and saves them in my home directory in 'filelist.txt'.&lt;br /&gt;
A quick look through the file in my favorite editor tells me this is still going to take too long, so I need another step. The 'grep' program is a commonly-used program to perform pattern matching. The syntax of 'grep' is beyond the scope of this document, but take my word for it that&lt;br /&gt;
 grep '\.sh$'&lt;br /&gt;
will return all lines that end in .sh.&lt;br /&gt;
&lt;br /&gt;
We can now redirect grep's input to come from the file we just created:&lt;br /&gt;
 grep '\.sh$' &amp;lt; ~/filelist.txt&lt;br /&gt;
Great! We now have our list. However, we wanted to save this as filelist.txt, and instead we have another list that we have to copy-and-paste. Instead of redirecting to a file, we'll use the vertical bar '|' (which we often term a &amp;quot;pipe&amp;quot;) to send the output of one command to another.&lt;br /&gt;
 ls | grep '\.sh$' &amp;gt; ~/filelist.txt&lt;br /&gt;
This time the output of 'ls' is ''not'' redirected to a file, but is redirected to the next command (grep).  The output of grep (which is all our .sh files) instead of being sent to the screen is redirected to the file ~/filelist.txt.&lt;br /&gt;
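You can reproduce the whole idea without creating thousands of files; the two sample names below are made up.&lt;br /&gt;

```shell
# Two sample names piped into grep; only the .sh name survives
printf 'run.sh\nnotes.txt\n' | grep '\.sh$'
```

Running this prints only 'run.sh'.&lt;br /&gt;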
&lt;br /&gt;
This example is a very simple demonstration of how pipes and redirects work. Many more examples with complex structures can be found at http://www.ibm.com/developerworks/linux/library/l-lpic1-v3-103-4/index.html&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=156</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=156"/>
		<updated>2015-12-18T16:40:05Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added &amp;quot;credits and accolades&amp;quot; section.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|HPC]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the [http://www.cis.ksu.edu/ Computing and Information Science] department. Beocat is available to any educational researcher in the state of Kansas (and his or her collaborators) without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people, is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of Linux servers coordinated by the [https://arc.liv.ac.uk/trac/SGE SGE] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.cis.ksu.edu/ http://ganglia.beocat.cis.ksu.edu/].&lt;br /&gt;
* A comparatively small [[Hadoop]] cluster&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.cis.ksu.edu/ https://account.beocat.cis.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''beocat#cis-ksu-edu''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to beocat.cis.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use SGE for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running on a standalone Linux machine. Please see our [[SGEBasics]] page for an introduction on how to submit your first job. If you are already familiar with SGE, we also have an [[AdvancedSGE]] page covering fine-tuning options. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series by OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cis.ksu.edu beocat@cis.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This ensures your support request gets entered into our tracker and your questions get answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details (e.g. job IDs, commands run, working directory, program versions, etc.). If the problem is occurring on a headnode, please be sure to include the name of the headnode. This can be found by running the &amp;lt;tt&amp;gt;hostname&amp;lt;/tt&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [http://freenode.net/using_the_network.shtml freenode chat servers] in the channel #beocat. This is useful ''especially'' if you have a quick question, as you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' and/or '''kylehutson''' in your message, and it should grab our attention. IRC is also available from a web browser [[Special:WebChat|here]].&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing resources to Beocat earns you priority access.&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;br /&gt;
&lt;br /&gt;
== Credits and Accolades ==&lt;br /&gt;
See the published credits and other accolades received by Beocat [[Credits|here]].&lt;br /&gt;
&lt;br /&gt;
== Upcoming Events ==&lt;br /&gt;
{{#widget:Google Calendar&lt;br /&gt;
|id=hek6gpeu4bg40tdb2eqdrlfiuo@group.calendar.google.com&lt;br /&gt;
|color=711616&lt;br /&gt;
|view=AGENDA&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Credits&amp;diff=155</id>
		<title>Credits</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Credits&amp;diff=155"/>
		<updated>2015-12-18T16:37:45Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Initial creation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Beocat has been credited in the following published works:&lt;br /&gt;
&lt;br /&gt;
http://pubs.acs.org/doi/abs/10.1021/acsnano.5b03592 - Predicting Adsorption Affinities of Small Molecules on Carbon Nanotubes Using Molecular Dynamics Simulation&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=153</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=153"/>
		<updated>2015-12-11T21:46:45Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added FAQ entry for those leaving K-State&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | beocat.cis.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html PuTTY]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /home || Shared || 350TB total || glusterfs on top of xfs || Good enough for most jobs&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || ext4 || Good for I/O intensive jobs&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory&lt;br /&gt;
is your home directory, and it is fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SGE_JOBID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable&amp;quot; or &amp;quot;nokillable&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (-l killable) jobs are jobs that can be scheduled to these &amp;quot;owned&amp;quot; machines by users outside the group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of that machine submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it makes use of a checkpointing algorithm it may complete even faster. The trade-off is that some applications need a significant amount of runtime and cannot resume from partial output, meaning the job may be restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=12:00:00). Some users still feel this is a hindrance, so we created another flag to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;nokillable&amp;quot; resource ===&lt;br /&gt;
Nokillable (-l nokillable) simply tells the job submission verifier (JSV) not to automatically mark the job killable. If you mark a job both killable and nokillable, killable wins.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking it killable: the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state so it can resume a previous session), marking it killable could cause it to take longer to complete.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts should start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. The warning is there to inform you that the script should begin with a #! line, in most cases #!/bin/tcsh or #!/bin/bash, indicating which program should be used to run the script. When that line is missing, your default login shell (in this case /usr/local/bin/tcsh) is used to execute the script instead; this works in most cases, but may not be what you want. We enforce this rule because we have had problems with jobs submitted with invalid #! lines, which fail and have to be cleaned up manually.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it does not point to an executable file, so the script cannot run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, you'll want to look at the documentation [[SGEBasics|here]] to see how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run qsub for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;-l h_rt=10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
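&lt;br /&gt;
The same request can instead live inside the job script as an SGE directive, so you don't have to retype it at every submission. A minimal sketch (the resource values are examples, and the echo stands in for a real program):&lt;br /&gt;

```shell
#!/bin/bash
#$ -l h_rt=10:00:00   # hard runtime limit of ten hours
#$ -l mem=2G          # per-core memory request (example value)

# A real job would run your program here; we just record a marker
echo "job complete" > job.log
```

Lines beginning with &amp;lt;code&amp;gt;#$&amp;lt;/code&amp;gt; are read by qsub as if they had been given on the command line.&lt;br /&gt;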
&lt;br /&gt;
== Help! My error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In fact, even the administrators cannot change the run-time requirement of a particular job. In extreme circumstances, and depending on the job requirements, we '''may''' be able to manually intervene. This process prevents other users from using the node(s) you are currently using, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Perl doesn't like getting called straight from the scheduler. However, there is a fairly easy workaround: create a shell wrapper script that calls perl with your program.&lt;br /&gt;
&lt;br /&gt;
For instance, I can create a script called runperl.sh that looks like this:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 #$ -l h_rt=1:00:00,mem=1G&lt;br /&gt;
 /usr/bin/perl /path/to/my/perl_program.pl&lt;br /&gt;
&lt;br /&gt;
Make this wrapper program executable:&lt;br /&gt;
 chmod 755 runperl.sh&lt;br /&gt;
&lt;br /&gt;
Then submit it with&lt;br /&gt;
 qsub runperl.sh&lt;br /&gt;
&lt;br /&gt;
Of course, the name of this script isn't important, as long as you change the corresponding chmod and qsub commands.&lt;br /&gt;
&lt;br /&gt;
== Help! When using mpi I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have InfiniBand cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would like to suppress this warning, you may request InfiniBand as a resource when submitting your job: &amp;lt;code&amp;gt;-l infiniband=TRUE&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== When I try to connect to Beocat, I get a message that includes &amp;quot;Host key verification failed.&amp;quot; ==&lt;br /&gt;
There are two ways you might get this.&lt;br /&gt;
&lt;br /&gt;
1) Somebody is trying to intercept your traffic and is impersonating Beocat (this is unlikely, but not impossible)&lt;br /&gt;
&lt;br /&gt;
2) You haven't logged into Beocat since we updated our host keys in May of 2015&lt;br /&gt;
&lt;br /&gt;
The following message was sent to the CIS-BEOCAT listserv in May.&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Hello all,&lt;br /&gt;
&lt;br /&gt;
Another quick note, as you might run into this now, I've created new ssh private/public keys for the headnodes. This means that you will likely see warnings from your ssh/scp/sftp clients that the public key has changed (starting now). This is a good thing, but you will likely have to tell your client in some way that the new key is to be accepted. Here are the public keys, if you would like to verify any of them.&lt;br /&gt;
&lt;br /&gt;
ssh-dss AAAAB3NzaC1kc3MAAACBAOOneUDA3MuU4O9WWGtxCwzjkWn1vx0+b2BZmJyxc99Rpb6mP2Cd3CxvUK5Az+ZP9EoyZM/QnC0dVcckA/qY0RbJaSbLVr0X2SmCDbaBgGRyjVWDzYYPCZYzrtGCAqqijXXGFu/yF1+wZtypI9KUZbimOSQwYBcVJ58sRqLyzqVZAAAAFQCxwPmNASHLDNCU9yftxb/yxOywdwAAAIEA3xJl2vL8UPvy2K55vYnqq0f3ATbPsmRzIggQTOSXaYSB4s25Bbr3TnIrUinf8hWgMx62x3tUaadqEe1ZV0hGWr3sVz1v85OnoyT4mPUNt6znyE/HfgusGBxOXrmxSEUUQoI2N8SoX6suLVE8r4rI+7SvrKLb1nIlE5ZE4EVjjwYAAACALxvBWxnDC49uLE12hC05JFyFl1GMUXsF+SZz5UD/g7h5YDl1VTQnj1dUll8ctbnveQFU8tgoroJerpkK9mlWZ3hWqdHZ39v1lknhUfSsgdX03kZmbU6JPGI3q3kkt5d6EbP/Pty71lHgT7nZuIq2e6Rzy7zA+7nUbYXarGlhRwM=&lt;br /&gt;
root@beocat.cis.ksu.edu&lt;br /&gt;
&lt;br /&gt;
ecdsa-sha2-nistp256&lt;br /&gt;
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN/sgGK9t6p+qlD2msubeRgdfV27LaH9RBwGjNehWJzKEC/2TG7acjnw844txkpmJp2aVBRXXM4jwI0XMsq3tI4=&lt;br /&gt;
root@beocat.cis.ksu.edu&lt;br /&gt;
&lt;br /&gt;
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQtf9WC7om+SNZe6Zoh5J7qEYY44ci9jOC663P6NUdO&lt;br /&gt;
root@beocat.cis.ksu.edu&lt;br /&gt;
&lt;br /&gt;
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCnrnLkdAQ+j0YS/cy2DU7magFr5ldB98hs9AdHxTHHI7ynl1Bf9hZrk4J6cDft7OLmNZ215A2pOASFdj40kaOFsalHjKvjlIWUeZ4tqCHTigl0pHJ/Ysr7teF4v8D8xyQqt3SaqxCtYwsWUKjoIo8UAbS3LMzB9RNjsC8c6iVggpl04M+Q9zwqTqS+ApnQGwlA3JhZNiipnVwa/eI49jfouub2JaZNgUBimTW0yaU8f2dshUrDcMTh6BWgxqboaYk7WbyzF0o4fWTTFG8iGQo15B7sqcibBl1IePb4nfQWMPRubxzy2sg9o/+/h9bEubjQoR4luCPW1k85F/pxJevV&lt;br /&gt;
root@beocat.cis.ksu.edu&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That was then followed up by this message:&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Sorry for the bother, again, but this has come up a few times. To remove the existing ssh identity, with openssh (Linux and OSX), you should run the following (and then try to login to Beocat again):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;ssh-keygen -R beocat.cis.ksu.edu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
PuTTY and some other applications have different methods. Ideally, you will be asked whether you want to accept the new public key. If you are, you will probably want to accept it.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== What happens to my data when I leave K-State? ==&lt;br /&gt;
First of all, although we use eID credentials, we are not tied to K-State's central IT policies that apply to employees or students leaving the university. As long as you keep your eID password current, you still have access to Beocat. Once we deem your data to be &amp;quot;stale&amp;quot;, we will archive your data and disable your account. We have no written policy on when we do this, because we only do so as necessity dictates, but generally speaking, data modified within the last two years will not be marked as stale. If your account is disabled for this reason, you will have to apply for a new account and un-archive your data.&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area. If you already have a project, you can do the following:&lt;br /&gt;
&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change the group to the name assigned by the Beocat admins&lt;br /&gt;
** &amp;lt;tt&amp;gt;chgrp -R $group_name $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the directory writeable and sticky for the group&lt;br /&gt;
** &amp;lt;tt&amp;gt;chmod g+ws $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Change your umask to 002 (there will probably be a setting for it in your file transfer utilities, also). This step needs to be done by all group members.&lt;br /&gt;
** &amp;lt;tt&amp;gt;umask 002&amp;lt;/tt&amp;gt; needs to go above &amp;lt;tt&amp;gt;&amp;lt;nowiki&amp;gt;if [[ $- != *i* ]] ; then&amp;lt;/nowiki&amp;gt;&amp;lt;/tt&amp;gt; in your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file&lt;br /&gt;
&lt;br /&gt;
* Finally, log out and log back in&lt;br /&gt;
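&lt;br /&gt;
Putting those steps together, here is a sketch of the whole setup. The directory name is a placeholder, and the caller's own primary group stands in for the group the Beocat admins assign:&lt;br /&gt;

```shell
# Placeholder names; substitute your real project directory and group
dir="$HOME/shared_project"
group="$(id -gn)"

mkdir -p "$dir"
chgrp -R "$group" "$dir"   # hand the tree to the project group
chmod g+ws "$dir"          # group-writable, setgid so new files inherit the group

umask 002                  # files created from now on are group-writable
touch "$dir/example.txt"
ls -ld "$dir"
```

The setgid bit (the &amp;lt;tt&amp;gt;s&amp;lt;/tt&amp;gt; in the group permissions) is what makes files created later inherit the project group rather than the creator's primary group.&lt;br /&gt;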
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to use: for example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man qsub&amp;lt;/code&amp;gt;'. This will bring up the manual for qsub.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=149</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=FAQ&amp;diff=149"/>
		<updated>2015-10-02T20:55:56Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added SSH hostkey FAQ entry since we've been asked several times.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== How do I connect to Beocat ==&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! colspan=&amp;quot;2&amp;quot; | Connection Settings&lt;br /&gt;
|-&lt;br /&gt;
! Hostname &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | beocat.cis.ksu.edu&lt;br /&gt;
|-&lt;br /&gt;
! Port &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | 22&lt;br /&gt;
|-&lt;br /&gt;
! Username &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID&amp;lt;/tt&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
! Password &lt;br /&gt;
| style=&amp;quot;text-align:right&amp;quot; | &amp;lt;tt&amp;gt;eID Password&amp;lt;/tt&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;2&amp;quot; | Supported Connection Software (Latest Versions of Each)&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;3&amp;quot; | Shell&lt;br /&gt;
|-&lt;br /&gt;
| [http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html PuTTY]&lt;br /&gt;
|-&lt;br /&gt;
| ssh from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;4&amp;quot; | File Transfer Utilities&lt;br /&gt;
|-&lt;br /&gt;
| [https://filezilla-project.org/ Filezilla]&lt;br /&gt;
|-&lt;br /&gt;
| [http://winscp.net/ WinSCP]&lt;br /&gt;
|-&lt;br /&gt;
| scp and sftp from openssh&lt;br /&gt;
|-&lt;br /&gt;
!rowspan=&amp;quot;2&amp;quot; | Combination&lt;br /&gt;
|-&lt;br /&gt;
| [http://mobaxterm.mobatek.net/ MobaXterm]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== How do I compile my programs? ==&lt;br /&gt;
=== Serial programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;ifort&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;gfortran&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;icc&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;gcc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;g++&amp;lt;/tt&amp;gt;&lt;br /&gt;
=== Parallel programs ===&lt;br /&gt;
==== Fortran ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpif77&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpif90&amp;lt;/tt&amp;gt;&lt;br /&gt;
==== C/C++ ====&lt;br /&gt;
&amp;lt;tt&amp;gt;mpicc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;mpic++&amp;lt;/tt&amp;gt;&lt;br /&gt;
== How are the filesystems on Beocat set up? ==&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Mountpoint !! Local / Shared !! Size !! Filesystem !! Advice&lt;br /&gt;
|-&lt;br /&gt;
| /home || Shared || 350TB total || glusterfs on top of xfs || Good enough for most jobs&lt;br /&gt;
|-&lt;br /&gt;
| /tmp || Local || &amp;gt;100GB (varies per node) || ext4 || Good for I/O intensive jobs&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
=== Usage Advice ===&lt;br /&gt;
For most jobs you shouldn't need to worry: your default working directory&lt;br /&gt;
is your home directory, and it is fast enough for most tasks.&lt;br /&gt;
I/O intensive work should use /tmp, but you will need to remember to copy&lt;br /&gt;
your files to and from this partition as part of your job script.  This is made&lt;br /&gt;
easier through the &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; environment variable in your jobs.&lt;br /&gt;
&lt;br /&gt;
Example usage of &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; in a job script&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
#copy our input file to $TMPDIR to make processing faster&lt;br /&gt;
cp ~/experiments/input.data $TMPDIR&lt;br /&gt;
&lt;br /&gt;
#use the input file we copied over to the local system&lt;br /&gt;
#generate the output file in $TMPDIR as well&lt;br /&gt;
~/bin/my_program --input-file=$TMPDIR/input.data --output-file=$TMPDIR/output.data&lt;br /&gt;
&lt;br /&gt;
#copy the results back from $TMPDIR&lt;br /&gt;
cp $TMPDIR/output.data ~/experiments/results.$SGE_JOBID&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You need to remember to copy over your data from &amp;lt;tt&amp;gt;$TMPDIR&amp;lt;/tt&amp;gt; as part of your job.&lt;br /&gt;
That directory and its contents are deleted when the job is complete.&lt;br /&gt;
&lt;br /&gt;
== What is &amp;quot;killable&amp;quot; or &amp;quot;nokillable&amp;quot; ==&lt;br /&gt;
On Beocat, some of the machines have been purchased by specific users and/or groups. These users and/or groups get guaranteed access to their machines at any point in time. Often, these machines sit idle because the owners have no need for them at the time. This would be a significant waste of computational power if there were no other way to make use of the computing cycles.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;killable&amp;quot; resource ===&lt;br /&gt;
Killable (-l killable) jobs are jobs that can be scheduled to these &amp;quot;owned&amp;quot; machines by users outside the group of owners. If a &amp;quot;killable&amp;quot; job starts on one of these owned machines and the owner of that machine submits a job, the &amp;quot;killable&amp;quot; job will be returned to the queue (killed off, as it were) and restarted at some future point in time. The job will still complete eventually, and if it makes use of a checkpointing algorithm it may complete even faster. The trade-off is that some applications need a significant amount of runtime and cannot resume from partial output, meaning the job may be restarted over and over again, never reaching the finish line. As such, we only auto-enable &amp;quot;killable&amp;quot; for relatively short jobs (&amp;lt;=12:00:00). Some users still feel this is a hindrance, so we created another flag to tell us not to automatically mark short jobs &amp;quot;killable&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Enter the &amp;quot;nokillable&amp;quot; resource ===&lt;br /&gt;
Nokillable (-l nokillable) simply tells the job submission verifier (JSV) not to automatically mark the job killable. If you mark a job both killable and nokillable, killable wins.&lt;br /&gt;
&lt;br /&gt;
=== The trade-off ===&lt;br /&gt;
If a job is marked killable, there is a non-trivial number of additional nodes the job can run on. If your job checkpoints itself, or is relatively short, there should be no downside to marking it killable: the job will probably start sooner. If your job is long-running and doesn't checkpoint (save its state so it can resume a previous session), marking it killable could cause it to take longer to complete.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;Warning To stay compliant with standard unix behavior, there should be a valid #! line in your script i.e. #!/bin/tcsh&amp;quot; ==&lt;br /&gt;
Job submission scripts should start with a line similar to '&amp;lt;code&amp;gt;#!/bin/bash&amp;lt;/code&amp;gt;'. The warning is there to inform you that the script should begin with a #! line, in most cases #!/bin/tcsh or #!/bin/bash, indicating which program should be used to run the script. When that line is missing, your default login shell (in this case /usr/local/bin/tcsh) is used to execute the script instead; this works in most cases, but may not be what you want. We enforce this rule because we have had problems with jobs submitted with invalid #! lines, which fail and have to be cleaned up manually.&lt;br /&gt;
&lt;br /&gt;
== Help! When I submit my jobs I get &amp;quot;A #! line exists, but it is not pointing to an executable. Please fix. Job not submitted.&amp;quot; ==&lt;br /&gt;
Like the above, this error says you need a #!/bin/bash or similar line in your job script. In this case the line exists, but it does not point to an executable file, so the script cannot run. Most likely you wanted #!/bin/bash instead of something else.&lt;br /&gt;
&lt;br /&gt;
== Help! My jobs keep dying after 1 hour and I don't know why ==&lt;br /&gt;
Beocat has a default runtime limit of 1 hour. If you need more than that, or need more than 1 GB of memory per core, see the documentation [[SGEBasics|here]] for how to request it.&lt;br /&gt;
&lt;br /&gt;
In short, when you run qsub for your job, you'll want to put something along the lines of '&amp;lt;code&amp;gt;-l h_rt=10:00:00&amp;lt;/code&amp;gt;' before the job script if you want your job to run for 10 hours.&lt;br /&gt;
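These resource requests can also live inside the job script itself on '#$' lines, which qsub reads at submission time. A minimal sketch (the script name and resource values below are purely illustrative):&lt;br /&gt;

```shell
# Illustrative only: build a minimal job script whose '#$' lines carry the
# resource requests. To the shell they are ordinary comments, so the script
# also runs standalone; qsub reads them at submission time (qsub myjob.sh).
printf '%s\n' \
    '#!/bin/bash' \
    '#$ -l h_rt=10:00:00,mem=2G' \
    'echo "job running on $(hostname)"' > myjob.sh
chmod +x myjob.sh
./myjob.sh
```

Adjust the h_rt and mem values to what your job actually needs.&lt;br /&gt;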
&lt;br /&gt;
== Help! My error file has &amp;quot;Warning: no access to tty&amp;quot; ==&lt;br /&gt;
The warning message &amp;quot;Warning: no access to tty (Bad file descriptor)&amp;quot; is safe to ignore. It typically happens with the tcsh shell.&lt;br /&gt;
&lt;br /&gt;
== Help! My job isn't going to finish in the time I specified. Can I change the time requirement? ==&lt;br /&gt;
Generally speaking, no.&lt;br /&gt;
&lt;br /&gt;
Jobs are scheduled based on execution times (among other things). If it were easy to change your time requirement, one could submit a job with a 15-minute run-time, get it scheduled quickly, and then say &amp;quot;whoops - I meant 15 weeks&amp;quot;, effectively gaming the job scheduler. In fact, even the administrators cannot change the run-time requirement of a particular job. In extreme circumstances, and depending on the job requirements, we '''may''' be able to intervene manually. This process prevents other users from using the node(s) you are currently on, so such requests are not routinely approved. Contact Beocat support (below) if you feel your circumstances warrant special consideration.&lt;br /&gt;
&lt;br /&gt;
== Help! My perl job runs fine on the head node, but only runs for a few seconds and then quits when submitted to the queue. ==&lt;br /&gt;
Perl doesn't like getting called straight from the scheduler. However, there is a fairly easy workaround. Create a shell wrapper script that calls perl and its program.&lt;br /&gt;
&lt;br /&gt;
For instance, I can create a script called runperl.sh that looks like this:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 #$ -l h_rt=1:00:00,mem=1G&lt;br /&gt;
 /usr/bin/perl /path/to/my/perl_program.pl&lt;br /&gt;
&lt;br /&gt;
Make this wrapper program executable:&lt;br /&gt;
 chmod 755 runperl.sh&lt;br /&gt;
&lt;br /&gt;
Then submit it with&lt;br /&gt;
 qsub runperl.sh&lt;br /&gt;
&lt;br /&gt;
Of course, the name of this script isn't important, as long as you change the corresponding chmod and qsub commands.&lt;br /&gt;
&lt;br /&gt;
== Help! When using MPI I get 'CMA: no RDMA devices found' or 'A high-performance Open MPI point-to-point messaging module was unable to find any relevant network interfaces' ==&lt;br /&gt;
This message simply means that some, but not all, of the nodes the job is running on have infiniband cards. The job will still run, but will not use the fastest interconnect we have available. This may or may not be an issue, depending on how message-heavy your job is. If you would like to not see this warning, you may request infiniband as a resource when submitting your job: &amp;lt;code&amp;gt;-l infiniband=TRUE&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== When I try to connect to Beocat, I get a message that includes &amp;quot;Host key verification failed.&amp;quot; ==&lt;br /&gt;
There are two ways you might get this.&lt;br /&gt;
&lt;br /&gt;
1) Somebody is trying to intercept your traffic and is impersonating Beocat (this is unlikely, but not impossible)&lt;br /&gt;
&lt;br /&gt;
2) You haven't logged into Beocat since we updated our host keys in May of 2015&lt;br /&gt;
&lt;br /&gt;
The following message was sent to the CIS-BEOCAT listserv in May.&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Hello all,&lt;br /&gt;
&lt;br /&gt;
Another quick note, as you might run into this now, I've created new ssh private/public keys for the headnodes. This means that you will likely see warnings from your ssh/scp/sftp clients that the public key has changed (starting now). This is a good thing, but you will likely have to tell your client in some way that the new key is to be accepted. Here are the public keys, if you would like to verify any of them.&lt;br /&gt;
&lt;br /&gt;
ssh-dss AAAAB3NzaC1kc3MAAACBAOOneUDA3MuU4O9WWGtxCwzjkWn1vx0+b2BZmJyxc99Rpb6mP2Cd3CxvUK5Az+ZP9EoyZM/QnC0dVcckA/qY0RbJaSbLVr0X2SmCDbaBgGRyjVWDzYYPCZYzrtGCAqqijXXGFu/yF1+wZtypI9KUZbimOSQwYBcVJ58sRqLyzqVZAAAAFQCxwPmNASHLDNCU9yftxb/yxOywdwAAAIEA3xJl2vL8UPvy2K55vYnqq0f3ATbPsmRzIggQTOSXaYSB4s25Bbr3TnIrUinf8hWgMx62x3tUaadqEe1ZV0hGWr3sVz1v85OnoyT4mPUNt6znyE/HfgusGBxOXrmxSEUUQoI2N8SoX6suLVE8r4rI+7SvrKLb1nIlE5ZE4EVjjwYAAACALxvBWxnDC49uLE12hC05JFyFl1GMUXsF+SZz5UD/g7h5YDl1VTQnj1dUll8ctbnveQFU8tgoroJerpkK9mlWZ3hWqdHZ39v1lknhUfSsgdX03kZmbU6JPGI3q3kkt5d6EbP/Pty71lHgT7nZuIq2e6Rzy7zA+7nUbYXarGlhRwM=&lt;br /&gt;
root@beocat.cis.ksu.edu&lt;br /&gt;
&lt;br /&gt;
ecdsa-sha2-nistp256&lt;br /&gt;
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN/sgGK9t6p+qlD2msubeRgdfV27LaH9RBwGjNehWJzKEC/2TG7acjnw844txkpmJp2aVBRXXM4jwI0XMsq3tI4=&lt;br /&gt;
root@beocat.cis.ksu.edu&lt;br /&gt;
&lt;br /&gt;
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQtf9WC7om+SNZe6Zoh5J7qEYY44ci9jOC663P6NUdO&lt;br /&gt;
root@beocat.cis.ksu.edu&lt;br /&gt;
&lt;br /&gt;
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCnrnLkdAQ+j0YS/cy2DU7magFr5ldB98hs9AdHxTHHI7ynl1Bf9hZrk4J6cDft7OLmNZ215A2pOASFdj40kaOFsalHjKvjlIWUeZ4tqCHTigl0pHJ/Ysr7teF4v8D8xyQqt3SaqxCtYwsWUKjoIo8UAbS3LMzB9RNjsC8c6iVggpl04M+Q9zwqTqS+ApnQGwlA3JhZNiipnVwa/eI49jfouub2JaZNgUBimTW0yaU8f2dshUrDcMTh6BWgxqboaYk7WbyzF0o4fWTTFG8iGQo15B7sqcibBl1IePb4nfQWMPRubxzy2sg9o/+/h9bEubjQoR4luCPW1k85F/pxJevV&lt;br /&gt;
root@beocat.cis.ksu.edu&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
That was then followed up by this message:&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
Sorry for the bother, again, but this has come up a few times. To remove the existing ssh identity, with openssh (Linux and OSX), you should run the following (and then try to login to Beocat again):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;ssh-keygen -R beocat.cis.ksu.edu&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
PuTTY and some other applications have different methods. Ideally, you will be asked if you want to accept the new public key. If you are, you will probably want to accept.&lt;br /&gt;
&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Common Storage For Projects ==&lt;br /&gt;
Sometimes it is useful for groups of people to have a common storage area. If you already have a project, you can do the following:&lt;br /&gt;
&lt;br /&gt;
* Create a directory in one of the home directories of someone in your group, ideally the project owner's.&lt;br /&gt;
** &amp;lt;tt&amp;gt;mkdir $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change the group to the name assigned by the Beocat admins&lt;br /&gt;
** &amp;lt;tt&amp;gt;chgrp -R $group_name $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Set the directory writeable and sticky for the group&lt;br /&gt;
** &amp;lt;tt&amp;gt;chmod g+ws $directory&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Change your umask to 002 (there will probably be a setting for it in your file transfer utilities, also). This step needs to be done by all group members.&lt;br /&gt;
** &amp;lt;tt&amp;gt;umask 002&amp;lt;/tt&amp;gt; needs to go above &amp;lt;tt&amp;gt;&amp;lt;nowiki&amp;gt;if [[ $- != *i* ]] ; then&amp;lt;/nowiki&amp;gt;&amp;lt;/tt&amp;gt; in your &amp;lt;tt&amp;gt;~/.bashrc&amp;lt;/tt&amp;gt; file&lt;br /&gt;
&lt;br /&gt;
* Finally logout and log back in&lt;br /&gt;
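The steps above can be sketched as a single shell session. This is a sketch only: the directory name is made up, and the current user's own group stands in for the group name the Beocat admins assigned:&lt;br /&gt;

```shell
# Sketch of the shared-directory setup above. PROJECT_GROUP and the directory
# name are placeholders; use the group the Beocat admins assigned to you.
PROJECT_GROUP=$(id -gn)          # placeholder: your assigned project group
DIR=$(mktemp -d)/shared_data     # placeholder: normally inside a home directory

mkdir -p "$DIR"
chgrp -R "$PROJECT_GROUP" "$DIR" # hand ownership to the project group
chmod g+ws "$DIR"                # group-writable + setgid: new files inherit the group
umask 002                        # files you create become group-writable

touch "$DIR/example.txt"
ls -ld "$DIR" "$DIR/example.txt"
```

The setgid bit (the 's' in the directory's permissions) is what makes files created by any group member land in the shared group automatically.&lt;br /&gt;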
&lt;br /&gt;
== How do I get more help? ==&lt;br /&gt;
There are many sources of help for most Linux systems.&lt;br /&gt;
&lt;br /&gt;
=== Unix man pages ===&lt;br /&gt;
Linux provides man pages (short for manual pages). These are simple to call. For example, if you need information on submitting jobs to Beocat, you can type '&amp;lt;code&amp;gt;man qsub&amp;lt;/code&amp;gt;'; this will bring up the manual for qsub.&lt;br /&gt;
&lt;br /&gt;
=== GNU info system ===&lt;br /&gt;
Not all applications have &amp;quot;man pages.&amp;quot; Most of the rest have what they call info pages. For example, if you needed information on finding a file you could use '&amp;lt;code&amp;gt;info find&amp;lt;/code&amp;gt;'.&lt;br /&gt;
&lt;br /&gt;
=== This documentation ===&lt;br /&gt;
This documentation is very thoroughly researched, and has been painstakingly assembled for your benefit. Please use it.&lt;br /&gt;
&lt;br /&gt;
=== Contact support ===&lt;br /&gt;
Support can be contacted [mailto:beocat@cis.ksu.edu here]. Please include detailed information about your problem, including the job number, applications you are trying to run, and the current directory that you are in.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Compute_Nodes&amp;diff=138</id>
		<title>Compute Nodes</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Compute_Nodes&amp;diff=138"/>
		<updated>2015-06-24T16:21:05Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Deleted Paladins since they were removed from service.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We currently have three classes of compute nodes. Starting with the oldest, we have:&lt;br /&gt;
&lt;br /&gt;
== Mages ==&lt;br /&gt;
[1,3,5,7,9,11] - Why are these numbered like this? There are actually 12 physical machines; however, each pair (1 and 2, 3 and 4, etc.) is tied together with external [http://en.wikipedia.org/wiki/Intel_QuickPath_Interconnect QPI], making them appear as a single node.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|Processors&lt;br /&gt;
|8x 10-Core Xeon E7-8870&lt;br /&gt;
|-&lt;br /&gt;
|Ram&lt;br /&gt;
|1024GB&lt;br /&gt;
|-&lt;br /&gt;
|Hard Drive&lt;br /&gt;
|2x 300GB Hitachi 10,000rpm SAS&lt;br /&gt;
|-&lt;br /&gt;
|NIC 0&lt;br /&gt;
|Broadcom NetXtreme II BCM5709&lt;br /&gt;
|-&lt;br /&gt;
|NIC 1&lt;br /&gt;
|Broadcom NetXtreme II BCM5709&lt;br /&gt;
|-&lt;br /&gt;
|NIC 2&lt;br /&gt;
|Broadcom NetXtreme II BCM5709&lt;br /&gt;
|-&lt;br /&gt;
|NIC 3&lt;br /&gt;
|Broadcom NetXtreme II BCM5709&lt;br /&gt;
|-&lt;br /&gt;
| 10GbE and QDR Infiniband&lt;br /&gt;
|Mellanox Technologies MT27500 Family [ConnectX-3]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Elves ==&lt;br /&gt;
[1-42]&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|Processors&lt;br /&gt;
|2x 8-Core Xeon E5-2690&lt;br /&gt;
|-&lt;br /&gt;
|Ram&lt;br /&gt;
|64GB&lt;br /&gt;
|-&lt;br /&gt;
|Hard Drive&lt;br /&gt;
|1x 250GB 7,200 RPM SATA&lt;br /&gt;
|-&lt;br /&gt;
|NICs&lt;br /&gt;
|4x Intel I350&lt;br /&gt;
|-&lt;br /&gt;
| 10GbE and QDR Infiniband&lt;br /&gt;
|Mellanox Technologies MT27500 Family [ConnectX-3]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[43-56]&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|Processors&lt;br /&gt;
|2x 8-Core Xeon E5-2690&lt;br /&gt;
|-&lt;br /&gt;
|Ram&lt;br /&gt;
|64GB&lt;br /&gt;
|-&lt;br /&gt;
|Hard Drive&lt;br /&gt;
|1x 250GB 7,200 RPM SATA&lt;br /&gt;
|-&lt;br /&gt;
|NICs&lt;br /&gt;
|4x Intel I350&lt;br /&gt;
|-&lt;br /&gt;
|10GbE and QDR Infiniband&lt;br /&gt;
|Mellanox Technologies MT27500 Family [ConnectX-3]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[57-72]&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|Processors&lt;br /&gt;
|2x 10-Core Xeon E5-2690 v2&lt;br /&gt;
|-&lt;br /&gt;
|Ram&lt;br /&gt;
|96GB&lt;br /&gt;
|-&lt;br /&gt;
|Hard Drive&lt;br /&gt;
|1x 250GB 7,200 RPM SATA&lt;br /&gt;
|-&lt;br /&gt;
|NICs&lt;br /&gt;
|4x Intel I350&lt;br /&gt;
|-&lt;br /&gt;
|10GbE and QDR Infiniband&lt;br /&gt;
|Mellanox Technologies MT27500 Family [ConnectX-3]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[73-76]&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|Processors&lt;br /&gt;
|2x 10-Core Xeon E5-2690 v2&lt;br /&gt;
|-&lt;br /&gt;
|Ram&lt;br /&gt;
|384GB&lt;br /&gt;
|-&lt;br /&gt;
|Hard Drive&lt;br /&gt;
|1x 250GB 7,200 RPM SATA&lt;br /&gt;
|-&lt;br /&gt;
|NICs&lt;br /&gt;
|4x Intel I350&lt;br /&gt;
|-&lt;br /&gt;
|10GbE and QDR Infiniband&lt;br /&gt;
|Mellanox Technologies MT27500 Family [ConnectX-3]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Heroes ==&lt;br /&gt;
[1-36]&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| Processors&lt;br /&gt;
| 2x 12-Core Xeon E5-2680 v3&lt;br /&gt;
|-&lt;br /&gt;
| Ram&lt;br /&gt;
| 128GB&lt;br /&gt;
|-&lt;br /&gt;
| Hard Drive&lt;br /&gt;
|1x 1TB 7,200 RPM SATA&lt;br /&gt;
|-&lt;br /&gt;
|NICs&lt;br /&gt;
|2x Intel I350&lt;br /&gt;
|-&lt;br /&gt;
|40GbE&lt;br /&gt;
| Mellanox Technologies MT27500 Family [ConnectX-3]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[37-44]&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
| Processors&lt;br /&gt;
| 2x 12-Core Xeon E5-2680 v3&lt;br /&gt;
|-&lt;br /&gt;
| Ram&lt;br /&gt;
| 512GB&lt;br /&gt;
|-&lt;br /&gt;
| Hard Drive&lt;br /&gt;
|1x 1TB 7,200 RPM SATA&lt;br /&gt;
|-&lt;br /&gt;
|NICs&lt;br /&gt;
|2x Intel I350&lt;br /&gt;
|-&lt;br /&gt;
|40GbE&lt;br /&gt;
| Mellanox Technologies MT27500 Family [ConnectX-3]&lt;br /&gt;
|-&lt;br /&gt;
| Additional Notes&lt;br /&gt;
| 2x Xeon Phi coprocessors&lt;br /&gt;
|}&lt;br /&gt;
[[Category:Information]]&lt;br /&gt;
[[Category:Hardware]]&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=90</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=90"/>
		<updated>2014-07-17T20:27:53Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added link to SiPE.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|HPC]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the [http://www.cis.ksu.edu/ Computing and Information Science] department. Beocat is available to any educational researcher in the state of Kansas without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people, is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of Linux servers coordinated by the [https://arc.liv.ac.uk/trac/SGE SGE] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.cis.ksu.edu/ http://ganglia.beocat.cis.ksu.edu/].&lt;br /&gt;
* A comparatively small [[Hadoop]] cluster&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.cis.ksu.edu/ https://account.beocat.cis.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''beocat#cis-ksu-edu''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to beocat.cis.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use SGE for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different than running on a standalone Linux machine. Please see our [[SGEBasics]] page for an introduction on how to submit your first job. If you are already familiar with SGE, we also have an [[AdvancedSGE]] page covering fine-tuning. If you're new to HPC, we highly recommend the [http://www.oscer.ou.edu/education.php Supercomputing in Plain English (SiPE)] series from OU. In particular, the older course's streaming videos are an excellent resource, even if you do not complete the exercises.&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips and Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started our [[Training Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cis.ksu.edu beocat@cis.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This ensures your support request gets entered into our tracker and gets your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any details pertinent to your problem (e.g. job IDs, commands run, working directory, program versions, etc.).&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [http://freenode.net/using_the_network.shtml freenode chat servers] in the channel #beocat. This is useful ''especially'' if you have a quick question; you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' and/or '''kylehutson''' in your message, and it should grab our attention. IRC is also available from a web browser [[Special:WebChat|here.]]&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access? ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing resources to Beocat will give you priority access.&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]].&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Tips_and_Tricks&amp;diff=85</id>
		<title>Tips and Tricks</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Tips_and_Tricks&amp;diff=85"/>
		<updated>2014-07-09T21:44:48Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added &amp;quot;Programming for Performance&amp;quot; section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Beocat has a number of tools to make your work easier, some which you may not know about. This is a simple list of these programs and some basic usage scenarios.&lt;br /&gt;
&lt;br /&gt;
== Submitting your job to run the fastest ==&lt;br /&gt;
=== Size your jobs to use the fastest nodes ===&lt;br /&gt;
==== Specify the proper number of cores ====&lt;br /&gt;
Neither Beocat nor any other computer or cluster can make your job run on more than one core at a time if your program isn't designed to take advantage of multiple cores. Many people think &amp;quot;I can run this on 40 cores and it will run 40 times faster&amp;quot;. This isn't true.&lt;br /&gt;
&lt;br /&gt;
While we have many programs that are designed to take advantage of multiple cores, do not assume this is the case for yours.&lt;br /&gt;
&lt;br /&gt;
==== Optimize your jobs for speed, not for number of cores ====&lt;br /&gt;
It seems that many people pick an arbitrarily large number of cores for their jobs; 20 seems to be a common choice. However, some of our fastest nodes have 16 cores. If your job will fit on an Elf (16 cores, 4 GB RAM/core, 64 GB RAM total), it will quite likely run faster with 16 cores there than with more cores on slower nodes.&lt;br /&gt;
&lt;br /&gt;
==== Don't request resources you don't need ====&lt;br /&gt;
The most common culprit here is people specifying they need infiniband when the job is run on a single node. This limits the scheduling such that a perfectly good node for your job may be idle while your job is still waiting.&lt;br /&gt;
&lt;br /&gt;
== Programs that make using Beocat easier ==&lt;br /&gt;
=== [[wikipedia:nmon|nmon]] ===&lt;br /&gt;
The name is short for &amp;quot;Nigel's Monitor&amp;quot;; it's a program written by Nigel Griffiths from IBM.&lt;br /&gt;
=== [http://www.ibm.com/developerworks/aix/library/au-nmon_analyser/ nmon analyser] ===&lt;br /&gt;
A tool for producing graphs and spreadsheets from output generated by nmon.&lt;br /&gt;
=== [http://hisham.hm/htop/ htop] ===&lt;br /&gt;
A prettier, easier to use top. Shows CPU and memory usage in an easy-to-digest format.&lt;br /&gt;
=== [http://www.gnu.org/software/screen/ screen] ===&lt;br /&gt;
A sort of terminal multiplexer, allows you to run many terminal programs at once without mixing them up. Also allows you to disconnect and reconnect sessions. There is a good explanation of how to use screen at [http://www.mattcutts.com/blog/a-quick-tutorial-on-screen/ http://www.mattcutts.com/blog/a-quick-tutorial-on-screen/].&lt;br /&gt;
=== Ganglia ===&lt;br /&gt;
The web-based load monitoring tool for the cluster: [http://ganglia.beocat.cis.ksu.edu http://ganglia.beocat.cis.ksu.edu]. From there, you can see how busy Beocat is.&lt;br /&gt;
=== [http://dag.wieers.com/home-made/dstat/ dstat] ===&lt;br /&gt;
A very detailed performance analyzer.&lt;br /&gt;
&lt;br /&gt;
== Increasing file write performance ==&lt;br /&gt;
Credit for this goes to [http://moo.nac.uci.edu/~hjm/bduc/BDUC_USER_HOWTO.html#writeperfongl http://moo.nac.uci.edu/~hjm/bduc/BDUC_USER_HOWTO.html#writeperfongl]&lt;br /&gt;
&lt;br /&gt;
=== Use gzip ===&lt;br /&gt;
If you have written your own code or are using an app that writes zillions of tiny chunks of data to STDOUT, and you are storing the results on Beocat, you should consider passing the output through gzip to consolidate the writes into a continuous stream. If you don’t do this, each write will be treated as a separate IO event and write performance will suffer.&lt;br /&gt;
&lt;br /&gt;
If, however, STDOUT is passed through gzip, the wallclock runtime often drops below the usual runtime, and you end up with an output file that is already compressed to about 1/5 the usual size.&lt;br /&gt;
&lt;br /&gt;
Here’s how to do it:&lt;br /&gt;
&lt;br /&gt;
 someapp --opt1 --opt2 --input=/path/to/input_file | gzip &amp;gt; /path/to/output_file&lt;br /&gt;
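As a runnable stand-in (seq here plays the role of the hypothetical someapp above), the same pattern looks like this:&lt;br /&gt;

```shell
# seq stands in for an app that writes many small lines to STDOUT; piping
# through gzip turns zillions of tiny writes into one compressed stream.
OUT=$(mktemp -u).gz
seq 1 100000 | gzip > "$OUT"

gzip -t "$OUT"    # verify the compressed stream is intact
ls -l "$OUT"      # one compressed file instead of many tiny IO events
```

The downstream tool can usually read the compressed file directly, or via gzip -dc in another pipe.&lt;br /&gt;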
=== Use named pipes ===&lt;br /&gt;
Named pipes are special files that don't actually write to the filesystem and can be used to communicate between processes. Since these pipes are in memory rather than on disk, they can be used to buffer writes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create the named pipe&lt;br /&gt;
mkfifo /path/to/MyNamedPipe&lt;br /&gt;
&lt;br /&gt;
# Write some data to it&lt;br /&gt;
MyProgram --infile=/path/to/InputData1 --outfile=/path/to/MyNamedPipe &amp;amp;&lt;br /&gt;
MyOtherProgram &amp;lt; /path/to/InputData2 &amp;gt; /path/to/MyNamedPipe&lt;br /&gt;
&lt;br /&gt;
# Extract the output&lt;br /&gt;
cat &amp;lt; /path/to/MyNamedPipe &amp;gt; $HOME/MyOutput&lt;br /&gt;
## OR, we could compress the output&lt;br /&gt;
gzip &amp;lt; /path/to/MyNamedPipe &amp;gt; $HOME/MyOutput.gz&lt;br /&gt;
&lt;br /&gt;
# Delete the named pipe like you would a file&lt;br /&gt;
rm /path/to/MyNamedPipe&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
One cautionary word: unlike normal files, named pipes cannot be used between machines; they only work among processes running on the same machine. So, if you're running an MPI job entirely on one node, you can set up a named pipe, do all your writes to that pipe, and flush it at the end. But if you're running a multi-node MPI job and your named pipe is on a shared filesystem (like $HOME), each process will need to flush its named pipe to a regular file before the job quits.&lt;br /&gt;
=== Use one big file instead of many small ones ===&lt;br /&gt;
This may seem to be a non-issue, but it's a performance problem we've seen on Beocat many times. I love the term coined by UCI at the link above: they call making many small files &amp;quot;Zillions Of Tiny files (ZOTfiles)&amp;quot;. Using files like this is an inefficient use of our shared resources. A tiny file by itself is no more inefficient than a huge one; if you have only 100 bytes to store, store it in a single file. However, the problems start compounding when there are many of them. Because of the way data is stored on disk, 10 MB stored in ZOTfiles of 100 bytes each can easily take up not 10 MB, but more than 400 MB - 40 times more space. Worse, data stored in this manner makes many operations very slow: instead of looking up 1 directory entry, the OS has to look up 100,000. This means 100,000 times more disk head movement, with a concomitant decrease in performance and disk lifetime. We have had Beocat users with several million files of less than 1 kB each; just creating a directory listing with ls would take nearly half an hour. Not only is that inefficient for you, but it also degrades the performance of everybody using that filesystem and degrades our backups as well.&lt;br /&gt;
&lt;br /&gt;
Please use large files instead of ZOTfiles any chance you can!&lt;br /&gt;
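If a workflow has already produced a pile of tiny files, one mitigation (a sketch using standard tar and gzip; the 'results' directory below is fabricated for the demonstration) is to consolidate them into a single compressed archive:&lt;br /&gt;

```shell
# Create 500 tiny files to stand in for an existing pile of ZOTfiles.
WORK=$(mktemp -d)
mkdir "$WORK/results"
for i in $(seq 1 500); do echo "datum $i" > "$WORK/results/part_$i.txt"; done

# Consolidate them into one big file and remove the originals.
tar -C "$WORK" -czf "$WORK/results.tar.gz" results
rm -r "$WORK/results"

# Individual members can still be read without unpacking everything:
tar -xzOf "$WORK/results.tar.gz" results/part_42.txt
```

One directory entry and one stream of large reads replaces 500 lookups and 500 tiny IO events.&lt;br /&gt;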
== Programming for Performance ==&lt;br /&gt;
=== BLAS ===&lt;br /&gt;
BLAS (Basic Linear Algebra Subroutines) is a standard set of linear algebra subroutines. The standard was set so that software could be written against a standardized library interface, and optimized libraries could be &amp;quot;plug-and-play.&amp;quot; There are lots of implementations of the BLAS libraries, with the most common ones being [http://software.intel.com/en-us/intel-mkl/ Intel's MKL] and [http://developer.amd.com/tools/cpu/acml/pages/default.aspx AMD's ACML]. We have AMD's ACML installed and used by default, as it is free software and doesn't require a paid license. Intel's MKL requires per-user licensing, and thus we haven't paid for it.&lt;br /&gt;
&lt;br /&gt;
==== Beocat BLAS Libraries ====&lt;br /&gt;
Since BLAS is a modular standard, we have installed a few (free) BLAS libraries.&lt;br /&gt;
&lt;br /&gt;
* The BLAS reference library: An unoptimized reference library&lt;br /&gt;
* AMD's ACML: Optimized BLAS library for AMD systems&lt;br /&gt;
* OpenBLAS: Optimized BLAS library for some AMD, and most Intel, systems&lt;br /&gt;
The default BLAS library is ACML.&lt;br /&gt;
&lt;br /&gt;
==== Using a different BLAS library ====&lt;br /&gt;
If you want or need to use a different BLAS library, list the available libraries with 'eselect blas list'&lt;br /&gt;
&lt;br /&gt;
 $ eselect blas list&lt;br /&gt;
 Available providers for blas:&lt;br /&gt;
  [1]   acml64-ifort *&lt;br /&gt;
  [2]   acml64-ifort-fma4&lt;br /&gt;
  [3]   acml64-ifort-fma4-openmp&lt;br /&gt;
  [4]   acml64-ifort-openmp&lt;br /&gt;
  [5]   openblas-openmp&lt;br /&gt;
  [6]   reference&lt;br /&gt;
To change your default BLAS version you need to determine which shell you are using:&lt;br /&gt;
&lt;br /&gt;
===== CSH or TCSH =====&lt;br /&gt;
Run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
eselect blas script --csh openblas-openmp&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here, openblas-openmp is replaced with the name of your preferred BLAS library. Put the output of that script in your job script, or in your ~/.cshrc file.&lt;br /&gt;
&lt;br /&gt;
===== SH, BASH, or ZSH =====&lt;br /&gt;
Run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
eselect blas script --sh openblas-openmp&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here, openblas-openmp is replaced with the name of your preferred BLAS library. Put the output of that script in your job script, or in your ~/.bashrc or ~/.zshrc file.&lt;br /&gt;
=== LAPACK ===&lt;br /&gt;
LAPACK (Linear Algebra PACKage) is a standard set of linear algebra subroutines. Like BLAS, these are very optimized, but LAPACK handles a different set of functions. The standard was set so that software could be written against a standardized library interface, and optimized libraries could be &amp;quot;plug-and-play.&amp;quot; There are lots of implementations of the LAPACK libraries, with the most common ones being [http://software.intel.com/en-us/intel-mkl/ Intel's MKL] and [http://developer.amd.com/tools/cpu/acml/pages/default.aspx AMD's ACML]. We have AMD's ACML installed and used by default, as it is free software and doesn't require a paid license. Intel's MKL requires per-user licensing, and thus we haven't paid for it.&lt;br /&gt;
&lt;br /&gt;
==== Beocat LAPACK Libraries ====&lt;br /&gt;
Since LAPACK is a modular standard, we have installed a few (free) LAPACK libraries.&lt;br /&gt;
* [http://www.netlib.org/lapack/ The LAPACK reference library]: An unoptimized reference library&lt;br /&gt;
* [http://developer.amd.com/tools/cpu/acml/pages/default.aspx AMD's ACML]: Optimized LAPACK library for AMD systems&lt;br /&gt;
&lt;br /&gt;
The default LAPACK library is ACML.&lt;br /&gt;
&lt;br /&gt;
==== Using a different LAPACK library ====&lt;br /&gt;
If you want or need to use a different LAPACK library, list the available libraries with 'eselect lapack list'.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ eselect lapack list&lt;br /&gt;
Available providers for lapack:&lt;br /&gt;
  [1]   acml64-ifort *&lt;br /&gt;
  [2]   acml64-ifort-fma4&lt;br /&gt;
  [3]   acml64-ifort-fma4-openmp&lt;br /&gt;
  [4]   acml64-ifort-openmp&lt;br /&gt;
  [5]   reference&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
To change your default LAPACK version you need to determine which shell you are using:&lt;br /&gt;
&lt;br /&gt;
===== CSH or TCSH =====&lt;br /&gt;
Run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
eselect lapack script --csh acml-ifort-openmp&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here, acml-ifort-openmp is replaced with the name of your preferred LAPACK library.&lt;br /&gt;
Put the output of that script in your job script, or in your ~/.cshrc file.&lt;br /&gt;
&lt;br /&gt;
===== SH, BASH, or ZSH =====&lt;br /&gt;
Run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
eselect lapack script --sh acml-ifort-openmp&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Here, acml-ifort-openmp is replaced with the name of your preferred LAPACK library.&lt;br /&gt;
Put the output of that script in your job script, or in your ~/.bashrc or ~/.zshrc file.&lt;br /&gt;
&lt;br /&gt;
=== [http://openmp.org/wp/ OpenMP] ===&lt;br /&gt;
OpenMP is a set of directives for C, C++, and Fortran which greatly simplifies parallelizing applications on a single node. There is a good tutorial for OpenMP at [https://computing.llnl.gov/tutorials/openMP/ https://computing.llnl.gov/tutorials/openMP/]&lt;br /&gt;
To compile an OpenMP-enabled program, you need to tell GCC to enable OpenMP. This is done like so:&lt;br /&gt;
 gcc -fopenmp myOpenMPprogram.c&lt;br /&gt;
By default, OpenMP will use all available cores for its computation, which is a problem on shared resources like Beocat.&lt;br /&gt;
&lt;br /&gt;
To use only the cores assigned to you, first make sure you have requested the 'single' parallel environment. Then, in your job script, add something like the following before the application you are trying to run:&lt;br /&gt;
&lt;br /&gt;
==== bash, sh, zsh ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
export OMP_NUM_THREADS=${NSLOTS}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
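Putting it together, a minimal bash job script for an OpenMP program might look like the following sketch (the core count and the program name are hypothetical):&lt;br /&gt;

```shell
#!/bin/bash
# Request 4 cores in the 'single' parallel environment (hypothetical count)
#$ -pe single 4
#$ -l mem=1G
#$ -l h_rt=0:15:00

# SGE sets NSLOTS to the number of cores granted; fall back to 1 outside the scheduler
export OMP_NUM_THREADS=${NSLOTS:-1}
echo "Running with $OMP_NUM_THREADS OpenMP threads"
# ./myOpenMPprogram   # replace with your actual OpenMP-enabled program
```

Submitted with qsub, the program then starts exactly as many threads as cores were granted.&lt;br /&gt;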
&lt;br /&gt;
==== csh or tcsh ====&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
setenv OMP_NUM_THREADS ${NSLOTS}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=84</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=84"/>
		<updated>2014-07-09T21:36:47Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Removed OpenMP to move to Tips and Tricks&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Drinking from the Firehose ==&lt;br /&gt;
For a complete list of all installed software, see [[NodePackageList]]&lt;br /&gt;
&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
Version 1.4.3&lt;br /&gt;
&lt;br /&gt;
=== [http://www.scilab.org Scilab] ===&lt;br /&gt;
Version 5.4.0&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
Version 3.0.3&lt;br /&gt;
&lt;br /&gt;
==== Modules ====&lt;br /&gt;
We provide a small number of R modules installed by default; these are generally modules that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own modules ====&lt;br /&gt;
To install your own module, log in to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;rsplus&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing, you can test the package before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;rsplus&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;qsub myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSGE#Running_from_a_qsub_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.qsub&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #$ -l mem=1G&lt;br /&gt;
 # Now we tell qsub how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
 #$ -l h_rt=0:15:00&lt;br /&gt;
 &lt;br /&gt;
 # Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
 R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
 &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
qsub submit-R.qsub&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
Versions 1.6 and 1.7&lt;br /&gt;
&lt;br /&gt;
We support 4 versions of the Java VM on Beocat. [[wikipedia:IcedTea|IcedTea]] 6 and 7 (based on [[wikipedia:OpenJDK|OpenJDK]]), Sun JDK 1.6 (Java 6), and Oracle JDK 1.7 (Java 7).&lt;br /&gt;
&lt;br /&gt;
We allow each user to select his or her Java version individually. If you do not select one, we default to Oracle JDK 1.7.&lt;br /&gt;
&lt;br /&gt;
==== Selecting your Java version ====&lt;br /&gt;
First, let's list the available versions. This can be done with the command &amp;lt;code&amp;gt;eselect java-vm list&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% eselect java-vm list&lt;br /&gt;
Available Java Virtual Machines:&lt;br /&gt;
  [1]   icedtea-bin-6&lt;br /&gt;
  [2]   icedtea-bin-7&lt;br /&gt;
  [3]   oracle-jdk-bin-1.7  system-vm&lt;br /&gt;
  [4]   sun-jdk-1.6&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that oracle-jdk-bin-1.7 (marked &amp;quot;system-vm&amp;quot;) is the default for all users. If you have a custom version set, it will be marked with &amp;quot;user-vm&amp;quot;. If you wanted to use icedtea-6, you could run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
eselect java-vm set user 1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Now, we see the difference when running the above command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% eselect java-vm list&lt;br /&gt;
Available Java Virtual Machines:&lt;br /&gt;
  [1]   icedtea-bin-6  user-vm&lt;br /&gt;
  [2]   icedtea-bin-7&lt;br /&gt;
  [3]   oracle-jdk-bin-1.7  system-vm&lt;br /&gt;
  [4]   sun-jdk-1.6&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To verify you are seeing the correct java, you can run &amp;lt;code&amp;gt;java -version&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% java -version&lt;br /&gt;
java version &amp;quot;1.6.0_27&amp;quot;&lt;br /&gt;
OpenJDK Runtime Environment (IcedTea6 1.12.7) (Gentoo build 1.6.0_27-b27)&lt;br /&gt;
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
&lt;br /&gt;
We have several versions of Python available:&lt;br /&gt;
* [http://docs.python.org/2.7/ CPython 2.7]&lt;br /&gt;
* [http://docs.python.org/3.2/ CPython 3.2]&lt;br /&gt;
* [http://pypy.org/ PyPy] versions 1.9 (Python 2.7.2) and 2.0.2 (Python 2.7.3)&lt;br /&gt;
&lt;br /&gt;
For the uninitiated, PyPy provides [[wikipedia:Just-in-time_compilation|just-in-time compilation]] for Python code. While it doesn't support all modules, code that does run under PyPy can see a significant performance increase.&lt;br /&gt;
&lt;br /&gt;
If you just need Python and its default modules, you can use python2, python3, pypy-c1.9, or pypy-c2.0 as you would any other application.&lt;br /&gt;
&lt;br /&gt;
If, however, you need modules that we do not have installed, you should use [http://www.doughellmann.com/projects/virtualenvwrapper/ virtualenvwrapper] to set up a virtual python environment in your home directory. This will let you install python modules as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
* [[LinuxBasics#Shells|Change your shell]] to bash&lt;br /&gt;
* Make sure ~/.bash_profile exists&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
if [ ! -f ~/.bash_profile ]; then cp /etc/skel/.bash_profile ~/.bash_profile; fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Add a line like &amp;lt;code&amp;gt;source /usr/bin/virtualenvwrapper.sh&amp;lt;/code&amp;gt; to your .bash_profile.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
echo &amp;quot;source /usr/bin/virtualenvwrapper.sh&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Show your existing environments&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
workon&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here I will create a python2 virtual environment called 'testp2', a python3 virtual environment called 'testp3', and a pypy environment called 'testpypy'. Note that &amp;lt;code&amp;gt;mkvirtualenv --help&amp;lt;/code&amp;gt; has many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 mkvirtualenv -p $(which python2) testp2&lt;br /&gt;
 mkvirtualenv -p $(which python3) testp3&lt;br /&gt;
 mkvirtualenv -p $(which pypy-c2.0) testpypy&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's look at our virtual environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%workon&lt;br /&gt;
testp2&lt;br /&gt;
testp3&lt;br /&gt;
testpypy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%workon testp2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment testp2&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
source /usr/bin/virtualenvwrapper.sh&lt;br /&gt;
workon testp2&lt;br /&gt;
~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
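If virtualenvwrapper isn't available (for example, on your own workstation), the same isolation can be sketched with Python's built-in venv module; the path /tmp/demo-env below is just an example:&lt;br /&gt;

```shell
# Create a throwaway environment; --without-pip keeps this sketch offline-friendly
python3 -m venv --without-pip /tmp/demo-env
# Activating it puts the environment's interpreter first on PATH
. /tmp/demo-env/bin/activate
command -v python
deactivate
```

The `command -v python` line shows that, while activated, `python` resolves inside the environment rather than to the system interpreter.&lt;br /&gt;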
==== A note on [http://www.numpy.org/ NumPy] ====&lt;br /&gt;
NumPy is a commonly-used Python package.&lt;br /&gt;
&lt;br /&gt;
Make sure the following is executed before running &amp;lt;code&amp;gt;pip install numpy&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cp /opt/beocat/numpy/.numpy-site.cfg ~/.numpy-site.cfg&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== A note on [http://mpi4py.scipy.org/docs/usrman/index.html mpi4py] ====&lt;br /&gt;
If you want to use MPI with your python script inside a virtual environment, you will need to pass the correct environment variables to all of the MPI processes so that the virtual environment works.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# sample mpi4py submit script&lt;br /&gt;
source /usr/bin/virtualenvwrapper.sh&lt;br /&gt;
workon testp2&lt;br /&gt;
# figure out the location of the python interpreter in the virtual environment&lt;br /&gt;
PYTHON_BINARY=$(which python)&lt;br /&gt;
# mpirun the python interpreter within the virtual environment&lt;br /&gt;
# if you don't use the interpreter within the virtual environment, i.e. just using 'python'&lt;br /&gt;
# the system python interpreter (without access to your other modules) will be used.&lt;br /&gt;
mpirun ${PYTHON_BINARY} ~/path/to/your/mpi-enabled/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of perl tracks the stable releases of perl. Unfortunately, there are some features that we do not include in the system distribution of perl, namely threads.&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;qsub myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSGE#Running_from_a_qsub_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#$ -l mem=1G&lt;br /&gt;
# Now we tell qsub how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
#$ -l h_rt=0:15:00&lt;br /&gt;
# Now lets do some actual work. &lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Getting Perl with threads ====&lt;br /&gt;
* Set up perlbrew&lt;br /&gt;
** [[LinuxBasics#Shells|Change your shell]] to bash&lt;br /&gt;
** Install perlbrew&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -L http://install.perlbrew.pl | bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** Make sure that ~/.bash_profile exists&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
if [ ! -f ~/.bash_profile ]; then cp /etc/skel/.bash_profile ~/.bash_profile; fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** Add &amp;lt;code&amp;gt;source ~/perl5/perlbrew/etc/bashrc&amp;lt;/code&amp;gt; to ~/.bash_profile&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
echo &amp;quot;source ~/perl5/perlbrew/etc/bashrc&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** Then source your bash profile&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Now, install perl with threads within perlbrew&lt;br /&gt;
** Find the current Perl version.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% perl -version&lt;br /&gt;
&lt;br /&gt;
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux&lt;br /&gt;
(with 22 registered patches, see perl -V for more detail)&lt;br /&gt;
(...several more lines deleted)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
** In this case the version is 5.16.3, so we run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
perlbrew install -f -n -D usethreads perl-5.16.3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** To temporarily use the new version of perl in the current shell, we now run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
perlbrew use perl-5.16.3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** To switch versions of perl for every new login or job, run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
perlbrew switch perl-5.16.3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** You can reverse this switch with&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
perlbrew switch-off&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directories.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=81</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=81"/>
		<updated>2014-07-09T20:35:15Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added &amp;quot;writing and installing&amp;quot; section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|HPC]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the [http://www.cis.ksu.edu/ Computing and Information Science] department. Beocat is available to any educational researcher in the state of Kansas without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster-computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people, is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of Linux servers coordinated by the [https://arc.liv.ac.uk/trac/SGE SGE] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.cis.ksu.edu/ http://ganglia.beocat.cis.ksu.edu/].&lt;br /&gt;
* A comparatively small [[Hadoop]] cluster&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.cis.ksu.edu/ https://account.beocat.cis.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''beocat#cis-ksu-edu''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to beocat.cis.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use SGE for job submission and scheduling. If you've never worked with a batch-queueing system before, submitting a job is different from running a program on a standalone Linux machine. Please see our [[SGEBasics]] page for an introduction on how to submit your first job. If you are already familiar with SGE, we also have an [[AdvancedSGE]] page covering fine-tuning options.&lt;br /&gt;
&lt;br /&gt;
== Writing and Installing Software on Beocat ==&lt;br /&gt;
* If you are writing software for Beocat and it is in an installed scripting language like R, Perl, or Python, please look at our [[Installed_software]] page to see what we have available and any usage guidelines we have posted there.&lt;br /&gt;
* If you need to write compiled code such as Fortran, C, or C++, we offer both GNU and Intel compilers. See our [[FAQ]] for more details.&lt;br /&gt;
* In either case, we suggest you head to our [[Tips_and_Tricks]] page for helpful hints.&lt;br /&gt;
* If you wish to install software in your home directory, we have a [[Training_Videos#Installing_files_in_your_Home_Directory|video]] showing how to do this.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]]. If you're just getting started, our [[Training_Videos]] might be useful to you.&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cis.ksu.edu beocat@cis.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This ensures your support request gets entered into our tracker and that your questions are answered as quickly as possible. Please keep the subject line as descriptive as possible and include any details pertinent to your problem (e.g. job IDs, commands run, working directory, program versions, etc.).&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [http://freenode.net/using_the_network.shtml freenode chat servers] in the channel #beocat. This is useful ''especially'' if you have a quick question; you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' and/or '''kylehutson''' in your message, and it should grab our attention. The channel is also available from a web browser [[Special:WebChat|here]].&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access? ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributing resources will give you priority access to Beocat.&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]]&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Tips_and_Tricks&amp;diff=80</id>
		<title>Tips and Tricks</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Tips_and_Tricks&amp;diff=80"/>
		<updated>2014-07-09T20:14:17Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Initial creation / port from old support site.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Beocat has a number of tools to make your work easier, some which you may not know about. This is a simple list of these programs and some basic usage scenarios.&lt;br /&gt;
&lt;br /&gt;
== Submitting your job to run the fastest ==&lt;br /&gt;
=== Size your jobs to use the fastest nodes ===&lt;br /&gt;
==== Specify the proper number of cores ====&lt;br /&gt;
Neither Beocat nor any other computer or cluster can make your job run on more than one core at a time if your program isn't designed to take advantage of this. Many people think &amp;quot;I can run this on 40 cores and it will run 40 times faster.&amp;quot; This isn't true.&lt;br /&gt;
&lt;br /&gt;
While we have many programs that are designed to take advantage of multiple cores, do not assume this is the case for yours.&lt;br /&gt;
&lt;br /&gt;
==== Optimize your jobs for speed, not for number of cores ====&lt;br /&gt;
It seems that many people pick an arbitrarily large number of cores for their jobs; 20 seems to be a common one. However, some of our fastest nodes have 16 cores. It's quite likely that if your job will fit on an Elf (16 cores, 4 GB RAM per core, 64 GB RAM total), it will run faster with 16 cores than by specifying more cores and having it run on slower nodes.&lt;br /&gt;
&lt;br /&gt;
==== Don't request resources you don't need ====&lt;br /&gt;
The most common culprit here is people specifying they need infiniband when the job is run on a single node. This limits the scheduling such that a perfectly good node for your job may be idle while your job is still waiting.&lt;br /&gt;
&lt;br /&gt;
== Programs that make using Beocat easier ==&lt;br /&gt;
=== [[wikipedia:nmon|nmon]] ===&lt;br /&gt;
The name is short for &amp;quot;Nigel's Monitor&amp;quot;; it's a program written by Nigel Griffiths from IBM.&lt;br /&gt;
=== [http://www.ibm.com/developerworks/aix/library/au-nmon_analyser/ nmon analyser] ===&lt;br /&gt;
A tool for producing graphs and spreadsheets from output generated by nmon.&lt;br /&gt;
=== [http://hisham.hm/htop/ htop] ===&lt;br /&gt;
A prettier, easier to use top. Shows CPU and memory usage in an easy-to-digest format.&lt;br /&gt;
=== [http://www.gnu.org/software/screen/ screen] ===&lt;br /&gt;
A terminal multiplexer that allows you to run many terminal programs at once without mixing them up, and to disconnect and reconnect sessions. There is a good explanation of how to use screen at [http://www.mattcutts.com/blog/a-quick-tutorial-on-screen/ http://www.mattcutts.com/blog/a-quick-tutorial-on-screen/].&lt;br /&gt;
=== Ganglia ===&lt;br /&gt;
The web-based load monitoring tool for the cluster. [http://ganglia.beocat.cis.ksu.edu http://ganglia.beocat.cis.ksu.edu] . From there, you can see how busy Beocat is.&lt;br /&gt;
=== [http://dag.wieers.com/home-made/dstat/ dstat] ===&lt;br /&gt;
A very detailed performance analyzer.&lt;br /&gt;
&lt;br /&gt;
== Increasing file write performance ==&lt;br /&gt;
Credit for this goes to [http://moo.nac.uci.edu/~hjm/bduc/BDUC_USER_HOWTO.html#writeperfongl http://moo.nac.uci.edu/~hjm/bduc/BDUC_USER_HOWTO.html#writeperfongl]&lt;br /&gt;
&lt;br /&gt;
=== Use gzip ===&lt;br /&gt;
If you have written your own code or are using an app that writes zillions of tiny chunks of data to STDOUT, and you are storing the results on Beocat, you should consider passing the output through gzip to consolidate the writes into a continuous stream. If you don't do this, each write will be considered a separate IO event and write performance will suffer.&lt;br /&gt;
&lt;br /&gt;
If, however, the STDOUT is passed through gzip, the wallclock runtime decreases even below the usual runtime, and you end up with an output file that is already compressed to about 1/5 the usual size.&lt;br /&gt;
&lt;br /&gt;
Here's how to do it:&lt;br /&gt;
&lt;br /&gt;
 someapp --opt1 --opt2 --input=/path/to/input_file | gzip &amp;gt; /path/to/output_file&lt;br /&gt;
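As a runnable miniature (with seq standing in for an app that makes many tiny writes), you can verify that the stream survives the round trip:&lt;br /&gt;

```shell
# Stream 10,000 tiny writes through gzip into a single compressed file
seq 1 10000 | gzip > /tmp/stream_demo.gz
# Decompress and check the last record arrived intact
gunzip -c /tmp/stream_demo.gz | tail -n 1    # prints 10000
```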
=== Use named pipes ===&lt;br /&gt;
Named pipes are special files that don't actually write to the filesystem and can be used to communicate between processes. Since these pipes live in memory rather than on disk, they can be used to buffer writes:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# Create the named pipe&lt;br /&gt;
mkfifo /path/to/MyNamedPipe&lt;br /&gt;
&lt;br /&gt;
# Write some data to it&lt;br /&gt;
MyProgram --infile=/path/to/InputData1 --outfile=/path/to/MyNamedPipe &amp;amp;&lt;br /&gt;
MyOtherProgram &amp;lt; /path/to/InputData2 &amp;gt; /path/to/MyNamedPipe&lt;br /&gt;
&lt;br /&gt;
# Extract the output&lt;br /&gt;
cat &amp;lt; /path/to/MyNamedPipe &amp;gt; $HOME/MyOutput&lt;br /&gt;
## OR, we could compress the output&lt;br /&gt;
gzip &amp;lt; /path/to/MyNamedPipe &amp;gt; $HOME/MyOutput.gz&lt;br /&gt;
&lt;br /&gt;
# Delete the named pipe like you would a file&lt;br /&gt;
rm /path/to/MyNamedPipe&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
One cautionary word: unlike normal files, named pipes cannot be used between machines, only among processes running on the same machine. So, if you're running an MPI job entirely on one node, you can set up a named pipe, do all your writes to that pipe, and flush it at the end; but if you're running a multi-node MPI job and your named pipe is on a shared filesystem (like $HOME), each process will need to flush its named pipe to a regular file before the job quits.&lt;br /&gt;
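Here is a self-contained version of the same pattern, with seq standing in for the writing program and the pipe placed under /tmp as an example:&lt;br /&gt;

```shell
rm -f /tmp/demo_pipe
mkfifo /tmp/demo_pipe
# The writer runs in the background...
seq 1 100 > /tmp/demo_pipe &
# ...while the reader drains the pipe and compresses the stream
cat /tmp/demo_pipe | gzip > /tmp/demo_out.gz
wait
gunzip -c /tmp/demo_out.gz | wc -l    # prints 100
rm /tmp/demo_pipe
```

The background writer blocks until a reader opens the pipe, so the two processes hand data off in memory without an intermediate file.&lt;br /&gt;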
=== Use one big file instead of many small ones ===&lt;br /&gt;
This may seem to be a non-issue, but it's a performance problem we've seen on Beocat many times. I love the term coined by UCI at the link above: they call making many small files &amp;quot;Zillions Of Tiny files (ZOTfiles)&amp;quot;. Using files like this is an inefficient use of our shared resources. A tiny file by itself is no more inefficient than a huge one; if you have only 100 bytes to store, store it in a single file. However, the problems start compounding when there are many of them. Because of the way data is stored on disk, 10 MB stored in ZOTfiles of 100 bytes each can easily take up not 10 MB, but more than 400 MB - 40 times more space. Worse, data stored in this manner makes many operations very slow: instead of looking up 1 directory entry, the OS has to look up 100,000. This means 100,000 times more disk head movement, with a concomitant decrease in performance and disk lifetime. We have had Beocat users with several million files of less than 1 kB each. Just creating a directory listing with ls would take nearly half an hour. Not only is that inefficient for you, but it also degrades the performance of everybody using that filesystem and degrades our backups as well.&lt;br /&gt;
&lt;br /&gt;
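The overhead is easy to see for yourself. This sketch creates 1,000 one-byte files under /tmp (an example path) and compares their apparent size with the space actually allocated; the exact numbers depend on the filesystem:&lt;br /&gt;

```shell
mkdir -p /tmp/zotdemo
cd /tmp/zotdemo
# 1,000 files of one byte each: only 1,000 bytes of actual data
for i in $(seq 1 1000); do printf 'x' > file_$i; done
ls | wc -l                      # prints 1000
du -sh --apparent-size .        # sum of the bytes stored (plus the directory itself)
du -sh .                        # allocated blocks: typically far larger
```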
Please use large files instead of ZOTfiles any chance you can!&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=79</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=79"/>
		<updated>2014-07-09T16:10:16Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Updated Perl and R submit scripts&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Drinking from the Firehose ==&lt;br /&gt;
For a complete list of all installed software, see [[NodePackageList]]&lt;br /&gt;
&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
Version 1.4.3&lt;br /&gt;
&lt;br /&gt;
=== [http://openmp.org/wp/ OpenMP] ===&lt;br /&gt;
OpenMP isn't really a software package itself. It is a set of directives for C, C++, and Fortran which greatly simplifies parallelizing applications on a single node. There is a good tutorial for OpenMP at [https://computing.llnl.gov/tutorials/openMP/ https://computing.llnl.gov/tutorials/openMP/]&lt;br /&gt;
&lt;br /&gt;
=== [http://www.scilab.org Scilab] ===&lt;br /&gt;
Version 5.4.0&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
Version 3.0.3&lt;br /&gt;
&lt;br /&gt;
==== Modules ====&lt;br /&gt;
We provide a small number of R modules installed by default; these are generally modules that are needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own modules ====&lt;br /&gt;
To install your own module, log in to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;rsplus&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing, you can test the package before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;rsplus&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. '&amp;lt;tt&amp;gt;qsub myscript.R&amp;lt;/tt&amp;gt;' will result in an error. Instead, you need to make a bash [[AdvancedSGE#Running_from_a_qsub_Submit_Script|script]] that will call R appropriately. Here is a minimal example. We'll save this as submit-R.qsub&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #$ -l mem=1G&lt;br /&gt;
 # Now we tell qsub how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
 #$ -l h_rt=0:15:00&lt;br /&gt;
 &lt;br /&gt;
 # Now lets do some actual work. This starts R and loads the file myscript.R&lt;br /&gt;
 R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
 &amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
qsub submit-R.qsub&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
Versions 1.6 and 1.7&lt;br /&gt;
&lt;br /&gt;
We support 4 versions of the Java VM on Beocat. [[wikipedia:IcedTea|IcedTea]] 6 and 7 (based on [[wikipedia:OpenJDK|OpenJDK]]), Sun JDK 1.6 (Java 6), and Oracle JDK 1.7 (Java 7).&lt;br /&gt;
&lt;br /&gt;
We allow each user to select his or her Java version individually. If you do not select one, we default to Oracle JDK 1.7.&lt;br /&gt;
&lt;br /&gt;
==== Selecting your Java version ====&lt;br /&gt;
First, let's list the available versions. This can be done with the command &amp;lt;code&amp;gt;eselect java-vm list&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% eselect java-vm list&lt;br /&gt;
Available Java Virtual Machines:&lt;br /&gt;
  [1]   icedtea-bin-6&lt;br /&gt;
  [2]   icedtea-bin-7&lt;br /&gt;
  [3]   oracle-jdk-bin-1.7  system-vm&lt;br /&gt;
  [4]   sun-jdk-1.6&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that oracle-jdk-bin-1.7 (marked &amp;quot;system-vm&amp;quot;) is the default for all users. If you have a custom version set, it will be marked with &amp;quot;user-vm&amp;quot;. If you wanted to use icedtea-6, you would run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
eselect java-vm set user 1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Running the same command again shows the change:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% eselect java-vm list&lt;br /&gt;
Available Java Virtual Machines:&lt;br /&gt;
  [1]   icedtea-bin-6  user-vm&lt;br /&gt;
  [2]   icedtea-bin-7&lt;br /&gt;
  [3]   oracle-jdk-bin-1.7  system-vm&lt;br /&gt;
  [4]   sun-jdk-1.6&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To verify that the correct Java is active, run &amp;lt;code&amp;gt;java -version&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% java -version&lt;br /&gt;
java version &amp;quot;1.6.0_27&amp;quot;&lt;br /&gt;
OpenJDK Runtime Environment (IcedTea6 1.12.7) (Gentoo build 1.6.0_27-b27)&lt;br /&gt;
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
&lt;br /&gt;
We have several versions of Python available:&lt;br /&gt;
* [http://docs.python.org/2.7/ CPython 2.7]&lt;br /&gt;
* [http://docs.python.org/3.2/ CPython 3.2]&lt;br /&gt;
* [http://pypy.org/ PyPy] versions 1.9 (Python 2.7.2) and 2.0.2 (Python 2.7.3)&lt;br /&gt;
&lt;br /&gt;
For the uninitiated, PyPy provides [[wikipedia:Just-in-time_compilation|just-in-time compilation]] for Python code. While it doesn't support all modules, code that does run under PyPy can see a significant performance increase.&lt;br /&gt;
&lt;br /&gt;
If you just need python and its default modules, you can use python2, python3, pypy-c1.9, or pypy-c2.0 as you would any other application.&lt;br /&gt;
&lt;br /&gt;
If, however, you need modules that we do not have installed, you should use [http://www.doughellmann.com/projects/virtualenvwrapper/ virtualenvwrapper] to set up a virtual Python environment in your home directory. This lets you install Python modules as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
* [[LinuxBasics#Shells|Change your shell]] to bash&lt;br /&gt;
* Make sure ~/.bash_profile exists&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
if [ ! -f ~/.bash_profile ]; then cp /etc/skel/.bash_profile ~/.bash_profile; fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Add a line like &amp;lt;code&amp;gt;source /usr/bin/virtualenvwrapper.sh&amp;lt;/code&amp;gt; to your .bash_profile.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
echo &amp;quot;source /usr/bin/virtualenvwrapper.sh&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Show your existing environments&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
workon&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here we create a python2 virtual environment called 'testp2', a python3 virtual environment called 'testp3', and a PyPy environment called 'testpypy'. Note that &amp;lt;code&amp;gt;mkvirtualenv --help&amp;lt;/code&amp;gt; lists many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 mkvirtualenv -p $(which python2) testp2&lt;br /&gt;
 mkvirtualenv -p $(which python3) testp3&lt;br /&gt;
 mkvirtualenv -p $(which pypy-c2.0) testpypy&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's list our virtual environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% workon&lt;br /&gt;
testp2&lt;br /&gt;
testp3&lt;br /&gt;
testpypy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% workon testp2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment testp2&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
source /usr/bin/virtualenvwrapper.sh&lt;br /&gt;
workon testp2&lt;br /&gt;
~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== A note on [http://www.numpy.org/ NumPy] ====&lt;br /&gt;
NumPy is a commonly-used Python package.&lt;br /&gt;
&lt;br /&gt;
Make sure the following is executed before running &amp;lt;code&amp;gt;pip install numpy&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cp /opt/beocat/numpy/.numpy-site.cfg ~/.numpy-site.cfg&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== A note on [http://mpi4py.scipy.org/docs/usrman/index.html mpi4py] ====&lt;br /&gt;
If you want to use MPI with your Python script inside a virtual environment, you will need to pass the correct environment variables to all of the MPI processes so that the virtual environment is used.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# sample mpi4py submit script&lt;br /&gt;
source /usr/bin/virtualenvwrapper.sh&lt;br /&gt;
workon testp2&lt;br /&gt;
# figure out the location of the python interpreter in the virtual environment&lt;br /&gt;
PYTHON_BINARY=$(which python)&lt;br /&gt;
# mpirun the python interpreter within the virtual environment&lt;br /&gt;
# if you don't use the interpreter within the virtual environment, i.e. just using 'python'&lt;br /&gt;
# the system python interpreter (without access to your other modules) will be used.&lt;br /&gt;
mpirun ${PYTHON_BINARY} ~/path/to/your/mpi-enabled/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of Perl tracks the stable Perl releases. Unfortunately, there are some features that we do not include in the system distribution, notably threads.&lt;br /&gt;
==== Submitting a job with Perl ====&lt;br /&gt;
Much like R (above), you cannot simply '&amp;lt;tt&amp;gt;qsub myProgram.pl&amp;lt;/tt&amp;gt;', but you must create a [[AdvancedSGE#Running_from_a_qsub_Submit_Script|submit script]] which will call perl. Here is an example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#$ -l mem=1G&lt;br /&gt;
# Now we tell qsub how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
#$ -l h_rt=0:15:00&lt;br /&gt;
# Now let's do some actual work.&lt;br /&gt;
perl /path/to/myProgram.pl&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Getting Perl with threads ====&lt;br /&gt;
* Setup perlbrew&lt;br /&gt;
** [[LinuxBasics#Shells|Change your shell]] to bash&lt;br /&gt;
** Install perlbrew&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -L http://install.perlbrew.pl | bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** Make sure that ~/.bash_profile exists&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
if [ ! -f ~/.bash_profile ]; then cp /etc/skel/.bash_profile ~/.bash_profile; fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** Add &amp;lt;code&amp;gt;source ~/perl5/perlbrew/etc/bashrc&amp;lt;/code&amp;gt; to ~/.bash_profile&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
echo &amp;quot;source ~/perl5/perlbrew/etc/bashrc&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** Then source your bash profile&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Now, install perl with threads within perlbrew&lt;br /&gt;
** Find the current Perl version.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% perl -version&lt;br /&gt;
&lt;br /&gt;
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux&lt;br /&gt;
(with 22 registered patches, see perl -V for more detail)&lt;br /&gt;
(...several more lines deleted)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
** In this case the version is 5.16.3, so we run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
perlbrew install -f -n -D usethreads perl-5.16.3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** To temporarily use the new version of perl in the current shell, we now run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
perlbrew use perl-5.16.3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** To switch versions of perl for every new login or job, run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
perlbrew switch perl-5.16.3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** You can reverse this switch with&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
perlbrew switch-off&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directory.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=78</id>
		<title>Installed software</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Installed_software&amp;diff=78"/>
		<updated>2014-07-09T15:59:57Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Add OpenMP and fix some formatting&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Drinking from the Firehose ==&lt;br /&gt;
For a complete list of all installed software, see [[NodePackageList]]&lt;br /&gt;
&lt;br /&gt;
== Most Commonly Used Software ==&lt;br /&gt;
=== [http://www.open-mpi.org/ OpenMPI] ===&lt;br /&gt;
Version 1.4.3&lt;br /&gt;
&lt;br /&gt;
=== [http://openmp.org/wp/ OpenMP] ===&lt;br /&gt;
OpenMP isn't really a software package itself. It is a set of directives for C, C++, and Fortran which greatly simplifies parallelizing applications on a single node. There is a good tutorial for OpenMP at [https://computing.llnl.gov/tutorials/openMP/ https://computing.llnl.gov/tutorials/openMP/]&lt;br /&gt;
&lt;br /&gt;
=== [http://www.scilab.org Scilab] ===&lt;br /&gt;
Version 5.4.0&lt;br /&gt;
&lt;br /&gt;
=== [http://www.r-project.org/ R] ===&lt;br /&gt;
Version 3.0.3&lt;br /&gt;
&lt;br /&gt;
==== Modules ====&lt;br /&gt;
We provide a small number of R modules installed by default; these are generally modules needed by more than one person.&lt;br /&gt;
&lt;br /&gt;
==== Installing your own modules ====&lt;br /&gt;
To install your own module, login to Beocat and start R interactively&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
R&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Then install the package using&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;rsplus&amp;quot;&amp;gt;&lt;br /&gt;
install.packages(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Follow the prompts. Note that there is a CRAN mirror at KU - it will be listed as &amp;quot;USA (KS)&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
After installing you can test before leaving interactive mode by issuing the command&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;rsplus&amp;quot;&amp;gt;&lt;br /&gt;
library(&amp;quot;PACKAGENAME&amp;quot;)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Running R Jobs ====&lt;br /&gt;
&lt;br /&gt;
You cannot submit an R script directly. 'qsub myscript.R' will result in an error. Instead, you need to make a bash script that will call R appropriately. Here is a minimal example. We'll save this as submit-R.qsub&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 # First, let's tell the qsub command which resources we need.&lt;br /&gt;
 # Let's start with memory (in this case we ask for 1 gigabyte).&lt;br /&gt;
 # For help on these, see [[SGEBasics]]&lt;br /&gt;
 &lt;br /&gt;
 #$ -l mem=1G&lt;br /&gt;
 # Now we tell qsub how long we expect our work to take: 15 minutes (H:MM:SS)&lt;br /&gt;
 &lt;br /&gt;
 #$ -l h_rt=0:15:00&lt;br /&gt;
 &lt;br /&gt;
 # Let's output a little useful information. This will put something like &amp;quot;Starting the job at: Thu Jan 26 10:43:26 CST 2012&amp;quot; in your output file.&lt;br /&gt;
 echo -n &amp;quot;Starting the job at: &amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
 &lt;br /&gt;
 # Now let's do some actual work. A lot of our users use R, so we'll go over that.&lt;br /&gt;
 # This starts R and loads the file myscript.R&lt;br /&gt;
 R --no-save -q &amp;lt; myscript.R&lt;br /&gt;
 &lt;br /&gt;
 # like before, this is just useful information&lt;br /&gt;
 echo -n &amp;quot;Ending the job at: &amp;quot;&lt;br /&gt;
 date&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now, to submit your R job, you would type&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
qsub submit-R.qsub&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== [http://www.java.com/ Java] ===&lt;br /&gt;
Versions 1.6 and 1.7&lt;br /&gt;
&lt;br /&gt;
We support 4 versions of the Java VM on Beocat. [[wikipedia:IcedTea|IcedTea]] 6 and 7 (based on [[wikipedia:OpenJDK|OpenJDK]]), Sun JDK 1.6 (Java 6), and Oracle JDK 1.7 (Java 7).&lt;br /&gt;
&lt;br /&gt;
We allow each user to select their Java version individually. If you do not select one, we default to Oracle JDK 1.7.&lt;br /&gt;
&lt;br /&gt;
==== Selecting your Java version ====&lt;br /&gt;
First, let's list the available versions. This can be done with the command &amp;lt;code&amp;gt;eselect java-vm list&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% eselect java-vm list&lt;br /&gt;
Available Java Virtual Machines:&lt;br /&gt;
  [1]   icedtea-bin-6&lt;br /&gt;
  [2]   icedtea-bin-7&lt;br /&gt;
  [3]   oracle-jdk-bin-1.7  system-vm&lt;br /&gt;
  [4]   sun-jdk-1.6&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that oracle-jdk-bin-1.7 (marked &amp;quot;system-vm&amp;quot;) is the default for all users. If you have a custom version set, it will be marked with &amp;quot;user-vm&amp;quot;. If you wanted to use icedtea-6, you would run the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
eselect java-vm set user 1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Running the same command again shows the change:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% eselect java-vm list&lt;br /&gt;
Available Java Virtual Machines:&lt;br /&gt;
  [1]   icedtea-bin-6  user-vm&lt;br /&gt;
  [2]   icedtea-bin-7&lt;br /&gt;
  [3]   oracle-jdk-bin-1.7  system-vm&lt;br /&gt;
  [4]   sun-jdk-1.6&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
To verify that the correct Java is active, run &amp;lt;code&amp;gt;java -version&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% java -version&lt;br /&gt;
java version &amp;quot;1.6.0_27&amp;quot;&lt;br /&gt;
OpenJDK Runtime Environment (IcedTea6 1.12.7) (Gentoo build 1.6.0_27-b27)&lt;br /&gt;
OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
=== [http://www.python.org/about/ Python] ===&lt;br /&gt;
&lt;br /&gt;
We have several versions of Python available:&lt;br /&gt;
* [http://docs.python.org/2.7/ CPython 2.7]&lt;br /&gt;
* [http://docs.python.org/3.2/ CPython 3.2]&lt;br /&gt;
* [http://pypy.org/ PyPy] versions 1.9 (Python 2.7.2) and 2.0.2 (Python 2.7.3)&lt;br /&gt;
&lt;br /&gt;
For the uninitiated, PyPy provides [[wikipedia:Just-in-time_compilation|just-in-time compilation]] for Python code. While it doesn't support all modules, code that does run under PyPy can see a significant performance increase.&lt;br /&gt;
&lt;br /&gt;
If you just need python and its default modules, you can use python2, python3, pypy-c1.9, or pypy-c2.0 as you would any other application.&lt;br /&gt;
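For instance, each interpreter can be invoked directly as an ordinary command (a sketch; only python3 is exercised here, since python2 and the PyPy binaries exist only where they are installed):&lt;br /&gt;

```bash
# The interpreters listed above are ordinary commands on the PATH.
python3 -c 'print("hello from CPython 3")'
# python2 -c 'print("hello from CPython 2")'     # if python2 is installed
# pypy-c2.0 -c 'print("hello from PyPy")'        # Beocat's PyPy build (name assumed)
```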
&lt;br /&gt;
If, however, you need modules that we do not have installed, you should use [http://www.doughellmann.com/projects/virtualenvwrapper/ virtualenvwrapper] to set up a virtual Python environment in your home directory. This lets you install Python modules as you please.&lt;br /&gt;
&lt;br /&gt;
==== Setting up your virtual environment ====&lt;br /&gt;
* [[LinuxBasics#Shells|Change your shell]] to bash&lt;br /&gt;
* Make sure ~/.bash_profile exists&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
if [ ! -f ~/.bash_profile ]; then cp /etc/skel/.bash_profile ~/.bash_profile; fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Add a line like &amp;lt;code&amp;gt;source /usr/bin/virtualenvwrapper.sh&amp;lt;/code&amp;gt; to your .bash_profile.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
echo &amp;quot;source /usr/bin/virtualenvwrapper.sh&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Show your existing environments&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
workon&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Create a virtual environment. Here we create a python2 virtual environment called 'testp2', a python3 virtual environment called 'testp3', and a PyPy environment called 'testpypy'. Note that &amp;lt;code&amp;gt;mkvirtualenv --help&amp;lt;/code&amp;gt; lists many more useful options.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 mkvirtualenv -p $(which python2) testp2&lt;br /&gt;
 mkvirtualenv -p $(which python3) testp3&lt;br /&gt;
 mkvirtualenv -p $(which pypy-c2.0) testpypy&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Let's list our virtual environments&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% workon&lt;br /&gt;
testp2&lt;br /&gt;
testp3&lt;br /&gt;
testpypy&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* Activate one of these&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% workon testp2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* You can now install the python modules you want. This can be done using &amp;lt;tt&amp;gt;pip&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
pip install numpy biopython&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== Using your virtual environment within a job ====&lt;br /&gt;
Here is a simple job script using the virtual environment testp2&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
source /usr/bin/virtualenvwrapper.sh&lt;br /&gt;
workon testp2&lt;br /&gt;
~/path/to/your/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== A note on [http://www.numpy.org/ NumPy] ====&lt;br /&gt;
NumPy is a commonly-used Python package.&lt;br /&gt;
&lt;br /&gt;
Make sure the following is executed before running &amp;lt;code&amp;gt;pip install numpy&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
cp /opt/beocat/numpy/.numpy-site.cfg ~/.numpy-site.cfg&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
==== A note on [http://mpi4py.scipy.org/docs/usrman/index.html mpi4py] ====&lt;br /&gt;
If you want to use MPI with your Python script inside a virtual environment, you will need to pass the correct environment variables to all of the MPI processes so that the virtual environment is used.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# sample mpi4py submit script&lt;br /&gt;
source /usr/bin/virtualenvwrapper.sh&lt;br /&gt;
workon testp2&lt;br /&gt;
# figure out the location of the python interpreter in the virtual environment&lt;br /&gt;
PYTHON_BINARY=$(which python)&lt;br /&gt;
# mpirun the python interpreter within the virtual environment&lt;br /&gt;
# if you don't use the interpreter within the virtual environment, i.e. just using 'python'&lt;br /&gt;
# the system python interpreter (without access to your other modules) will be used.&lt;br /&gt;
mpirun ${PYTHON_BINARY} ~/path/to/your/mpi-enabled/python/script.py&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== [http://www.perl.org/ Perl] ===&lt;br /&gt;
The system-wide version of Perl tracks the stable Perl releases. Unfortunately, there are some features that we do not include in the system distribution, notably threads.&lt;br /&gt;
&lt;br /&gt;
==== Getting Perl with threads ====&lt;br /&gt;
* Setup perlbrew&lt;br /&gt;
** [[LinuxBasics#Shells|Change your shell]] to bash&lt;br /&gt;
** Install perlbrew&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
curl -L http://install.perlbrew.pl | bash&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** Make sure that ~/.bash_profile exists&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
if [ ! -f ~/.bash_profile ]; then cp /etc/skel/.bash_profile ~/.bash_profile; fi&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** Add &amp;lt;code&amp;gt;source ~/perl5/perlbrew/etc/bashrc&amp;lt;/code&amp;gt; to ~/.bash_profile&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
echo &amp;quot;source ~/perl5/perlbrew/etc/bashrc&amp;quot; &amp;gt;&amp;gt; ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** Then source your bash profile&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
source ~/.bash_profile&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
* Now, install perl with threads within perlbrew&lt;br /&gt;
** Find the current Perl version.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% perl -version&lt;br /&gt;
&lt;br /&gt;
This is perl 5, version 16, subversion 3 (v5.16.3) built for x86_64-linux&lt;br /&gt;
(with 22 registered patches, see perl -V for more detail)&lt;br /&gt;
(...several more lines deleted)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
** In this case the version is 5.16.3, so we run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
perlbrew install -f -n -D usethreads perl-5.16.3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** To temporarily use the new version of perl in the current shell, we now run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
perlbrew use perl-5.16.3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** To switch versions of perl for every new login or job, run&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
perlbrew switch perl-5.16.3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
** You can reverse this switch with&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
perlbrew switch-off&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
== Installing my own software ==&lt;br /&gt;
Installing and maintaining software for the many different users of Beocat would be very difficult, if not impossible. For this reason, we don't generally install user-run software on our cluster. Instead, we ask that you install it into your home directory.&lt;br /&gt;
&lt;br /&gt;
In many cases, the software vendor or support site will incorrectly assume that you are installing the software system-wide or that you need 'sudo' access.&lt;br /&gt;
&lt;br /&gt;
As a quick example of installing software in your home directory, we have a sample video on our [[Training Videos]] page. If you're still having problems or questions, please contact support as mentioned on our [[Main Page]].&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=77</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Main_Page&amp;diff=77"/>
		<updated>2014-07-09T15:48:07Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Pointed to internal Hadoop documentation&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Beocat? ==&lt;br /&gt;
Beocat is the [[wikipedia:High-performance_computing|HPC]] cluster at [http://www.ksu.edu Kansas State University]. It is run by the [http://www.cis.ksu.edu/ Computing and Information Science] department. Beocat is available to any educational researcher in the state of Kansas without cost. Priority access is given to those researchers who have contributed resources.&lt;br /&gt;
&lt;br /&gt;
Beocat actually comprises several different cluster computing systems:&lt;br /&gt;
* &amp;quot;Beocat&amp;quot;, as used by most people is a [[wikipedia:Beowulf cluster|Beowulf cluster]] of Linux servers coordinated by the [https://arc.liv.ac.uk/trac/SGE SGE] job submission and scheduling system. Our [[Compute Nodes]] (hardware) and [[installed software]] have separate pages on this wiki. The current status of this cluster can be monitored by visiting [http://ganglia.beocat.cis.ksu.edu/ http://ganglia.beocat.cis.ksu.edu/].&lt;br /&gt;
* A comparatively small [[Hadoop]] cluster&lt;br /&gt;
* A small [[wikipedia:Openstack|Openstack]] cloud-computing infrastructure&lt;br /&gt;
&lt;br /&gt;
== How Do I Use Beocat? ==&lt;br /&gt;
First, you need to get an account by visiting [https://account.beocat.cis.ksu.edu/ https://account.beocat.cis.ksu.edu/] and filling out the form. In most cases approval for the account will be granted in less than one business day, and sometimes much sooner. When your account has been approved, you will be added to our [[LISTSERV]], where we announce any changes, maintenance periods, or other issues.&lt;br /&gt;
&lt;br /&gt;
Once you have an account, you can access Beocat via SSH and can transfer files in or out via SCP or SFTP (or [https://www.globus.org/ Globus Connect] using the endpoint ''beocat#cis-ksu-edu''). If you don't know what those are, please see our [[LinuxBasics]] page. If you are familiar with these, connect your client to beocat.cis.ksu.edu and use your K-State eID credentials to login.&lt;br /&gt;
&lt;br /&gt;
As mentioned above, we use SGE for job submission and scheduling. If you've never worked with a batch-queueing system before, note that submitting a job is different from running a program on a standalone Linux machine. Please see our [[SGEBasics]] page for an introduction on how to submit your first job. If you are already familiar with SGE, we also have an [[AdvancedSGE]] page covering fine-tuning options.&lt;br /&gt;
&lt;br /&gt;
==  How do I get help? ==&lt;br /&gt;
You're in our support Wiki now, and that's a great place to start! We highly suggest that before you send us email, you visit our [[FAQ]].&lt;br /&gt;
&lt;br /&gt;
If your answer isn't there, you can email us at [mailto:beocat@cis.ksu.edu beocat@cis.ksu.edu]. ''Please'' send all email to this address and not to any of our staff directly. This ensures your support request gets entered into our tracker and gets your questions answered as quickly as possible. Please keep the subject line as descriptive as possible and include any pertinent details (e.g. job IDs, commands run, working directory, program versions, etc.).&lt;br /&gt;
&lt;br /&gt;
We are also available on IRC on the [http://freenode.net/using_the_network.shtml freenode chat servers] in the channel #beocat. This is ''especially'' useful if you have a quick question; you'd be surprised how often at least one of us is around. If you do have a question, be sure to mention '''m0zes''' and/or '''kylehutson''' in your message, and it should grab our attention. IRC is also available from a web browser [[Special:WebChat|here]].&lt;br /&gt;
&lt;br /&gt;
== How do I get priority access? ==&lt;br /&gt;
We're glad you asked! Contact [mailto:dan@ksu.edu Dr. Dan Andresen] to find out how contributions to Beocat will prioritize your access to Beocat.&lt;br /&gt;
&lt;br /&gt;
== Policies ==&lt;br /&gt;
You can find our policies [[Policy|here]]&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Hadoop&amp;diff=76</id>
		<title>Hadoop</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Hadoop&amp;diff=76"/>
		<updated>2014-07-09T15:47:02Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Hadoop ==&lt;br /&gt;
[http://hadoop.apache.org/ Hadoop] is a &amp;quot;Big Data&amp;quot; distributed processing service. It is primarily used for very large data sets (greater than 1 TB).&lt;br /&gt;
&lt;br /&gt;
Hadoop does not integrate well with SGE (or, for that matter, any other HPC scheduling system). So we have created our own separate Cloudera Hadoop cluster to accommodate the increased usage of Hadoop on campus.&lt;br /&gt;
&lt;br /&gt;
To use Hadoop:&lt;br /&gt;
* Login to Beocat&lt;br /&gt;
* From there login to the Hadoop headnode, named 'theia'. &amp;lt;tt&amp;gt;ssh theia&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Copy files into or out of the Hadoop filesystem. Use &amp;lt;tt&amp;gt;hadoop fs -put&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;hadoop fs -get&amp;lt;/tt&amp;gt; to copy files. Note that the Hadoop filesystem is both smaller than the Beocat filesystem and not backed up. Please copy data back out of Hadoop as soon as you are done using it. '''Data which remains untouched may be deleted with no prior notice.'''&lt;br /&gt;
* Run your Hadoop job. &amp;lt;tt&amp;gt;hadoop jar path/to/file.jar&amp;lt;/tt&amp;gt;&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Hadoop&amp;diff=75</id>
		<title>Hadoop</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=Hadoop&amp;diff=75"/>
		<updated>2014-07-09T15:44:44Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Created page with &amp;quot;== Hadoop == Hadoop does not integrate well with SGE (or, for that matter, any other HPC scheduling system). So we have created our own separate Cloudera Hadoop cluster to acc...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Hadoop ==&lt;br /&gt;
Hadoop does not integrate well with SGE (or, for that matter, any other HPC scheduling system). So we have created our own separate Cloudera Hadoop cluster to accommodate the increased usage of Hadoop on campus.&lt;br /&gt;
&lt;br /&gt;
To use Hadoop:&lt;br /&gt;
* Login to Beocat&lt;br /&gt;
* From there login to the Hadoop headnode, named 'theia'. &amp;lt;tt&amp;gt;ssh theia&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Copy files into or out of the Hadoop filesystem. Use &amp;lt;tt&amp;gt;hadoop fs -put&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;hadoop fs -get&amp;lt;/tt&amp;gt; to copy files. Note that the Hadoop filesystem is both smaller than the Beocat filesystem and is not backed up. Please copy data back out of Hadoop as soon as you are done using it. '''Data which remains untouched may be deleted without prior notice.'''&lt;br /&gt;
* Run your Hadoop job: &amp;lt;tt&amp;gt;hadoop jar path/to/file.jar&amp;lt;/tt&amp;gt;&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OLD_DEPRECATED_AdvancedSGE&amp;diff=74</id>
		<title>OLD DEPRECATED AdvancedSGE</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OLD_DEPRECATED_AdvancedSGE&amp;diff=74"/>
		<updated>2014-07-09T14:43:06Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added SGE Environment Variables&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SGEBasics]] page, we have several other requestable resources. Generally, if you don't know whether you need a particular resource, you should use the default. The list below can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;qconf -sc | awk '{ if ($5 != &amp;quot;NO&amp;quot;) { print }}'&amp;lt;/tt&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
!name&lt;br /&gt;
!shortcut&lt;br /&gt;
!type&lt;br /&gt;
!relop&lt;br /&gt;
!requestable&lt;br /&gt;
!consumable&lt;br /&gt;
!default&lt;br /&gt;
!urgency&lt;br /&gt;
|-&lt;br /&gt;
|arch&lt;br /&gt;
|a&lt;br /&gt;
|RESTRING&lt;br /&gt;
|==&lt;br /&gt;
|YES&lt;br /&gt;
|NO&lt;br /&gt;
|NONE&lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|avx&lt;br /&gt;
|avx&lt;br /&gt;
|BOOL        &lt;br /&gt;
|==     &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|FALSE    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|calendar            &lt;br /&gt;
|c          &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cpu                 &lt;br /&gt;
|cpu        &lt;br /&gt;
|DOUBLE      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cpu_flags           &lt;br /&gt;
|c_f        &lt;br /&gt;
|STRING      &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cuda                &lt;br /&gt;
|cuda       &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|JOB        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|display_win_gui     &lt;br /&gt;
|dwg        &lt;br /&gt;
|BOOL        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|exclusive           &lt;br /&gt;
|excl       &lt;br /&gt;
|BOOL        &lt;br /&gt;
|EXCL    &lt;br /&gt;
|YES         &lt;br /&gt;
|YES        &lt;br /&gt;
|0        &lt;br /&gt;
|1000&lt;br /&gt;
|-&lt;br /&gt;
|h_core              &lt;br /&gt;
|h_core     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_cpu               &lt;br /&gt;
|h_cpu      &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_data              &lt;br /&gt;
|h_data     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_fsize             &lt;br /&gt;
|h_fsize    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_rss               &lt;br /&gt;
|h_rss      &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_rt                &lt;br /&gt;
|h_rt       &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|FORCED      &lt;br /&gt;
|NO        &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_stack             &lt;br /&gt;
|h_stack    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_vmem              &lt;br /&gt;
|h_vmem     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|hostname            &lt;br /&gt;
|h          &lt;br /&gt;
|HOST        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|infiniband          &lt;br /&gt;
|ib         &lt;br /&gt;
|BOOL        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|FALSE    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_core              &lt;br /&gt;
|core       &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_socket            &lt;br /&gt;
|socket     &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_thread            &lt;br /&gt;
|thread     &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_topology          &lt;br /&gt;
|topo       &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_topology_inuse    &lt;br /&gt;
|utopo      &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_free            &lt;br /&gt;
|mf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_total           &lt;br /&gt;
|mt         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_used            &lt;br /&gt;
|mu         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0       &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|memory              &lt;br /&gt;
|mem        &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|FORCED      &lt;br /&gt;
|YES        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|num_proc            &lt;br /&gt;
|p          &lt;br /&gt;
|INT         &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|qname               &lt;br /&gt;
|q          &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_core              &lt;br /&gt;
|s_core     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_cpu               &lt;br /&gt;
|s_cpu      &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_data              &lt;br /&gt;
|s_data     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_fsize             &lt;br /&gt;
|s_fsize    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_rss               &lt;br /&gt;
|s_rss      &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_rt                &lt;br /&gt;
|s_rt       &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_stack             &lt;br /&gt;
|s_stack    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_vmem              &lt;br /&gt;
|s_vmem     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|slots               &lt;br /&gt;
|s          &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|YES        &lt;br /&gt;
|1        &lt;br /&gt;
|1000&lt;br /&gt;
|-&lt;br /&gt;
|swap_free           &lt;br /&gt;
|sf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_rate           &lt;br /&gt;
|sr         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_rsvd           &lt;br /&gt;
|srsv       &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_total          &lt;br /&gt;
|st         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_used           &lt;br /&gt;
|su         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_free        &lt;br /&gt;
|vf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_total       &lt;br /&gt;
|vt         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_used        &lt;br /&gt;
|vu         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The good news is that nobody ever uses most of these. There are a couple of exceptions, though:&lt;br /&gt;
=== Infiniband ===&lt;br /&gt;
First of all, just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. Infiniband is a high-speed host-to-host communication fabric used in conjunction with MPI jobs (discussed below); it does absolutely no good for jobs running in the 'single' parallel environment. Several times we have had jobs which could have run just fine, except that the submitter requested Infiniband and all the nodes with Infiniband were busy. In fact, some of our fastest nodes do not have Infiniband, so by requesting it when you don't need it, you may actually be slowing down your job. To request Infiniband, add &amp;lt;tt&amp;gt;-l ib=true&amp;lt;/tt&amp;gt; to your qsub command-line.&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. We have a very small number of nodes which have GPUs installed. To request one of these nodes, add &amp;lt;tt&amp;gt;-l cuda=true&amp;lt;/tt&amp;gt; to your qsub command-line.&lt;br /&gt;
=== Exclusive ===&lt;br /&gt;
Some programs just don't play nicely with others: they will attempt to use all available memory, or all the cores they can grab. The way to be a nice neighbor if your program has this problem is to request exclusive use of a node with &amp;lt;tt&amp;gt;-l excl=true&amp;lt;/tt&amp;gt;. This can also be useful for benchmarking, where you can be sure that no other jobs are interfering with yours.&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
Intranode jobs are easier to code and can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP] or Java's threads. Many times, your program will need to know how many cores you want it to use. Many will use all available cores if not told explicitly otherwise. This can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the qsub directive '&amp;lt;tt&amp;gt;-pe single ''n''&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use &amp;lt;tt&amp;gt;$NSLOTS&amp;lt;/tt&amp;gt; to tell how many cores you've been allocated.&lt;br /&gt;
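As a minimal sketch of that pattern (runnable in any POSIX shell, not specific to Beocat), a job script can hand the allocated core count to a threaded program through its environment. The fallback value of 1 is only for illustration, since &amp;lt;tt&amp;gt;$NSLOTS&amp;lt;/tt&amp;gt; is set only inside a running job:&lt;br /&gt;

```shell
# Use the SGE-provided core count if present; fall back to 1 elsewhere.
OMP_NUM_THREADS=${NSLOTS:-1}
export OMP_NUM_THREADS
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS"
# An OpenMP program launched from this script would then use that many threads.
```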
=== Internode (MPI) jobs ===&lt;br /&gt;
&amp;quot;Talking&amp;quot; between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;-pe single ''n''&amp;lt;/tt&amp;gt;' for your qsub request, you will use one of the following:&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Parallel Environment !! Description&lt;br /&gt;
|-&lt;br /&gt;
|mpi-fill&lt;br /&gt;
|This environment will use as many slots on each node as it can until it reaches the number of cores you have requested.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-spread&lt;br /&gt;
|This environment will spread itself out over as many nodes as possible until it reaches the number of cores you have requested.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-1&lt;br /&gt;
|This environment will allocate the slots you've requested 1 per node.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-2&lt;br /&gt;
|This environment will allocate the slots you've requested 2 per node. You must request cores as a multiple of 2.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-4&lt;br /&gt;
|This environment will allocate the slots you've requested 4 per node. You must request cores as a multiple of 4.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-8&lt;br /&gt;
|This environment will allocate the slots you've requested 8 per node. You must request cores as a multiple of 8.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-10&lt;br /&gt;
|This environment will allocate the slots you've requested 10 per node. You must request cores as a multiple of 10.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-12&lt;br /&gt;
|This environment will allocate the slots you've requested 12 per node. You must request cores as a multiple of 12.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-16&lt;br /&gt;
|This environment will allocate the slots you've requested 16 per node. You must request cores as a multiple of 16.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-80&lt;br /&gt;
|This environment will allocate the slots you've requested 80 per node. You must request cores as a multiple of 80.&lt;br /&gt;
|}&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-4 16&amp;lt;/tt&amp;gt; will give you 4 chunks of 4 cores apiece. They might all happen to be allocated on the same node (16 cores), on 4 different nodes (4 cores each), on 3 nodes (8 cores on one and 4 cores on the other two), or on 2 nodes (8 cores each).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-fill 40&amp;lt;/tt&amp;gt; will give you 40 cores, but will attempt to get them all on the same node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-fill 100&amp;lt;/tt&amp;gt; will give you 100 cores, and place them on as few nodes as possible. In this case it's likely you would get a full mage (80 cores) and either part of another mage (the remaining 20 cores) or one of the 20-core elves.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-spread 40&amp;lt;/tt&amp;gt; will give you 40 cores, and will attempt to place each on a separate node.&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
All memory requests are '''per core'''. One of the more common scenarios is where somebody needs, say, 20 cores and 400 GB of memory, so they make a request like '&amp;lt;tt&amp;gt;-pe single 20 -l mem=400G&amp;lt;/tt&amp;gt;'. This will never run, because what you are really requesting is 20 cores and 8000 GB of memory (20 * 400). Since we have no nodes with 8000 GB of memory, the job will never run. Instead, divide the 400 GB total memory request by the number of cores (20), so the correct request would be '&amp;lt;tt&amp;gt;-pe single 20 -l mem=20G&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
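The per-core arithmetic can be sketched in a few lines of shell (the numbers are the illustrative ones from the scenario above):&lt;br /&gt;

```shell
# 20 cores needing 400 GB total means 400 / 20 = 20 GB per core.
total_gb=400
cores=20
per_core=$(( total_gb / cores ))
# This prints the request you would actually pass to qsub.
echo "-pe single $cores -l mem=${per_core}G"
```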
== Other Handy SGE Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs, aside from resource requests, is to have SGE email you when a job changes its status. This takes two directives to qsub: '&amp;lt;tt&amp;gt;-M ''someone@somewhere.com''&amp;lt;/tt&amp;gt;' gives the email address to which to send status updates. '&amp;lt;tt&amp;gt;-m abe&amp;lt;/tt&amp;gt;' is probably the most common directive given for ''when'' to send updates: it will send email messages when a job (a)borts, (b)egins, or (e)nds. Other possibilities are (s)uspended and (n)ever.&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-N ''JobName''&amp;lt;/tt&amp;gt;' qsub directive.&lt;br /&gt;
=== Combining Output Streams ===&lt;br /&gt;
Normally, SGE will create two files for output. One will be .e''jobnumber'' and the other .o''jobnumber''. If you want both of these to be combined into a single file, you can use the qsub directive '&amp;lt;tt&amp;gt;-j y&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, jobs run from your home directory. Many programs incorrectly assume that you are running them from the directory where the job was submitted. You can use the '&amp;lt;tt&amp;gt;-cwd&amp;lt;/tt&amp;gt;' directive to have your job start in the &amp;quot;current working directory&amp;quot; you were in when submitting it.&lt;br /&gt;
=== SGE Environment Variables ===&lt;br /&gt;
Within an actual job, sometimes you need to know specific things about the running environment to set up your scripts correctly. Here is a listing of environment variables that SGE makes available to you. Of course, the values of these variables will differ based on many factors.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
HOSTNAME=titan1.beocat&lt;br /&gt;
SGE_TASK_STEPSIZE=undefined&lt;br /&gt;
SGE_INFOTEXT_MAX_COLUMN=5000&lt;br /&gt;
SHELL=/usr/local/bin/sh&lt;br /&gt;
NHOSTS=2&lt;br /&gt;
SGE_O_WORKDIR=/homes/mozes&lt;br /&gt;
TMPDIR=/tmp/105.1.batch.q&lt;br /&gt;
SGE_O_HOME=/homes/mozes&lt;br /&gt;
SGE_ARCH=lx24-amd64&lt;br /&gt;
SGE_CELL=default&lt;br /&gt;
RESTARTED=0&lt;br /&gt;
ARC=lx24-amd64&lt;br /&gt;
USER=mozes&lt;br /&gt;
QUEUE=batch.q&lt;br /&gt;
PVM_ARCH=LINUX64&lt;br /&gt;
SGE_TASK_ID=undefined&lt;br /&gt;
SGE_BINARY_PATH=/opt/sge/bin/lx24-amd64&lt;br /&gt;
SGE_STDERR_PATH=/homes/mozes/sge_test.sub.e105&lt;br /&gt;
SGE_STDOUT_PATH=/homes/mozes/sge_test.sub.o105&lt;br /&gt;
SGE_ACCOUNT=sge&lt;br /&gt;
SGE_RSH_COMMAND=builtin&lt;br /&gt;
JOB_SCRIPT=/opt/sge/default/spool/titan1/job_scripts/105&lt;br /&gt;
JOB_NAME=sge_test.sub&lt;br /&gt;
SGE_NOMSG=1&lt;br /&gt;
SGE_ROOT=/opt/sge&lt;br /&gt;
REQNAME=sge_test.sub&lt;br /&gt;
SGE_JOB_SPOOL_DIR=/opt/sge/default/spool/titan1/active_jobs/105.1&lt;br /&gt;
ENVIRONMENT=BATCH&lt;br /&gt;
PE_HOSTFILE=/opt/sge/default/spool/titan1/active_jobs/105.1/pe_hostfile&lt;br /&gt;
SGE_CWD_PATH=/homes/mozes&lt;br /&gt;
NQUEUES=2&lt;br /&gt;
SGE_O_LOGNAME=mozes&lt;br /&gt;
SGE_O_MAIL=/var/mail/mozes&lt;br /&gt;
TMP=/tmp/105.1.batch.q&lt;br /&gt;
JOB_ID=105&lt;br /&gt;
LOGNAME=mozes&lt;br /&gt;
PE=mpi-fill&lt;br /&gt;
SGE_TASK_FIRST=undefined&lt;br /&gt;
SGE_O_HOST=loki&lt;br /&gt;
SGE_O_SHELL=/bin/bash&lt;br /&gt;
SGE_CLUSTER_NAME=beocat&lt;br /&gt;
REQUEST=sge_test.sub&lt;br /&gt;
NSLOTS=32&lt;br /&gt;
SGE_STDIN_PATH=/dev/null&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Sometimes it is nice to know which hosts you have access to during a PE job; check the file named by &amp;lt;tt&amp;gt;PE_HOSTFILE&amp;lt;/tt&amp;gt; for that. If your job has been restarted, it is nice to be able to change what happens rather than redoing all of your work; if this is the case, &amp;lt;tt&amp;gt;RESTARTED&amp;lt;/tt&amp;gt; will equal 1. There are lots of useful environment variables there; I will leave it to you to identify the ones you want.&lt;br /&gt;
&lt;br /&gt;
Some of the most commonly used variables are $NSLOTS, $HOSTNAME, and $SGE_TASK_ID (used for array jobs, discussed below).&lt;br /&gt;
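As a sketch of reading the PE host file: each line of a pe_hostfile lists a hostname, a slot count, a queue, and a processor range. The file built below is a made-up example of that format (hostnames and slot counts are illustrative); inside a real job you would read &amp;quot;$PE_HOSTFILE&amp;quot; instead:&lt;br /&gt;

```shell
# Build an example pe_hostfile (made-up hosts; real jobs get one from SGE).
printf 'mage07.beocat 8 batch.q@mage07.beocat UNDEFINED\n' > pe_hostfile.example
printf 'mage08.beocat 8 batch.q@mage08.beocat UNDEFINED\n' >> pe_hostfile.example
# Inside a job, use "$PE_HOSTFILE" here instead of the example file.
hostfile=pe_hostfile.example
# Sum the second column to get the total slots allocated across all hosts.
total=$(awk '{ s += $2 } END { print s }' "$hostfile")
echo "total slots: $total"
```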
== Running from a qsub Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'qsub -l mem=2G,h_rt=10:00 -pe single 8 -N MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which outlines all of these for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample qsub script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a line beginning with '#' is ignored. However, in&lt;br /&gt;
## the case of qsub, lines beginning with #$ are commands for qsub itself, so&lt;br /&gt;
## I have taken the convention here of starting *every* line with a '#'. Just&lt;br /&gt;
## delete the first '#' if you want to use that line, and then modify it to&lt;br /&gt;
## your own purposes. The only exception here is the first line, which *must*&lt;br /&gt;
## be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##$ -l mem=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime. Default is 1 hour (1:00:00)&lt;br /&gt;
##$ -l h_rt=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it. Default is &amp;quot;FALSE&amp;quot;&lt;br /&gt;
##$ -l ib=TRUE&lt;br /&gt;
&lt;br /&gt;
## CUDA directive. If you don't know what this is, you probably don't need it.&lt;br /&gt;
## Default is &amp;quot;FALSE&amp;quot;&lt;br /&gt;
##$ -l cuda=TRUE&lt;br /&gt;
&lt;br /&gt;
## Parallel environment. Syntax is '-pe Environment NumberOfCores' A list of&lt;br /&gt;
## valid environments can be found at&lt;br /&gt;
## http://support.cis.ksu.edu/BeocatDocs/SunGridEngine (section 3.2). One&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from emailing us at beocat@cis.ksu.edu to see how we can assist in&lt;br /&gt;
## getting your job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
## &amp;quot;single 1&amp;quot;&lt;br /&gt;
##$ -pe single 12&lt;br /&gt;
##$ -pe mpi-1 2&lt;br /&gt;
##$ -pe mpi-fill 20&lt;br /&gt;
##$ -pe mpi-spread 16&lt;br /&gt;
&lt;br /&gt;
## Checkpointing. Options are BLCR or dmtcp. Default is no checkpointing.&lt;br /&gt;
##$ -ckpt dmtcp&lt;br /&gt;
&lt;br /&gt;
## Use the current working directory instead of your home directory&lt;br /&gt;
##$ -cwd&lt;br /&gt;
&lt;br /&gt;
## Merge output and error text streams into a single stream&lt;br /&gt;
##$ -j y&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##$ -N MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&lt;br /&gt;
## Send email when a job is aborted (a), begins (b), and/or ends (e)&lt;br /&gt;
##$ -m abe&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
##$ -M myemail@ksu.edu&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of SGE's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
They are enabled with the following option to qsub:&lt;br /&gt;
&lt;br /&gt;
  -t n[-m[:s]]&lt;br /&gt;
     Submits  a  so  called  Array  Job,  i.e. an array of identical tasks being differentiated only by an index number and being treated by  Grid&lt;br /&gt;
     Engine almost like a series of jobs. The option argument to -t specifies the number of array job tasks and the index  number  which  will  be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SGE_TASK_ID. The option arguments&lt;br /&gt;
     n, m and s will be available through the environment variables SGE_TASK_FIRST, SGE_TASK_LAST and  SGE_TASK_STEPSIZE.&lt;br /&gt;
 &lt;br /&gt;
     Following restrictions apply to the values n and m:&lt;br /&gt;
 &lt;br /&gt;
            1 &amp;lt;= n &amp;lt;= 1,000,000&lt;br /&gt;
            1 &amp;lt;= m &amp;lt;= 1,000,000&lt;br /&gt;
            n &amp;lt;= m&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or  a  range  with  a  step  size.&lt;br /&gt;
     Hence,  the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SGE_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array  jobs  are  commonly  used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
      number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks will be written into different files with the default location&lt;br /&gt;
 &lt;br /&gt;
     &amp;lt;jobname&amp;gt;.['e'|'o']&amp;lt;job_id&amp;gt;'.'&amp;lt;task_id&amp;gt;&lt;br /&gt;
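For instance, the range &amp;lt;tt&amp;gt;2-10:2&amp;lt;/tt&amp;gt; from the text above expands to five task ids, and each task writes its own output files. The job name and job number below are hypothetical, used only to show the naming pattern:&lt;br /&gt;

```shell
# Enumerate the task ids produced by '-t 2-10:2'.
ids=$(seq 2 2 10)
echo $ids
# Each task writes jobname.oJOBID.TASKID and jobname.eJOBID.TASKID.
for id in $ids; do
  echo "task $id writes myjob.o105.$id and myjob.e105.$id"
done
```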
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses, one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1, that I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable, and submit each script. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#$ -t 50-200:50&lt;br /&gt;
RUNSIZE=$SGE_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and SGE understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
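What SGE does with that script can be simulated in a plain shell. Here app1 is the user's hypothetical program, so the command is only echoed rather than run:&lt;br /&gt;

```shell
# Each of the four tasks sees a different SGE_TASK_ID: 50, 100, 150, 200.
count=0
for SGE_TASK_ID in $(seq 50 50 200); do
  echo "task $SGE_TASK_ID would run: app1 $SGE_TASK_ID dataset.txt"
  count=$(( count + 1 ))
done
echo "tasks: $count"
```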
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to take each line of my dataset, generate a new submit script, and submit the job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     qsub ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as qsub has to verify each job as it is submitted. It can be done easily with array jobs, as long as you know the number of lines in the dataset. That number can be obtained with &amp;lt;tt&amp;gt;wc -l dataset.txt&amp;lt;/tt&amp;gt;; in this case, let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#$ -t 1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SGE_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses command substitution via backticks, having sed print only line number $SGE_TASK_ID from the file dataset.txt.&lt;br /&gt;
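The line-selection trick can be tried by itself with a small stand-in dataset (contents illustrative; SGE sets SGE_TASK_ID for real array tasks, so it is fixed by hand here):&lt;br /&gt;

```shell
# Build a three-line stand-in for dataset.txt.
printf 'alpha 1\nbeta 2\ngamma 3\n' > dataset.example
# Pretend we are task 2 of the array job.
SGE_TASK_ID=2
# Print only line number $SGE_TASK_ID, exactly as the array-job script does.
line=$(sed -n "${SGE_TASK_ID}p" dataset.example)
echo "app2 would receive: $line"
```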
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit because it is one job instead of 5000, so qsub doesn't have to verify as many.&lt;br /&gt;
&lt;br /&gt;
To give you an idea of the time saved: submitting one job takes 1-2 seconds, so submitting 5,000 jobs takes 5,000-10,000 seconds, or roughly 1.5-3 hours.&lt;br /&gt;
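The arithmetic behind that estimate, spelled out:&lt;br /&gt;

```shell
# 5000 submissions at 1-2 seconds each.
jobs=5000
lo=$(( jobs * 1 ))
hi=$(( jobs * 2 ))
echo "$lo to $hi seconds"
```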
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'qrsh'. qrsh uses the exact same command-line arguments as qsub. If no node is available with your resource requirements, qrsh will tell you:&lt;br /&gt;
 Your &amp;quot;qrsh&amp;quot; request could not be scheduled, try again later.&lt;br /&gt;
Note that, like qsub, your interactive job will time out after your allotted time has passed.&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The qacct tool will read SGE's accounting file and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== qacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
qacct -j 1122334455&lt;br /&gt;
# if you don't know the job id, you can look at your jobs over some number of days; in this case, the past 14 days:&lt;br /&gt;
qacct -o $USER -d 14 -j&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
 &amp;lt;tt&amp;gt;qname        batch.q             &lt;br /&gt;
 hostname     mage07.beocat       &lt;br /&gt;
 group        some_user_users        &lt;br /&gt;
 owner        some_user              &lt;br /&gt;
 project      BEODEFAULT          &lt;br /&gt;
 department   defaultdepartment   &lt;br /&gt;
 jobname      my_job_script.sh  &lt;br /&gt;
 jobnumber    1122334455          &lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 exit_status  1                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;ru_wallclock 1s&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;ru_utime     0.030s&lt;br /&gt;
 ru_stime     0.030s&lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 arid         undefined&lt;br /&gt;
 category     -u some_user -q batch.q,long.q -l h_rt=604800,mem_free=1024.0M,memory=2G&amp;lt;/tt&amp;gt;&lt;br /&gt;
Look at the line showing ru_wallclock: it reads 1s, meaning the job started and then promptly ended. This points to something being wrong with your submission script; perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
 &amp;lt;tt&amp;gt;qname        batch.q             &lt;br /&gt;
 hostname     scout59.beocat      &lt;br /&gt;
 group        some_user_users     &lt;br /&gt;
 owner        some_user           &lt;br /&gt;
 project      BEODEFAULT          &lt;br /&gt;
 department   defaultdepartment   &lt;br /&gt;
 jobname      my_job_script.sh           &lt;br /&gt;
 jobnumber    1122334455            &lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...            &lt;br /&gt;
 slots        1                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;failed       37  : qmaster enforced h_rt, h_cpu, or h_vmem limit&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;exit_status  0                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;ru_wallclock 21600s&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;ru_utime     0.130s&lt;br /&gt;
 ru_stime     0.020s&lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 arid         undefined&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;category     -u some_user -q batch.q,long.q -l h_rt=21600,mem_free=512.0M,memory=1G&amp;lt;/tt&amp;gt;&lt;br /&gt;
If you look at the lines showing failed, ru_wallclock, and category, you can see some pointers to the issue.&lt;br /&gt;
The job didn't finish because the scheduler (qmaster) enforced a limit. Looking at the category line, the only limit requested was h_rt, so it was a runtime (wallclock) limit.&lt;br /&gt;
Comparing ru_wallclock with the h_rt request, we can see that the job ran until the h_rt time was hit, at which point the scheduler enforced the limit and killed the job. You will need to resubmit the job and request more time.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=CUDA&amp;diff=71</id>
		<title>CUDA</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=CUDA&amp;diff=71"/>
		<updated>2014-07-09T14:22:28Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Created page with &amp;quot;== CUDA Overview == CUDA is a feature set for programming nVidia GPUs. We have 16 nodes with nVidia Tesla m2050 GPUs....&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== CUDA Overview ==&lt;br /&gt;
[[wikipedia:CUDA|CUDA]] is a feature set for programming nVidia [[wikipedia:Graphics_processing_unit|GPUs]]. We have 16 nodes with nVidia Tesla m2050 GPUs. These GPUs have 448 cores running at 1.15 GHz, and are very fast at floating point math - over a TeraFLOP! However, programming in CUDA is difficult for the uninitiated.&lt;br /&gt;
&lt;br /&gt;
== Training videos ==&lt;br /&gt;
CUDA Programming Model Overview: [http://www.youtube.com/watch?v=aveYOlBSe-Y http://www.youtube.com/watch?v=aveYOlBSe-Y]&lt;br /&gt;
&lt;br /&gt;
CUDA Programming Basics Part I (Host functions): [http://www.youtube.com/watch?v=79VARRFwQgY http://www.youtube.com/watch?v=79VARRFwQgY]&lt;br /&gt;
&lt;br /&gt;
CUDA Programming Basics Part II (Device functions): [http://www.youtube.com/watch?v=G5-iI1ogDW4 http://www.youtube.com/watch?v=G5-iI1ogDW4]&lt;br /&gt;
&lt;br /&gt;
== Compiling CUDA Applications ==&lt;br /&gt;
nvcc is the compiler for CUDA applications. When compiling your applications manually you will need to keep the following in mind:&lt;br /&gt;
&lt;br /&gt;
* The CUDA development headers are located here: /opt/cuda/sdk/C/common/inc&lt;br /&gt;
* The CUDA architecture is: sm_20&lt;br /&gt;
* The CUDA SDK is currently not available on the headnode. (compile on the nodes with CUDA, either in your jobscript or via &amp;lt;tt&amp;gt;qrsh -l cuda=TRUE&amp;lt;/tt&amp;gt;)&lt;br /&gt;
* '''Do not run your CUDA applications on the headnode. I cannot guarantee it will run, and it will give you terrible results if it does run.'''&lt;br /&gt;
&lt;br /&gt;
Putting it all together you can compile CUDA applications as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nvcc -I /opt/cuda/sdk/C/common/inc -arch sm_20 &amp;lt;source&amp;gt;.cu -o &amp;lt;output&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
== Example ==&lt;br /&gt;
=== Create your Application ===&lt;br /&gt;
Copy the following application into Beocat as vecadd.cu&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c&amp;quot;&amp;gt;&lt;br /&gt;
//  Kernel definition, see also section 4.2.3 of Nvidia Cuda Programming Guide&lt;br /&gt;
__global__  void vecAdd(float* A, float* B, float* C)&lt;br /&gt;
{&lt;br /&gt;
            // threadIdx.x is a built-in variable  provided by CUDA at runtime&lt;br /&gt;
            int i = threadIdx.x;&lt;br /&gt;
            A[i] = 0;&lt;br /&gt;
            B[i] = i;&lt;br /&gt;
            C[i] = A[i] + B[i];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
#include  &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#define  SIZE 10&lt;br /&gt;
int  main()&lt;br /&gt;
{&lt;br /&gt;
   int N=SIZE;&lt;br /&gt;
   float A[SIZE], B[SIZE], C[SIZE];&lt;br /&gt;
   float *devPtrA;&lt;br /&gt;
   float *devPtrB;&lt;br /&gt;
   float *devPtrC;&lt;br /&gt;
   int memsize= SIZE * sizeof(float);&lt;br /&gt;
&lt;br /&gt;
   cudaMalloc((void**)&amp;amp;devPtrA, memsize);&lt;br /&gt;
   cudaMalloc((void**)&amp;amp;devPtrB, memsize);&lt;br /&gt;
   cudaMalloc((void**)&amp;amp;devPtrC, memsize);&lt;br /&gt;
   cudaMemcpy(devPtrA, A, memsize,  cudaMemcpyHostToDevice);&lt;br /&gt;
   cudaMemcpy(devPtrB, B, memsize,  cudaMemcpyHostToDevice);&lt;br /&gt;
   // __global__ functions are called:  Func&amp;lt;&amp;lt;&amp;lt; Dg, Db, Ns  &amp;gt;&amp;gt;&amp;gt;(parameter);&lt;br /&gt;
   vecAdd&amp;lt;&amp;lt;&amp;lt;1, N&amp;gt;&amp;gt;&amp;gt;(devPtrA,  devPtrB, devPtrC);&lt;br /&gt;
   cudaMemcpy(C, devPtrC, memsize,  cudaMemcpyDeviceToHost);&lt;br /&gt;
&lt;br /&gt;
   for (int i=0; i&amp;lt;SIZE; i++)&lt;br /&gt;
        printf(&amp;quot;C[%d]=%f\n&amp;quot;,i,C[i]);&lt;br /&gt;
&lt;br /&gt;
  cudaFree(devPtrA);&lt;br /&gt;
  cudaFree(devPtrB);&lt;br /&gt;
  cudaFree(devPtrC);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== Gain Access to a CUDA-capable Node ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
qrsh -l cuda=TRUE&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
=== Compile Your Application ===&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
nvcc -I /opt/cuda/sdk/C/common/inc -arch sm_20 vecadd.cu -o vecadd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This will create a program with the name 'vecadd' (specified by the '-o' flag).&lt;br /&gt;
=== Run Your Application ===&lt;br /&gt;
Run the program as you usually would, namely&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
./vecadd&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you don't want to run the program interactively because it is a large job, you can submit it via qsub; just be sure to add the '&amp;lt;tt&amp;gt;-l cuda=true&amp;lt;/tt&amp;gt;' directive.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OLD_DEPRECATED_AdvancedSGE&amp;diff=70</id>
		<title>OLD DEPRECATED AdvancedSGE</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OLD_DEPRECATED_AdvancedSGE&amp;diff=70"/>
		<updated>2014-07-08T21:46:48Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: SGE Features and Array Jobs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SGEBasics]] page, we have several other requestable resources. Generally, if you don't know whether you need a particular resource, you should use the default. The list below can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;qconf -sc | awk '{ if ($5 != &amp;quot;NO&amp;quot;) { print }}'&amp;lt;/tt&amp;gt;&lt;br /&gt;
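The awk filter above keeps only rows whose fifth column (requestable) is not &amp;quot;NO&amp;quot;. Its behavior can be illustrated locally on stand-in input, no qconf needed:

```shell
# Two fabricated rows in qconf -sc column order; col 5 is "requestable".
# Only the row with requestable != NO survives the filter.
printf 'arch a RESTRING == YES NO NONE 0\nfake f INT <= NO NO 0 0\n' | \
  awk '{ if ($5 != "NO") { print } }'
```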
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
!name&lt;br /&gt;
!shortcut&lt;br /&gt;
!type&lt;br /&gt;
!relop&lt;br /&gt;
!requestable&lt;br /&gt;
!consumable&lt;br /&gt;
!default&lt;br /&gt;
!urgency&lt;br /&gt;
|-&lt;br /&gt;
|arch&lt;br /&gt;
|a&lt;br /&gt;
|RESTRING&lt;br /&gt;
|==&lt;br /&gt;
|YES&lt;br /&gt;
|NO&lt;br /&gt;
|NONE&lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|avx&lt;br /&gt;
|avx&lt;br /&gt;
|BOOL        &lt;br /&gt;
|==     &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|FALSE    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|calendar            &lt;br /&gt;
|c          &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cpu                 &lt;br /&gt;
|cpu        &lt;br /&gt;
|DOUBLE      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cpu_flags           &lt;br /&gt;
|c_f        &lt;br /&gt;
|STRING      &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cuda                &lt;br /&gt;
|cuda       &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|JOB        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|display_win_gui     &lt;br /&gt;
|dwg        &lt;br /&gt;
|BOOL        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|exclusive           &lt;br /&gt;
|excl       &lt;br /&gt;
|BOOL        &lt;br /&gt;
|EXCL    &lt;br /&gt;
|YES         &lt;br /&gt;
|YES        &lt;br /&gt;
|0        &lt;br /&gt;
|1000&lt;br /&gt;
|-&lt;br /&gt;
|h_core              &lt;br /&gt;
|h_core     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_cpu               &lt;br /&gt;
|h_cpu      &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_data              &lt;br /&gt;
|h_data     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_fsize             &lt;br /&gt;
|h_fsize    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_rss               &lt;br /&gt;
|h_rss      &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_rt                &lt;br /&gt;
|h_rt       &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|FORCED      &lt;br /&gt;
|NO        &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_stack             &lt;br /&gt;
|h_stack    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_vmem              &lt;br /&gt;
|h_vmem     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|hostname            &lt;br /&gt;
|h          &lt;br /&gt;
|HOST        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|infiniband          &lt;br /&gt;
|ib         &lt;br /&gt;
|BOOL        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|FALSE    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_core              &lt;br /&gt;
|core       &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_socket            &lt;br /&gt;
|socket     &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_thread            &lt;br /&gt;
|thread     &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_topology          &lt;br /&gt;
|topo       &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_topology_inuse    &lt;br /&gt;
|utopo      &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_free            &lt;br /&gt;
|mf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_total           &lt;br /&gt;
|mt         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_used            &lt;br /&gt;
|mu         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0       &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|memory              &lt;br /&gt;
|mem        &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|FORCED      &lt;br /&gt;
|YES        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|num_proc            &lt;br /&gt;
|p          &lt;br /&gt;
|INT         &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|qname               &lt;br /&gt;
|q          &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_core              &lt;br /&gt;
|s_core     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_cpu               &lt;br /&gt;
|s_cpu      &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_data              &lt;br /&gt;
|s_data     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_fsize             &lt;br /&gt;
|s_fsize    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_rss               &lt;br /&gt;
|s_rss      &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_rt                &lt;br /&gt;
|s_rt       &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_stack             &lt;br /&gt;
|s_stack    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_vmem              &lt;br /&gt;
|s_vmem     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|slots               &lt;br /&gt;
|s          &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|YES        &lt;br /&gt;
|1        &lt;br /&gt;
|1000&lt;br /&gt;
|-&lt;br /&gt;
|swap_free           &lt;br /&gt;
|sf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_rate           &lt;br /&gt;
|sr         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_rsvd           &lt;br /&gt;
|srsv       &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_total          &lt;br /&gt;
|st         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_used           &lt;br /&gt;
|su         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_free        &lt;br /&gt;
|vf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_total       &lt;br /&gt;
|vt         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_used        &lt;br /&gt;
|vu         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The good news is that most of these nobody ever uses. There are a couple of exceptions, though:&lt;br /&gt;
=== Infiniband ===&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. Infiniband does absolutely no good if running in a 'single' parallel environment. Infiniband is a high-speed host-to-host communication fabric. It is used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could run just fine, except that the submitter requested Infiniband, and all the nodes with Infiniband were currently busy. In fact, some of our fastest nodes do not have Infiniband, so by requesting it when you don't need it, you are actually slowing down your job. To request Infiniband, add &amp;lt;tt&amp;gt;-l ib=true&amp;lt;/tt&amp;gt; to your qsub command-line.&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. We have a very small number of nodes which have GPUs installed. To request one of these nodes, add &amp;lt;tt&amp;gt;-l cuda=true&amp;lt;/tt&amp;gt; to your qsub command-line.&lt;br /&gt;
=== Exclusive ===&lt;br /&gt;
Some programs just don't play nicely with others: they will attempt to use all available memory, or will try to use all the cores they can. The way to be a nice neighbor if your program has this problem is to request exclusive use of a node with &amp;lt;tt&amp;gt;-l excl=true&amp;lt;/tt&amp;gt;. This can also be useful for benchmarking, where you can be sure that no other jobs are interfering with yours.&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
Intranode jobs are easier to code and can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP] or Java's threads. Often your program will need to know how many cores you want it to use; many will use all available cores if not told otherwise, which is a problem when you are sharing resources, as Beocat does. To request multiple cores, use the qsub directive '&amp;lt;tt&amp;gt;-pe single ''n''&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your command can take an environment variable, you can use $NSLOTS to tell it how many cores you've been allocated.&lt;br /&gt;
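For example, the allocated core count can be passed along to a threaded program via that environment variable. A minimal sketch: my_threaded_app and its --threads flag are hypothetical, and since NSLOTS is only set inside a real SGE job, we default it to 1 here:

```shell
# NSLOTS is set by SGE inside a job; default to 1 when running outside SGE.
NSLOTS=${NSLOTS:-1}
# Hypothetical threaded program taking a thread-count flag; only echoed here.
echo "my_threaded_app --threads ${NSLOTS}"
```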
=== Internode (MPI) jobs ===&lt;br /&gt;
&amp;quot;Talking&amp;quot; between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;-pe single ''n''&amp;lt;/tt&amp;gt;' for your qsub request, you will use one of the following:&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
! Parallel Environment !! Description&lt;br /&gt;
|-&lt;br /&gt;
|mpi-fill&lt;br /&gt;
|This environment will use as many slots on each node as it can until it reaches the number of cores you have requested.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-spread&lt;br /&gt;
|This environment will spread itself out over as many nodes as possible until it reaches the number of cores you have requested.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-1&lt;br /&gt;
|This environment will allocate the slots you've requested 1 per node.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-2&lt;br /&gt;
|This environment will allocate the slots you've requested 2 per node. You must request cores as a multiple of 2&lt;br /&gt;
|-&lt;br /&gt;
|mpi-4&lt;br /&gt;
|This environment will allocate the slots you've requested 4 per node. You must request cores as a multiple of 4&lt;br /&gt;
|-&lt;br /&gt;
|mpi-8&lt;br /&gt;
|This environment will allocate the slots you've requested 8 per node. You must request cores as a multiple of 8&lt;br /&gt;
|-&lt;br /&gt;
|mpi-10&lt;br /&gt;
|This environment will allocate the slots you've requested 10 per node. You must request cores as a multiple of 10&lt;br /&gt;
|-&lt;br /&gt;
|mpi-12&lt;br /&gt;
|This environment will allocate the slots you've requested 12 per node. You must request cores as a multiple of 12&lt;br /&gt;
|-&lt;br /&gt;
|mpi-16&lt;br /&gt;
|This environment will allocate the slots you've requested 16 per node. You must request cores as a multiple of 16&lt;br /&gt;
|-&lt;br /&gt;
|mpi-80&lt;br /&gt;
|This environment will allocate the slots you've requested 80 per node. You must request cores as a multiple of 80&lt;br /&gt;
|}&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-4 16&amp;lt;/tt&amp;gt; will give you 4 chunks of 4 cores apiece. They might all happen to be allocated on the same node (16 cores), on 4 different nodes (4 cores each), on 3 nodes (8 cores on one and 4 cores on the other two), or on 2 nodes (8 cores each).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-fill 40&amp;lt;/tt&amp;gt; will give you 40 cores, but will attempt to get them all on the same node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-fill 100&amp;lt;/tt&amp;gt; will give you 100 cores, and place them on as few nodes as possible. In this case it's likely you would get a full mage (80 cores) and either part of another mage (the remaining 20 cores) or one of the 20-core elves.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-spread 40&amp;lt;/tt&amp;gt; will give you 40 cores, and will attempt to place each on a separate node.&lt;br /&gt;
== Requesting memory for multi-core jobs ==&lt;br /&gt;
All memory requests are '''per core'''. One of the more common scenarios is that somebody needs, say, 20 cores and 400 GB of memory, so they make a request like '&amp;lt;tt&amp;gt;-pe single 20 -l mem=400G&amp;lt;/tt&amp;gt;'. This will never run, because what it really requests is 20 cores and 8000 GB of memory (20 * 400), and we have no nodes with 8000 GB (8 TB) of memory. Instead, divide the 400 GB total memory request by the number of cores (20), so the correct command would be '&amp;lt;tt&amp;gt;-pe single 20 -l mem=20G&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
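The arithmetic above can be sketched directly: divide the total memory wanted by the core count to get the per-core figure, then build the request from that.

```shell
# Per-core memory = total memory desired / cores requested
# (values taken from the 20-core / 400 GB example above; the qsub
# arguments are only printed, not submitted).
TOTAL_GB=400
CORES=20
PER_CORE=$((TOTAL_GB / CORES))
echo "-pe single ${CORES} -l mem=${PER_CORE}G"
```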
== Other Handy SGE Features ==&lt;br /&gt;
=== Email status changes ===&lt;br /&gt;
One of the most commonly used options when submitting jobs not related to resource requests is to have SGE email you when a job changes its status. This takes two directives to qsub: '&amp;lt;tt&amp;gt;-M ''someone@somewhere.com''&amp;lt;/tt&amp;gt;' gives the email address to which to send status updates. '&amp;lt;tt&amp;gt;-m abe&amp;lt;/tt&amp;gt;' is probably the most common directive given for ''when'' to send updates. This will send email messages when a job (a)borts, (b)egins, or (e)nds. Other possibilities are (s)uspended and (n)ever.&lt;br /&gt;
=== Job Naming ===&lt;br /&gt;
If you have several jobs in the queue, running the same script with different parameters, it's handy to have a different name for each job as it shows up in the queue. This is accomplished with the '&amp;lt;tt&amp;gt;-N ''JobName''&amp;lt;/tt&amp;gt;' qsub directive.&lt;br /&gt;
=== Combining Output Streams ===&lt;br /&gt;
Normally, SGE will create two files for output. One will be .e''jobnumber'' and the other .o''jobnumber''. If you want both of these to be combined into a single file, you can use the qsub directive '&amp;lt;tt&amp;gt;-j y&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
=== Running from the Current Directory ===&lt;br /&gt;
By default, jobs run from your home directory. Many programs incorrectly assume that you are running the script from the current directory. You can use the '&amp;lt;tt&amp;gt;-cwd&amp;lt;/tt&amp;gt;' directive to change to the &amp;quot;current working directory&amp;quot; you used when submitting the job.&lt;br /&gt;
== Running from a qsub Submit Script ==&lt;br /&gt;
No doubt after you've run a few jobs you get tired of typing something like 'qsub -l mem=2G,h_rt=10:00 -pe single 8 -N MyJobTitle MyScript.sh'. How are you supposed to remember all of these every time? The answer is to create a 'submit script', which outlines all of these options for you. Below is a sample submit script, which you can modify and use for your own purposes.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
&lt;br /&gt;
## A Sample qsub script created by Kyle Hutson&lt;br /&gt;
##&lt;br /&gt;
## Note: Usually a '#' at the beginning of a line is ignored. However, in&lt;br /&gt;
## the case of qsub, lines beginning with #$ are commands for qsub itself, so&lt;br /&gt;
## I have taken the convention here of starting *every* line with a '#'. Just&lt;br /&gt;
## delete the first '#' if you want to use that line, and then modify it for&lt;br /&gt;
## your own purposes. The only exception is the first line, which *must*&lt;br /&gt;
## be #!/bin/bash (or another valid shell).&lt;br /&gt;
&lt;br /&gt;
## Specify the amount of RAM needed _per_core_. Default is 1G&lt;br /&gt;
##$ -l mem=1G&lt;br /&gt;
&lt;br /&gt;
## Specify the maximum runtime. Default is 1 hour (1:00:00)&lt;br /&gt;
##$ -l h_rt=1:00:00&lt;br /&gt;
&lt;br /&gt;
## Require the use of infiniband. If you don't know what this is, you probably&lt;br /&gt;
## don't need it. Default is &amp;quot;FALSE&amp;quot;&lt;br /&gt;
##$ -l ib=TRUE&lt;br /&gt;
&lt;br /&gt;
## CUDA directive. If you don't know what this is, you probably don't need it.&lt;br /&gt;
## Default is &amp;quot;FALSE&amp;quot;&lt;br /&gt;
##$ -l cuda=TRUE&lt;br /&gt;
&lt;br /&gt;
## Parallel environment. Syntax is '-pe Environment NumberOfCores' A list of&lt;br /&gt;
## valid environments can be found at&lt;br /&gt;
## http://support.cis.ksu.edu/BeocatDocs/SunGridEngine (section 3.2). One&lt;br /&gt;
## quick note here. Jobs requesting 16 or fewer cores tend to get scheduled&lt;br /&gt;
## fairly quickly. If you need a job that requires more than that, you might&lt;br /&gt;
## benefit from emailing us at beocat@cis.ksu.edu to see how we can assist in&lt;br /&gt;
## getting your job scheduled in a reasonable amount of time. Default is&lt;br /&gt;
## &amp;quot;single 1&amp;quot;&lt;br /&gt;
##$ -pe single 12&lt;br /&gt;
##$ -pe mpi-1 2&lt;br /&gt;
##$ -pe mpi-fill 20&lt;br /&gt;
##$ -pe mpi-spread 16&lt;br /&gt;
&lt;br /&gt;
## Checkpointing. Options are BLCR or dmtcp. Default is no checkpointing.&lt;br /&gt;
##$ -ckpt dmtcp&lt;br /&gt;
&lt;br /&gt;
## Use the current working directory instead of your home directory&lt;br /&gt;
##$ -cwd&lt;br /&gt;
&lt;br /&gt;
## Merge output and error text streams into a single stream&lt;br /&gt;
##$ -j y&lt;br /&gt;
&lt;br /&gt;
## Name my job, to make it easier to find in the queue&lt;br /&gt;
##$ -N MyJobTitle&lt;br /&gt;
&lt;br /&gt;
## And finally, we run the job we came here to do.&lt;br /&gt;
## $HOME/ProgramDir/ProgramName ProgramArguments&lt;br /&gt;
&lt;br /&gt;
## OR, for the case of MPI-capable jobs&lt;br /&gt;
## mpirun $HOME/path/MpiJobName&lt;br /&gt;
&lt;br /&gt;
## Send email when a job is aborted (a), begins (b), and/or ends (e)&lt;br /&gt;
##$ -m abe&lt;br /&gt;
&lt;br /&gt;
## Email address to send the email to based on the above line.&lt;br /&gt;
##$ -M myemail@ksu.edu&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
== Array Jobs ==&lt;br /&gt;
One of SGE's useful options is the ability to run &amp;quot;Array Jobs&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
It can be used with the following option to qsub.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
  -t n[-m[:s]]&lt;br /&gt;
     Submits  a  so  called  Array  Job,  i.e. an array of identical tasks being differentiated only by an index number and being treated by  Grid&lt;br /&gt;
     Engine almost like a series of jobs. The option argument to -t specifies the number of array job tasks and the index  number  which  will  be&lt;br /&gt;
     associated with the tasks. The index numbers will be exported to the job tasks via the environment variable SGE_TASK_ID. The option arguments&lt;br /&gt;
     n, m and s will be available through the environment variables SGE_TASK_FIRST, SGE_TASK_LAST and  SGE_TASK_STEPSIZE.&lt;br /&gt;
 &lt;br /&gt;
     Following restrictions apply to the values n and m:&lt;br /&gt;
 &lt;br /&gt;
            1 &amp;lt;= n &amp;lt;= 1,000,000&lt;br /&gt;
            1 &amp;lt;= m &amp;lt;= 1,000,000&lt;br /&gt;
            n &amp;lt;= m&lt;br /&gt;
 &lt;br /&gt;
     The task id range specified in the option argument may be a single number, a simple range of the form n-m or  a  range  with  a  step  size.&lt;br /&gt;
     Hence,  the task id range specified by 2-10:2 would result in the task id indexes 2, 4, 6, 8, and 10, for a total of 5 identical tasks, each&lt;br /&gt;
     with the environment variable SGE_TASK_ID containing one of the 5 index numbers.&lt;br /&gt;
 &lt;br /&gt;
     Array  jobs  are  commonly  used to execute the same type of operation on varying input data sets correlated with the task index number. The&lt;br /&gt;
     number of tasks in an array job is unlimited.&lt;br /&gt;
 &lt;br /&gt;
     STDOUT and STDERR of array job tasks will be written into different files with the default location&lt;br /&gt;
 &lt;br /&gt;
     &amp;lt;jobname&amp;gt;.['e'|'o']&amp;lt;job_id&amp;gt;'.'&amp;lt;task_id&amp;gt;&lt;br /&gt;
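&lt;br /&gt;
For example, a job script named my_array.sh (a hypothetical name) submitted as job 12345 with tasks 1-3 would produce output files named&lt;br /&gt;
  my_array.sh.o12345.1  my_array.sh.e12345.1&lt;br /&gt;
  my_array.sh.o12345.2  my_array.sh.e12345.2&lt;br /&gt;
  my_array.sh.o12345.3  my_array.sh.e12345.3&lt;br /&gt;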
&lt;br /&gt;
=== Examples ===&lt;br /&gt;
==== Change the Size of the Run ====&lt;br /&gt;
Array Jobs have a variety of uses; one of the easiest to comprehend is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app1, that I need to run the exact same way, on the same data set, with only the size of the run changing.&lt;br /&gt;
&lt;br /&gt;
My original script looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
RUNSIZE=50&lt;br /&gt;
#RUNSIZE=100&lt;br /&gt;
#RUNSIZE=150&lt;br /&gt;
#RUNSIZE=200&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
For every run of that job I have to change the RUNSIZE variable and resubmit the script. This gets tedious.&lt;br /&gt;
&lt;br /&gt;
With Array Jobs the script can be written like so:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#$ -t 50-200:50&lt;br /&gt;
RUNSIZE=$SGE_TASK_ID&lt;br /&gt;
app1 $RUNSIZE dataset.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
I then submit that job, and SGE understands that it needs to run it 4 times, once for each task. It also knows that it can and should run these tasks in parallel.&lt;br /&gt;
&lt;br /&gt;
==== Choosing a Dataset ====&lt;br /&gt;
A slightly more complex use of Array Jobs is the following:&lt;br /&gt;
&lt;br /&gt;
I have an application, app2, that needs to be run against every line of my dataset. Every line changes how app2 runs slightly, but I need to compare the runs against each other.&lt;br /&gt;
&lt;br /&gt;
Originally I had to take each line of my dataset and generate a new submit script and submit the job. This was done with yet another script:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 DATASET=dataset.txt&lt;br /&gt;
 scriptnum=0&lt;br /&gt;
 while read LINE&lt;br /&gt;
 do&lt;br /&gt;
     echo &amp;quot;app2 $LINE&amp;quot; &amp;gt; ${scriptnum}.sh&lt;br /&gt;
     qsub ${scriptnum}.sh&lt;br /&gt;
     scriptnum=$(( $scriptnum + 1 ))&lt;br /&gt;
 done &amp;lt; $DATASET&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
Not only is this needlessly complex, it is also slow, as qsub has to verify each job as it is submitted. This can be done easily with array jobs, as long as you know the number of lines in the dataset. This number can be obtained like so: wc -l dataset.txt. In this case, let's call it 5000.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#$ -t 1-5000&lt;br /&gt;
app2 `sed -n &amp;quot;${SGE_TASK_ID}p&amp;quot; dataset.txt`&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
This uses command substitution via backticks, and has the sed command print only line number $SGE_TASK_ID of the file dataset.txt.&lt;br /&gt;
&lt;br /&gt;
Not only is this a smaller script, it is also faster to submit, because it is one job instead of 5000, so qsub doesn't have to verify as many jobs.&lt;br /&gt;
&lt;br /&gt;
To give you an idea about the time saved: submitting 1 job takes 1-2 seconds, so submitting 5,000 jobs takes 5,000-10,000 seconds, or roughly 1.5-3 hours.&lt;br /&gt;
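&lt;br /&gt;
The task range can also be given on the qsub command line rather than inside the script. Assuming the script above is saved as run_app2.sh (a name chosen for illustration), the following is equivalent:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
qsub -t 1-5000 run_app2.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;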
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'qrsh'. qrsh uses the exact same command-line arguments as qsub. If no node is available with your resource requirements, qrsh will tell you&lt;br /&gt;
 Your &amp;quot;qrsh&amp;quot; request could not be scheduled, try again later.&lt;br /&gt;
Note that, like qsub, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
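For example, to ask for an interactive session with 4 cores, 2G of memory per core, and a 2-hour runtime (illustrative values, using the same resource requests described above for qsub):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
qrsh -pe single 4 -l mem=2G -l h_rt=2:00:00&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;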
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The qacct tool will read SGE's accounting file and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== qacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
qacct -j 1122334455&lt;br /&gt;
# if you don't know the job id, you can look at your jobs over some number of days; in this case, the past 14 days:&lt;br /&gt;
qacct -o $USER -d 14 -j&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
 &amp;lt;tt&amp;gt;qname        batch.q             &lt;br /&gt;
 hostname     mage07.beocat       &lt;br /&gt;
 group        some_user_users        &lt;br /&gt;
 owner        some_user              &lt;br /&gt;
 project      BEODEFAULT          &lt;br /&gt;
 department   defaultdepartment   &lt;br /&gt;
 jobname      my_job_script.sh  &lt;br /&gt;
 jobnumber    1122334455          &lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 exit_status  1                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;ru_wallclock 1s&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;ru_utime     0.030s&lt;br /&gt;
 ru_stime     0.030s&lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 arid         undefined&lt;br /&gt;
 category     -u some_user -q batch.q,long.q -l h_rt=604800,mem_free=1024.0M,memory=2G&amp;lt;/tt&amp;gt;&lt;br /&gt;
If you look at the line showing ru_wallclock, you can see that it shows 1s. This means that the job started and then promptly ended, which points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
 &amp;lt;tt&amp;gt;qname        batch.q             &lt;br /&gt;
 hostname     scout59.beocat      &lt;br /&gt;
 group        some_user_users     &lt;br /&gt;
 owner        some_user           &lt;br /&gt;
 project      BEODEFAULT          &lt;br /&gt;
 department   defaultdepartment   &lt;br /&gt;
 jobname      my_job_script.sh           &lt;br /&gt;
 jobnumber    1122334455            &lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...            &lt;br /&gt;
 slots        1                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;failed       37  : qmaster enforced h_rt, h_cpu, or h_vmem limit&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;exit_status  0                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;ru_wallclock 21600s&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;ru_utime     0.130s&lt;br /&gt;
 ru_stime     0.020s&lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 arid         undefined&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;category     -u some_user -q batch.q,long.q -l h_rt=21600,mem_free=512.0M,memory=1G&amp;lt;/tt&amp;gt;&lt;br /&gt;
If you look at the lines showing failed, ru_wallclock, and category, we can see some pointers to the issue.&lt;br /&gt;
The job didn't finish because the scheduler (qmaster) enforced some limit. If you look at the category line, the only limit requested was h_rt, so it was a runtime (wallclock) limit.&lt;br /&gt;
Comparing ru_wallclock and the h_rt request, we can see that the job ran until the h_rt time was hit, at which point the scheduler enforced the limit and killed it. You will need to resubmit the job and request more time.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OLD_DEPRECATED_AdvancedSGE&amp;diff=67</id>
		<title>OLD DEPRECATED AdvancedSGE</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OLD_DEPRECATED_AdvancedSGE&amp;diff=67"/>
		<updated>2014-07-07T21:42:00Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added MPI instructions.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SGEBasics]] page, we have several other requestable resources. Generally, if you don't know if you need a particular resource, you should use the default. These can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;qconf -sc | awk '{ if ($5 != &amp;quot;NO&amp;quot;) { print }}'&amp;lt;/tt&amp;gt;&lt;br /&gt;
{| class=&amp;quot;wikitable sortable&amp;quot;&lt;br /&gt;
!name&lt;br /&gt;
!shortcut&lt;br /&gt;
!type&lt;br /&gt;
!relop&lt;br /&gt;
!requestable&lt;br /&gt;
!consumable&lt;br /&gt;
!default&lt;br /&gt;
!urgency&lt;br /&gt;
|-&lt;br /&gt;
|arch&lt;br /&gt;
|a&lt;br /&gt;
|RESTRING&lt;br /&gt;
|==&lt;br /&gt;
|YES&lt;br /&gt;
|NO&lt;br /&gt;
|NONE&lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|avx&lt;br /&gt;
|avx&lt;br /&gt;
|BOOL        &lt;br /&gt;
|==     &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|FALSE    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|calendar            &lt;br /&gt;
|c          &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cpu                 &lt;br /&gt;
|cpu        &lt;br /&gt;
|DOUBLE      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cpu_flags           &lt;br /&gt;
|c_f        &lt;br /&gt;
|STRING      &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cuda                &lt;br /&gt;
|cuda       &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|JOB        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|display_win_gui     &lt;br /&gt;
|dwg        &lt;br /&gt;
|BOOL        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|exclusive           &lt;br /&gt;
|excl       &lt;br /&gt;
|BOOL        &lt;br /&gt;
|EXCL    &lt;br /&gt;
|YES         &lt;br /&gt;
|YES        &lt;br /&gt;
|0        &lt;br /&gt;
|1000&lt;br /&gt;
|-&lt;br /&gt;
|h_core              &lt;br /&gt;
|h_core     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_cpu               &lt;br /&gt;
|h_cpu      &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_data              &lt;br /&gt;
|h_data     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_fsize             &lt;br /&gt;
|h_fsize    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_rss               &lt;br /&gt;
|h_rss      &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_rt                &lt;br /&gt;
|h_rt       &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|FORCED      &lt;br /&gt;
|NO        &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_stack             &lt;br /&gt;
|h_stack    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_vmem              &lt;br /&gt;
|h_vmem     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|hostname            &lt;br /&gt;
|h          &lt;br /&gt;
|HOST        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|infiniband          &lt;br /&gt;
|ib         &lt;br /&gt;
|BOOL        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|FALSE    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_core              &lt;br /&gt;
|core       &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_socket            &lt;br /&gt;
|socket     &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_thread            &lt;br /&gt;
|thread     &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_topology          &lt;br /&gt;
|topo       &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_topology_inuse    &lt;br /&gt;
|utopo      &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_free            &lt;br /&gt;
|mf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_total           &lt;br /&gt;
|mt         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_used            &lt;br /&gt;
|mu         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0       &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|memory              &lt;br /&gt;
|mem        &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|FORCED      &lt;br /&gt;
|YES        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|num_proc            &lt;br /&gt;
|p          &lt;br /&gt;
|INT         &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|qname               &lt;br /&gt;
|q          &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_core              &lt;br /&gt;
|s_core     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_cpu               &lt;br /&gt;
|s_cpu      &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_data              &lt;br /&gt;
|s_data     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_fsize             &lt;br /&gt;
|s_fsize    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_rss               &lt;br /&gt;
|s_rss      &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_rt                &lt;br /&gt;
|s_rt       &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_stack             &lt;br /&gt;
|s_stack    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_vmem              &lt;br /&gt;
|s_vmem     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|slots               &lt;br /&gt;
|s          &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|YES        &lt;br /&gt;
|1        &lt;br /&gt;
|1000&lt;br /&gt;
|-&lt;br /&gt;
|swap_free           &lt;br /&gt;
|sf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_rate           &lt;br /&gt;
|sr         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_rsvd           &lt;br /&gt;
|srsv       &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_total          &lt;br /&gt;
|st         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_used           &lt;br /&gt;
|su         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_free        &lt;br /&gt;
|vf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_total       &lt;br /&gt;
|vt         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_used        &lt;br /&gt;
|vu         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The good news is that most of these nobody ever uses. There are a couple of exceptions, though:&lt;br /&gt;
=== Infiniband ===&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. Infiniband does absolutely no good if running in a 'single' parallel environment. Infiniband is a high-speed host-to-host communication fabric. It is used in conjunction with MPI jobs (discussed below). Several times we have had jobs which could run just fine, except that the submitter requested Infiniband, and all the nodes with Infiniband were currently busy. In fact, some of our fastest nodes do not have Infiniband, so by requesting it when you don't need it, you are actually slowing down your job. To request Infiniband, add &amp;lt;tt&amp;gt;-l ib=true&amp;lt;/tt&amp;gt; to your qsub command-line.&lt;br /&gt;
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. We have a very small number of nodes which have GPUs installed. To request one of these nodes, add &amp;lt;tt&amp;gt;-l cuda=true&amp;lt;/tt&amp;gt; to your qsub command-line.&lt;br /&gt;
=== Exclusive ===&lt;br /&gt;
Some programs just don't play nicely with others. They will attempt to use all available memory, or will try to use all the cores they can. The way to be a nice neighbor if your program has this problem is to request exclusive use of a node with &amp;lt;tt&amp;gt;-l excl=true&amp;lt;/tt&amp;gt;. This can also be useful for benchmarking, where you can be sure that no other jobs are interfering with yours.&lt;br /&gt;
== Parallel Jobs ==&lt;br /&gt;
There are two ways jobs can run in parallel, ''intra''node and ''inter''node. '''Note: Beocat will not automatically make a job run in parallel.''' Have I said that enough? It's a common misconception.&lt;br /&gt;
=== Intranode jobs ===&lt;br /&gt;
Intranode jobs are easier to code and can take advantage of many common libraries, such as [http://openmp.org/wp/ OpenMP], or Java's threads. Many times, your program will need to know how many cores you want it to use. Many will use all available cores if not told explicitly otherwise, which can be a problem when you are sharing resources, as Beocat does. To request multiple cores, use the qsub directive '&amp;lt;tt&amp;gt;-pe single ''n''&amp;lt;/tt&amp;gt;', where ''n'' is the number of cores you wish to use. If your program can read an environment variable, you can use $NSLOTS to tell it how many cores you've been allocated.&lt;br /&gt;
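As a sketch, a submit script for an intranode job might look like the following, where my_omp_app is a hypothetical OpenMP-capable program:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
## Request 8 cores on a single node&lt;br /&gt;
#$ -pe single 8&lt;br /&gt;
## Tell the program to use exactly the cores we were allocated&lt;br /&gt;
export OMP_NUM_THREADS=$NSLOTS&lt;br /&gt;
$HOME/path/my_omp_app&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;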
=== Internode (MPI) jobs ===&lt;br /&gt;
&amp;quot;Talking&amp;quot; between nodes is trickier than talking between cores on the same node. The specification for doing so is called &amp;quot;[[wikipedia:Message_Passing_Interface|Message Passing Interface]]&amp;quot;, or MPI. We have [http://www.open-mpi.org/ OpenMPI] installed on Beocat for this purpose. Most programs written to take advantage of large multi-node systems will use MPI. You can tell if you have an MPI-enabled program because its directions will tell you to run '&amp;lt;tt&amp;gt;mpirun ''program''&amp;lt;/tt&amp;gt;'. Requesting MPI resources is only mildly more difficult than requesting single-node jobs. Instead of using '&amp;lt;tt&amp;gt;-pe single ''n''&amp;lt;/tt&amp;gt;' for your qsub request, you will use one of the following:&lt;br /&gt;
{|&lt;br /&gt;
|mpi-fill&lt;br /&gt;
|This environment will use as many slots on each node as it can until it reaches the number of cores you have requested.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-spread&lt;br /&gt;
|This environment will spread itself out over as many nodes as possible until it reaches the number of cores you have requested.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-1&lt;br /&gt;
|This environment will allocate the slots you've requested, 1 per node.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-2&lt;br /&gt;
|This environment will allocate the slots you've requested, 2 per node. You must request cores as a multiple of 2.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-4&lt;br /&gt;
|This environment will allocate the slots you've requested, 4 per node. You must request cores as a multiple of 4.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-8&lt;br /&gt;
|This environment will allocate the slots you've requested, 8 per node. You must request cores as a multiple of 8.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-10&lt;br /&gt;
|This environment will allocate the slots you've requested, 10 per node. You must request cores as a multiple of 10.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-12&lt;br /&gt;
|This environment will allocate the slots you've requested, 12 per node. You must request cores as a multiple of 12.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-16&lt;br /&gt;
|This environment will allocate the slots you've requested, 16 per node. You must request cores as a multiple of 16.&lt;br /&gt;
|-&lt;br /&gt;
|mpi-80&lt;br /&gt;
|This environment will allocate the slots you've requested, 80 per node. You must request cores as a multiple of 80.&lt;br /&gt;
|}&lt;br /&gt;
Some quick examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-4 16&amp;lt;/tt&amp;gt; will give you 4 chunks of 4 cores apiece. They might all happen to be allocated on the same node (16 cores), on 4 different nodes (4 cores each), on 3 nodes (8 cores on one and 4 cores on the other two), or on 2 nodes (8 cores each).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-fill 40&amp;lt;/tt&amp;gt; will give you 40 cores, but will attempt to get them all on the same node.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-fill 100&amp;lt;/tt&amp;gt; will give you 100 cores, and place them on as few nodes as possible. In this case it's likely you would get a full mage (80 cores) and either part of another mage (the remaining 20 cores) or one of the 20-core elves.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-pe mpi-spread 40&amp;lt;/tt&amp;gt; will give you 40 cores, and will attempt to place each on a separate node.&lt;br /&gt;
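&lt;br /&gt;
Putting this together, a minimal MPI submit script might look like the following sketch (the program path is illustrative):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
## Request 16 cores, packed onto as few nodes as possible&lt;br /&gt;
#$ -pe mpi-fill 16&lt;br /&gt;
## mpirun starts one copy of the program per allocated slot&lt;br /&gt;
mpirun $HOME/path/MpiJobName&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;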
== Requesting memory for multi-core jobs ==&lt;br /&gt;
All memory requests are '''per core'''. One of the more common scenarios is where somebody needs, say, 20 cores and 400 GB of memory, so they make a request like '&amp;lt;tt&amp;gt;-pe single 20 -l mem=400G&amp;lt;/tt&amp;gt;'. This will never run, because what is really being requested is 20 cores and 8000 GB of memory (20 * 400), and we have no nodes with 8000 GB of memory. Instead, divide the 400 GB total memory request by the number of cores (20), so the correct request would be '&amp;lt;tt&amp;gt;-pe single 20 -l mem=20G&amp;lt;/tt&amp;gt;'.&lt;br /&gt;
== Running jobs interactively ==&lt;br /&gt;
Some jobs just don't behave like we think they should, or need to be run with somebody sitting at the keyboard and typing in response to the output the computers are generating. Beocat has a facility for this, called 'qrsh'. qrsh uses the exact same command-line arguments as qsub. If no node is available with your resource requirements, qrsh will tell you&lt;br /&gt;
 Your &amp;quot;qrsh&amp;quot; request could not be scheduled, try again later.&lt;br /&gt;
Note that, like qsub, your interactive job will timeout after your allotted time has passed.&lt;br /&gt;
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The qacct tool will read SGE's accounting file and give you summarized or detailed views on jobs that have run within Beocat.&lt;br /&gt;
=== qacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
qacct -j 1122334455&lt;br /&gt;
# if you don't know the job id, you can look at your jobs over some number of days; in this case, the past 14 days:&lt;br /&gt;
qacct -o $USER -d 14 -j&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
 &amp;lt;tt&amp;gt;qname        batch.q             &lt;br /&gt;
 hostname     mage07.beocat       &lt;br /&gt;
 group        some_user_users        &lt;br /&gt;
 owner        some_user              &lt;br /&gt;
 project      BEODEFAULT          &lt;br /&gt;
 department   defaultdepartment   &lt;br /&gt;
 jobname      my_job_script.sh  &lt;br /&gt;
 jobnumber    1122334455          &lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 exit_status  1                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;ru_wallclock 1s&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;ru_utime     0.030s&lt;br /&gt;
 ru_stime     0.030s&lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 arid         undefined&lt;br /&gt;
 category     -u some_user -q batch.q,long.q -l h_rt=604800,mem_free=1024.0M,memory=2G&amp;lt;/tt&amp;gt;&lt;br /&gt;
If you look at the line showing ru_wallclock, you can see that it shows 1s. This means that the job started and then promptly ended, which points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
 &amp;lt;tt&amp;gt;qname        batch.q             &lt;br /&gt;
 hostname     scout59.beocat      &lt;br /&gt;
 group        some_user_users     &lt;br /&gt;
 owner        some_user           &lt;br /&gt;
 project      BEODEFAULT          &lt;br /&gt;
 department   defaultdepartment   &lt;br /&gt;
 jobname      my_job_script.sh           &lt;br /&gt;
 jobnumber    1122334455            &lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...            &lt;br /&gt;
 slots        1                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;failed       37  : qmaster enforced h_rt, h_cpu, or h_vmem limit&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;exit_status  0                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;ru_wallclock 21600s&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;ru_utime     0.130s&lt;br /&gt;
 ru_stime     0.020s&lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 arid         undefined&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;category     -u some_user -q batch.q,long.q -l h_rt=21600,mem_free=512.0M,memory=1G&amp;lt;/tt&amp;gt;&lt;br /&gt;
If you look at the lines showing failed, ru_wallclock, and category, you can see some pointers to the issue.&lt;br /&gt;
The job didn't finish because the scheduler (qmaster) enforced some limit. If you look at the category line, the only limit requested was h_rt, so it was a runtime (wallclock) limit.&lt;br /&gt;
Comparing ru_wallclock to the h_rt request, we can see that the job ran until the h_rt time was hit, at which point the scheduler enforced the limit and killed the job. You will need to resubmit the job and request more time.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
	<entry>
		<id>https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OLD_DEPRECATED_AdvancedSGE&amp;diff=65</id>
		<title>OLD DEPRECATED AdvancedSGE</title>
		<link rel="alternate" type="text/html" href="https://support.beocat.ksu.edu/BeocatDocs/index.php?title=OLD_DEPRECATED_AdvancedSGE&amp;diff=65"/>
		<updated>2014-07-03T21:58:41Z</updated>

		<summary type="html">&lt;p&gt;Kylehutson: Added Resource Requests section. Still needs MPI documented.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Resource Requests ==&lt;br /&gt;
Aside from the time, RAM, and CPU requirements listed on the [[SGEBasics]] page, we have several other requestable resources. Generally, if you don't know whether you need a particular resource, you should use the default. The full list can be generated with the command&lt;br /&gt;
 &amp;lt;tt&amp;gt;qconf -sc | awk '{ if ($5 != &amp;quot;NO&amp;quot;) { print }}'&amp;lt;/tt&amp;gt;&lt;br /&gt;
{|&lt;br /&gt;
!name&lt;br /&gt;
!shortcut&lt;br /&gt;
!type&lt;br /&gt;
!relop&lt;br /&gt;
!requestable&lt;br /&gt;
!consumable&lt;br /&gt;
!default&lt;br /&gt;
!urgency&lt;br /&gt;
|-&lt;br /&gt;
|arch&lt;br /&gt;
|a&lt;br /&gt;
|RESTRING&lt;br /&gt;
|==&lt;br /&gt;
|YES&lt;br /&gt;
|NO&lt;br /&gt;
|NONE&lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|avx&lt;br /&gt;
|avx&lt;br /&gt;
|BOOL        &lt;br /&gt;
|==     &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|FALSE    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|calendar            &lt;br /&gt;
|c          &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cpu                 &lt;br /&gt;
|cpu        &lt;br /&gt;
|DOUBLE      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cpu_flags           &lt;br /&gt;
|c_f        &lt;br /&gt;
|STRING      &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|cuda                &lt;br /&gt;
|cuda       &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|JOB        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|display_win_gui     &lt;br /&gt;
|dwg        &lt;br /&gt;
|BOOL        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|exclusive           &lt;br /&gt;
|excl       &lt;br /&gt;
|BOOL        &lt;br /&gt;
|EXCL    &lt;br /&gt;
|YES         &lt;br /&gt;
|YES        &lt;br /&gt;
|0        &lt;br /&gt;
|1000&lt;br /&gt;
|-&lt;br /&gt;
|h_core              &lt;br /&gt;
|h_core     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_cpu               &lt;br /&gt;
|h_cpu      &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_data              &lt;br /&gt;
|h_data     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_fsize             &lt;br /&gt;
|h_fsize    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_rss               &lt;br /&gt;
|h_rss      &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_rt                &lt;br /&gt;
|h_rt       &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|FORCED      &lt;br /&gt;
|NO        &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_stack             &lt;br /&gt;
|h_stack    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|h_vmem              &lt;br /&gt;
|h_vmem     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|hostname            &lt;br /&gt;
|h          &lt;br /&gt;
|HOST        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|infiniband          &lt;br /&gt;
|ib         &lt;br /&gt;
|BOOL        &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|FALSE    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_core              &lt;br /&gt;
|core       &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_socket            &lt;br /&gt;
|socket     &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_thread            &lt;br /&gt;
|thread     &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_topology          &lt;br /&gt;
|topo       &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|m_topology_inuse    &lt;br /&gt;
|utopo      &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_free            &lt;br /&gt;
|mf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_total           &lt;br /&gt;
|mt         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|mem_used            &lt;br /&gt;
|mu         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0       &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|memory              &lt;br /&gt;
|mem        &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|FORCED      &lt;br /&gt;
|YES        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|num_proc            &lt;br /&gt;
|p          &lt;br /&gt;
|INT         &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|qname               &lt;br /&gt;
|q          &lt;br /&gt;
|RESTRING    &lt;br /&gt;
|==      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|NONE     &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_core              &lt;br /&gt;
|s_core     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO        &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_cpu               &lt;br /&gt;
|s_cpu      &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_data              &lt;br /&gt;
|s_data     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_fsize             &lt;br /&gt;
|s_fsize    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_rss               &lt;br /&gt;
|s_rss      &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_rt                &lt;br /&gt;
|s_rt       &lt;br /&gt;
|TIME        &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0:0:0    &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_stack             &lt;br /&gt;
|s_stack    &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|s_vmem              &lt;br /&gt;
|s_vmem     &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|slots               &lt;br /&gt;
|s          &lt;br /&gt;
|INT         &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|YES        &lt;br /&gt;
|1        &lt;br /&gt;
|1000&lt;br /&gt;
|-&lt;br /&gt;
|swap_free           &lt;br /&gt;
|sf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_rate           &lt;br /&gt;
|sr         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_rsvd           &lt;br /&gt;
|srsv       &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_total          &lt;br /&gt;
|st         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|swap_used           &lt;br /&gt;
|su         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_free        &lt;br /&gt;
|vf         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_total       &lt;br /&gt;
|vt         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;lt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|-&lt;br /&gt;
|virtual_used        &lt;br /&gt;
|vu         &lt;br /&gt;
|MEMORY      &lt;br /&gt;
|&amp;gt;=      &lt;br /&gt;
|YES         &lt;br /&gt;
|NO         &lt;br /&gt;
|0        &lt;br /&gt;
|0&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The good news is that most of these nobody ever uses. There are a couple of exceptions, though:&lt;br /&gt;
=== Infiniband ===&lt;br /&gt;
First of all, let me state that just because it sounds &amp;quot;cool&amp;quot; doesn't mean you need it or even want it. Infiniband is a high-speed host-to-host communication fabric used in conjunction with MPI jobs (discussed below); it does absolutely no good when running in the 'single' parallel environment. Several times we have had jobs that could have started right away, except that the submitter requested Infiniband and all of the nodes with Infiniband were busy. In fact, some of our fastest nodes do not have Infiniband, so by requesting it when you don't need it, you may actually be slowing down your job. To request Infiniband, add &amp;lt;tt&amp;gt;-l ib=true&amp;lt;/tt&amp;gt; to your qsub command-line.&lt;br /&gt;
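For example, a minimal submission script requesting Infiniband along with an MPI parallel environment might look like the following sketch (the parallel environment name &amp;lt;tt&amp;gt;mpi-fill&amp;lt;/tt&amp;gt; and the program name are only placeholders; run &amp;lt;tt&amp;gt;qconf -spl&amp;lt;/tt&amp;gt; to list the actual parallel environments):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
# request 16 slots in a hypothetical MPI parallel environment&lt;br /&gt;
#$ -pe mpi-fill 16&lt;br /&gt;
# only ask for Infiniband if the job actually does host-to-host MPI communication&lt;br /&gt;
#$ -l ib=true&lt;br /&gt;
mpirun ./my_mpi_program&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;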
=== CUDA ===&lt;br /&gt;
[[CUDA]] is the resource required for GPU computing. We have a very small number of nodes which have GPUs installed. To request one of these nodes, add &amp;lt;tt&amp;gt;-l cuda=true&amp;lt;/tt&amp;gt; to your qsub command-line.&lt;br /&gt;
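For example, to submit a job to one of the GPU nodes from the command line (&amp;lt;tt&amp;gt;my_gpu_script.sh&amp;lt;/tt&amp;gt; is a placeholder for your own submission script):&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
# request a node with a GPU installed&lt;br /&gt;
qsub -l cuda=true my_gpu_script.sh&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;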
== Job Accounting ==&lt;br /&gt;
Some people may find it useful to know what their job did during its run. The qacct tool will read SGE's accounting file and give you summarized or detailed views of jobs that have run within Beocat.&lt;br /&gt;
=== qacct ===&lt;br /&gt;
This data can usually be used to diagnose two very common job failures.&lt;br /&gt;
==== Job debugging ====&lt;br /&gt;
It is simplest if you know the job number of the job you are trying to get information on.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line&amp;gt;&lt;br /&gt;
# if you know the jobid, put it here:&lt;br /&gt;
qacct -j 1122334455&lt;br /&gt;
# if you don't know the job id, you can look at your jobs over some number of days; in this case, the past 14 days:&lt;br /&gt;
qacct -o $USER -d 14 -j&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===== My job didn't do anything when it ran! =====&lt;br /&gt;
 &amp;lt;tt&amp;gt;qname        batch.q             &lt;br /&gt;
 hostname     mage07.beocat       &lt;br /&gt;
 group        some_user_users        &lt;br /&gt;
 owner        some_user              &lt;br /&gt;
 project      BEODEFAULT          &lt;br /&gt;
 department   defaultdepartment   &lt;br /&gt;
 jobname      my_job_script.sh  &lt;br /&gt;
 jobnumber    1122334455          &lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 exit_status  1                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;ru_wallclock 1s&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;ru_utime     0.030s&lt;br /&gt;
 ru_stime     0.030s&lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 arid         undefined&lt;br /&gt;
 category     -u some_user -q batch.q,long.q -l h_rt=604800,mem_free=1024.0M,memory=2G&amp;lt;/tt&amp;gt;&lt;br /&gt;
If you look at the line showing ru_wallclock, you can see that it shows 1s. This means that the job started and then promptly ended, which points to something being wrong with your submission script. Perhaps there is a typo somewhere in it.&lt;br /&gt;
&lt;br /&gt;
===== My job ran but didn't finish! =====&lt;br /&gt;
 &amp;lt;tt&amp;gt;qname        batch.q             &lt;br /&gt;
 hostname     scout59.beocat      &lt;br /&gt;
 group        some_user_users     &lt;br /&gt;
 owner        some_user           &lt;br /&gt;
 project      BEODEFAULT          &lt;br /&gt;
 department   defaultdepartment   &lt;br /&gt;
 jobname      my_job_script.sh           &lt;br /&gt;
 jobnumber    1122334455            &lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...            &lt;br /&gt;
 slots        1                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;failed       37  : qmaster enforced h_rt, h_cpu, or h_vmem limit&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;exit_status  0                   &amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;ru_wallclock 21600s&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt&amp;gt;ru_utime     0.130s&lt;br /&gt;
 ru_stime     0.020s&lt;br /&gt;
 ...&lt;br /&gt;
 snipped to save space&lt;br /&gt;
 ...&lt;br /&gt;
 arid         undefined&amp;lt;/tt&amp;gt;&lt;br /&gt;
 &amp;lt;tt style=&amp;quot;color: red&amp;quot;&amp;gt;category     -u some_user -q batch.q,long.q -l h_rt=21600,mem_free=512.0M,memory=1G&amp;lt;/tt&amp;gt;&lt;br /&gt;
If you look at the lines showing failed, ru_wallclock, and category, you can see some pointers to the issue.&lt;br /&gt;
The job didn't finish because the scheduler (qmaster) enforced some limit. If you look at the category line, the only limit requested was h_rt, so it was a runtime (wallclock) limit.&lt;br /&gt;
Comparing ru_wallclock to the h_rt request, we can see that the job ran until the h_rt time was hit, at which point the scheduler enforced the limit and killed the job. You will need to resubmit the job and request more time.&lt;/div&gt;</summary>
		<author><name>Kylehutson</name></author>
	</entry>
</feed>