
To access the Nautilus namespace, first create an account at https://portal.nrp-nautilus.io/. Once you have done so, email beocat@cs.ksu.edu and request to be added to the Beocat Nautilus namespace (ksu-nrp-cluster). Once you have been notified that you have been added to the namespace, continue with the following steps to get set up to use the cluster resources.

1. SSH into headnode.beocat.ksu.edu.
2. From the headnode, SSH into fiona (fiona hosts the kubectl tool we will use for this assignment).
3. Once on fiona, use the command 'cd ~' to make sure you are in your home directory. If you are not, this will return you to the top level of your home directory.
4. Create a .kube directory inside your home directory with the command 'mkdir ~/.kube'.
5. Log in to https://portal.nrp-nautilus.io/ using the same login you used to create your account (your K-State eID login).
6. From here it is MANDATORY to read the cluster policy documentation provided by the National Research Platform for the Nautilus program. You can find it here: https://docs.nationalresearchplatform.org/userdocs/start/policies/
   a. This is to ensure we do not break any of the rules put in place by the NRP.
7. Return to the website from step 5 and press the "Get Config" option in the top right corner of the page.
   a. This will download a file called 'config'.
8. Move the config file into the ~/.kube directory created in step 4.
   a. You can copy and paste the contents through the command line.
   b. You can also use the OpenOnDemand tool to upload the file through the web interface. Information on this tool can be found here: https://support.beocat.ksu.edu/Docs/OpenOnDemand
   c. Any other means of moving the file to the Beocat headnodes/your home directory will also work; these are just a few examples.
   d. NOTE: Because its name starts with a period, the .kube directory is hidden and will not appear in a normal 'ls' of your home directory; to see it you will need to run "ls -a" or "ls -la".
9. Once you have read the required documentation, created the .kube directory in your home directory, and placed the config file in it, you are ready to continue.
10. Download the supplied pod1.yaml file. Edit the file and change the "name:" field underneath "metadata:" from "test-pod" to "{eid}-pod", where '{eid}' is your K-State eID. It will look something like "dan-pod".
11. Place this file in the same directory created earlier (~/.kube).
12. If you are not already in the .kube directory, enter the command "cd ~/.kube" to change your current directory.
13. Now create your pod. This requests an Ubuntu container using the specifications in pod1.yaml.
   a. To do this, enter the command "kubectl create -f pod1.yaml". NOTE: You must be in the same directory you placed the pod1.yaml file in.
   b. If the command is successful, you will see the output "pod/{eid}-pod created".
14. Wait until the container for the pod has finished creating. You can check this by running "kubectl get pods".
   a. This command lists all the pods currently running or being created in the namespace. Look for yours in the list; the name will be the one specified in step 10.
   b. Once you locate your pod, check its STATUS. If the pod says Running, you are good to proceed. If it says ContainerCreating, you will need to wait just a bit; it should not take long.
15. You can now enter the pod by running "kubectl exec -it {eid}-pod -- /bin/bash", where '{eid}-pod' is the pod created in step 13 (the name specified in step 10).
   a. This command opens the pod you created and runs a bash console inside it.
   b. From here you will be able to do most things you would normally do on an Ubuntu machine.
   NOTE: If you have trouble logging in to the pod and are met with a "You must be logged in to the server" error, you can run "kubectl proxy" and, after a moment, cancel the command with Ctrl+C. This should remedy the error.
16. Now for the actual assignment: gathering the results that we will compare against Beocat.
17. First, run a few commands to set up the pod so that we can compile and run C programs inside of it, as the pod is a barebones Ubuntu container without many packages installed.
   a. Follow this list of commands:
      i. $ apt-get update
      ii. $ apt-get install build-essential (accept the prompts if asked, including the space utilization confirmation)
      iii. To check that the GCC compiler is installed, run $ gcc --version
      iv. $ apt install net-tools
      v. Install your favorite text editor (typically vim or nano):
         1. $ apt install vim, or
         2. $ apt install nano
18. With the prerequisites out of the way, write a program in C that measures the time it takes to gather the hostname of the computer. Do this by recording the time at the start of your program and again at the end, then print both the hostname and the time taken to gather it. (A sketch of such a program appears after this list.)
   a. With GCC you can compile it by entering the command "gcc -o SomeName SomeOtherName.c", where SomeName is the executable you want to create and SomeOtherName.c is the source file you pass to the compiler.
19. Once you are done with the pod and have exited out of it, make sure you delete it. To do this, on fiona, run the command "kubectl delete pod {eid}-pod", where "{eid}-pod" is the pod name specified in step 10.
20. Using the same C program you wrote in step 18, gather the same measurements through the Slurm job manager (sbatch) on one of the headnodes of Beocat, adding a constraint to use only the moles. (A sketch of an sbatch submission script appears after this list.)
21. Compare the two outputs and write an analysis of the time difference. Include observations such as "Why would it be beneficial to use pod-based computing?", and feel free to add other observations. Turn in this analysis, a screenshot of both time-taken measurements and both hostnames, and the C files used for computation to Canvas in a zip file.
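For reference, here is a minimal sketch of what the step 18 program could look like. It is only an illustration: the file name (hostname_timer.c), buffer size, and output format are placeholders, and your own program may record its timestamps at different points. It uses gethostname() and clock_gettime() and compiles with the plain gcc command from step 18a.

/* hostname_timer.c - example sketch only; adapt as needed.
 * Compile: gcc -o hostname_timer hostname_timer.c */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    char hostname[256];
    struct timespec start, end;

    /* timestamp before gathering the hostname */
    clock_gettime(CLOCK_MONOTONIC, &start);

    if (gethostname(hostname, sizeof(hostname)) != 0) {
        perror("gethostname");
        return 1;
    }

    /* timestamp after gathering the hostname */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (double)(end.tv_sec - start.tv_sec)
                     + (end.tv_nsec - start.tv_nsec) / 1e9;

    printf("Hostname: %s\n", hostname);
    printf("Time to gather hostname: %.9f seconds\n", elapsed);
    return 0;
}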
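For step 20, the job can be submitted to Slurm with a small sbatch script. The sketch below is assumption-heavy: the script name, executable name (hostname_timer), resource amounts, and the "moles" feature name used with --constraint are placeholders; check Beocat's Slurm documentation for the exact constraint that selects the moles and for sensible resource requests.

#!/bin/bash
# submit_hostname.sh - example sketch only.
#SBATCH --job-name=hostname-timer
#SBATCH --time=00:05:00
#SBATCH --mem=1G
#SBATCH --cpus-per-task=1
# The feature name below is an assumption; verify it in Beocat's Slurm docs.
#SBATCH --constraint=moles

# Run the program compiled from the step 18 source file
./hostname_timer

Submit it from a Beocat headnode with "sbatch submit_hostname.sh" and look in the resulting slurm-<jobid>.out file for the hostname and timing output to compare against the pod run.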