Hadoop

From Beocat
[http://hadoop.apache.org/ Hadoop] is a "Big Data" distributed processing service. It is primarily used for very large data sets (greater than 1 TB).


Hadoop does not integrate well with SGE (or, for that matter, any other HPC scheduling system), so we had created a separate Cloudera Hadoop cluster to accommodate the increased usage of Hadoop on campus. That cluster has since died, and we are evaluating other options. One such option is [https://github.com/LLNL/magpie/tree/master/doc magpie]. We will let the list know if and when we have something functional.
 
To use Hadoop:
* Log in to Beocat
* From there, log in to the Hadoop headnode, named 'theia': <tt>ssh theia</tt>
* Copy files into or out of the Hadoop filesystem using <tt>hadoop fs -put</tt> and <tt>hadoop fs -get</tt>. Note that the Hadoop filesystem is both smaller than the Beocat filesystem and not backed up. Please copy data back out of Hadoop as soon as you are done using it. '''Data which remains untouched may be deleted with no prior notice.'''
* Run your Hadoop job: <tt>hadoop jar path/to/file.jar</tt>
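Putting those steps together, a typical session might look like the sketch below. It is written as a dry run: the <tt>RUN=echo</tt> prefix prints each command instead of executing it, so the sequence can be inspected without a cluster. The input file, output directory, and <tt>wordcount.jar</tt> are hypothetical placeholders; only <tt>theia</tt> and the <tt>hadoop</tt> subcommands come from the steps above.

```shell
#!/bin/sh
# Dry-run sketch of a Hadoop session on Beocat.  RUN=echo prints each
# command instead of executing it; remove the $RUN prefix to run for real.
# mydata.csv, results, and wordcount.jar are hypothetical placeholders.
RUN="echo"

$RUN ssh theia                                                # 1. log in to the Hadoop headnode
$RUN hadoop fs -put mydata.csv /user/me/input                 # 2. copy input data into HDFS
$RUN hadoop jar wordcount.jar /user/me/input /user/me/output  # 3. run the job
$RUN hadoop fs -get /user/me/output results                   # 4. copy results back out of HDFS
$RUN hadoop fs -rm -r /user/me/input /user/me/output          # 5. clean up; HDFS is not backed up
```

In real use you would run the <tt>hadoop</tt> commands from a shell on theia itself rather than prefixing each one.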
 
=== (Some) Best Practices ===
* The block size on the HDFS filesystem is set to 64 MB
* As such, please keep the files stored there at least that large. If you need smaller files, we recommend using [http://hadoop.apache.org/docs/r1.2.1/hadoop_archives.html HAR files]
* Multiple users running jobs at the same time can be problematic, as they can slow each other down. If you can, try to run jobs when the cluster isn't already running someone else's jobs.
* You can check the status of the Hadoop cluster from any Beocat host with <tt>elinks http://theia.beocat:50030</tt>
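For the HAR recommendation above, archives are built with the <tt>hadoop archive</tt> command. The sketch below is again a dry run (<tt>RUN=echo</tt> prints the commands instead of executing them), and the <tt>/user/me</tt> paths are hypothetical placeholders:

```shell
#!/bin/sh
# Dry-run sketch: pack a directory of small files into one HAR archive so
# they no longer each occupy a 64 MB HDFS block.  RUN=echo prints the
# commands instead of executing them; the /user/me paths are hypothetical.
RUN="echo"

# Pack /user/me/smallfiles into smallfiles.har under /user/me/archives.
# -p names the parent path; sources are given relative to it.
$RUN hadoop archive -archiveName smallfiles.har -p /user/me smallfiles /user/me/archives

# Files inside the archive are then read through the har:// scheme.
$RUN hadoop fs -ls har:///user/me/archives/smallfiles.har
```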

Latest revision as of 15:50, 1 May 2018
