I used NetBeans 7.3 RC2 because of its Maven integration, but feel free to use an IDE of your choice. I am using Ubuntu 12.10 64-bit as my development environment, with Hadoop installed from the Debian distribution package.
When you run your WordCount application, Hadoop may throw an out-of-memory exception because the default heap setting is -Xmx100m. The Apache website explains how to fix this, but its advice does not apply if you installed Hadoop from the Debian package (changing /etc/hadoop/hadoop-env.sh has no effect and does not fix the problem). Here is a quick solution: open /usr/bin/hadoop and make the following changes:
- Set JAVA to the path of the JVM you actually want to use.
- Set JAVA_HEAP_MAX to increase the memory available to applications, e.g. -Xmx3000m.
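The two edits might look like this inside /usr/bin/hadoop. JAVA and JAVA_HEAP_MAX are the variable names the Hadoop launcher script uses; the JVM path shown is an assumption for a 64-bit Ubuntu system, so adjust it to wherever your JVM lives.

```shell
# Point JAVA at the JVM you want Hadoop to use
# (this path is an example; check your own installation).
JAVA=/usr/lib/jvm/java-6-openjdk-amd64/bin/java

# Raise the maximum heap from the default -Xmx100m.
JAVA_HEAP_MAX=-Xmx3000m
```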
- Create a new Maven-based Java project.
- NetBeans will generate an App.java class; you can rename it to WordCount or leave it as is, since the name doesn't affect the outcome of this tutorial. I will refer to the main class as App.java.
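In App.java itself you would implement the standard Hadoop Mapper and Reducer classes, as shown in the Apache WordCount example. As a plain-Java sketch of the same logic (no Hadoop dependency, class name WordCountSketch is my own), the two phases look like this:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Plain-Java illustration of the map and reduce phases that App.java
// implements with Hadoop's Mapper/Reducer APIs.
public class WordCountSketch {

    // "Map" phase: emit a (word, 1) pair for every token in a line.
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String token : line.split("\\s+")) {
            if (!token.isEmpty()) {
                pairs.add(Map.entry(token, 1));
            }
        }
        return pairs;
    }

    // "Reduce" phase: sum the counts emitted for each word.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> pair : pairs) {
            counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = reduce(map("Hello World Bye World"));
        System.out.println(counts);
    }
}
```

In the real App.java, Hadoop distributes the map calls across the cluster and groups the pairs by key before reducing; the arithmetic is the same.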
$ hadoop dfs -ls input
$ hadoop dfs -cat input/file01
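If you have not yet put any input into HDFS, you could create it along these lines. The file name file01 matches the listing above; its contents here are only an example.

```shell
# Assumes the Hadoop CLI from the Debian package is on your PATH.
echo "Hello World Bye World" > file01
hadoop dfs -mkdir input
hadoop dfs -put file01 input/file01
```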
$ hadoop jar WordCount.jar com.etapix.wordcount.App input output
This assumes that you are running the command from your project's home directory and that you installed Hadoop from the Debian distribution; otherwise, you can follow the rest of the tutorial on the Apache website.
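Before running the job you need to build the jar, and afterwards you can inspect the results in HDFS. Maven writes the jar under target/ and its exact name depends on your pom's artifactId and version; part-00000 is the conventional name of the first reducer's output file.

```shell
# Build the project from its root directory.
mvn package

# Once the job has finished, list and read the output.
hadoop dfs -ls output
hadoop dfs -cat output/part-00000
```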