Hadoop Developer – WordCount tutorial using Maven and NetBeans 7.3RC2
I used NetBeans 7.3RC2 because of its integration with Maven, but feel free to use an IDE of your choice. I am also using Ubuntu 12.10 64-bit as my development environment. I installed the Hadoop Debian distribution package.
Warning
When running your WordCount application, Hadoop might throw an out-of-memory exception because the default heap setting is -Xmx100m. The Apache website explains how to fix this, but the advice is not relevant if you installed Hadoop using the Debian distribution. Here is a quick solution: open /usr/bin/hadoop (changing /etc/hadoop/hadoop-env.sh has no effect and doesn't fix the problem) and:
- set JAVA to the path of the JVM that you want to use.
- set JAVA_HEAP_MAX to increase the memory available to applications, e.g. -Xmx3000m
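For reference, here is a minimal sketch of what the edited lines in /usr/bin/hadoop might look like; the JVM path below is only an example and will vary with your installation:

# Point JAVA at the JVM you actually want to use (example path; adjust for your system)
JAVA=/usr/lib/jvm/java-6-openjdk-amd64/bin/java
# Raise the maximum heap available to Hadoop applications
JAVA_HEAP_MAX=-Xmx3000m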
- Create a new Maven-based Java project.
- NetBeans will create an App.java class; you can rename it to WordCount or leave it as-is, as it doesn't affect the outcome of the tutorial. I will refer to the main class as App.java.
- Add the Hadoop dependencies, which are available in Maven Central. I used hadoop-core 1.1.1 for this tutorial, as shown in the snippet below.
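The dependencies section of your pom.xml would then include something like this (hadoop-core 1.1.1 being the version used in this tutorial):

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.1.1</version>
</dependency>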
- Important: Maven doesn't package dependencies when building an application unless you are working with a “war” project, where it will create a lib folder. In order to make sure that our libraries are available to our program when packaged, we need to add the maven-assembly-plugin to our pom.xml. We also declare our “Main” class, which will be used to execute the program; see the snippet below.
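A minimal plugin configuration might look like the following, assuming the package name com.etapix.wordcount that is used later in this tutorial:

<plugin>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <archive>
            <manifest>
                <mainClass>com.etapix.wordcount.App</mainClass>
            </manifest>
        </archive>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
    </configuration>
</plugin>

With this in place, mvn clean package assembly:single produces a self-contained jar under target/ (the exact file name depends on your project's artifactId and version).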
- Open App.java (or whatever you have renamed it to) and write the following:
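Below is a sketch based on the classic WordCount example from the Apache MapReduce tutorial, adapted to the package com.etapix.wordcount and the old mapred API that ships with hadoop-core 1.1.1:

package com.etapix.wordcount;

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class App {

    // Mapper: emits (word, 1) for every token in each input line
    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    // Reducer: sums the counts emitted for each word
    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(App.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        // First program argument is the input path, second is the output path
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}

Once the jar is built (e.g. with mvn clean package assembly:single), you can check that your input files are in place and run the job: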
$ hadoop dfs -ls input
$ hadoop dfs -cat input/file01
$ hadoop jar WordCount.jar com.etapix.wordcount.App input output
This assumes that you are running from your project home directory and that you have installed Hadoop using the Debian distribution; otherwise, you can follow the rest of the tutorial on the Apache website.
Big Data is not a product – the idiosyncratic hype
Recruiters
“Where ignorance is bliss, ’tis folly to be wise.” – Thomas Gray
of. After a while, he became aggressive and I had to end the call. I blame the recruiter for his lack of research and his neglect of “Big Data” and its relevant technologies. Had he carried out some research on the topic, I believe he would have appeared more professional.
Engineers
Big Data, Bigger Myths
1. How “BIG” should my data be in order to be considered “Big Data”?
2. SQL-based systems can’t do “Big Data”
Oracle server away. Don’t get me wrong: SQL-based systems are part of “Big Data”, as they are among the best ways to store and retrieve data. For simple analytics, SQL provides great tools, and they are usually more mature than their NoSQL counterparts.
3. NoSQL is the way forward and Hadoop is the Holy Grail
get the point. And the early adopters started to see problems in the movement. Experienced data admins from the SQL world started converting, then they stopped. Why? To run an enterprise system, you need reliable, mature applications with a wealth of talent and knowledge to support them. In the SQL world, books and courses have been available for over 30 years; it was easy to attract new talent for new projects, not to mention the tools which made life easier. Let’s be clear: NoSQL systems are mainly used for data storage, just like their SQL counterparts. Many techniques developed through decades of research around fault tolerance, data replication and security have reached maturity, and let’s not forget compliance with industry standards. Look at it this way: Twitter still uses MySQL. Hadoop is a distributed data processing framework, and distributed processing can also be achieved with grid computing, peer-to-peer systems and others.
4. Data scientists are to Big Data what DB admins are to RDBMS
5. Big Data is a silver bullet