Hadoop Developer – WordCount tutorial using Maven and NetBeans 7.3RC2

I have adapted the WordCount tutorial to Maven-based development, as this is probably the most popular way to develop in companies. I am not going to re-explain how WordCount itself works; this post aims to get you up and running with Hadoop development quickly.

I used NetBeans 7.3RC2 because of its integration with Maven, but feel free to use an IDE of your choice. I am also using Ubuntu 12.10 64-bit as a development environment, with Hadoop installed from the Debian distribution package.

Warning
When running your WordCount application, Hadoop might throw an out-of-memory exception because the default heap setting is -Xmx100m. The Apache website explains how to fix this, but its advice does not apply if you installed Hadoop from the Debian package: changing /etc/hadoop/hadoop-env.sh has no effect there and does not fix the problem. Here is a quick solution; open /usr/bin/hadoop and:

  1. Set JAVA to the path of the JVM that you want to use.
  2. Set JAVA_HEAP_MAX to increase the memory available to applications, e.g. -Xmx3000m, as sketched below.
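For example, the relevant lines in /usr/bin/hadoop might end up looking like the sketch below; the JVM path is an assumption, so point it at whichever JVM you actually use:

# In /usr/bin/hadoop (Debian package). Edits here take effect,
# unlike edits to /etc/hadoop/hadoop-env.sh on this install.
JAVA=/usr/lib/jvm/java-6-openjdk-amd64/bin/java
JAVA_HEAP_MAX=-Xmx3000m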
Here are the steps to create the WordCount tutorial in NetBeans:
  1. Create a new Maven-based Java project.
    • NetBeans will create an App.java class; you can rename it to WordCount or leave it as-is, as it doesn’t affect the outcome of the tutorial. I will refer to the main class as App.java.
  2. Add the Hadoop dependencies, which are available in Maven Central. I used hadoop-core 1.1.1 for this tutorial (see the dependency snippet after this list).
  3. Important: Maven doesn’t package dependencies when building an application, unless you are working with a “war” project, where it creates a lib folder. To make sure our libraries are available to our program when packaged, we need to add the maven-assembly-plugin to our pom.xml. We also declare our main class, which will be used to execute the program (see the plugin snippet after this list).
  4. Open App.java (or whatever you have renamed it to) and write the code shown below.
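For step 2, a minimal dependency entry in pom.xml might look like this (hadoop-core 1.1.1 is the version I used; adjust as needed):

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.1.1</version>
</dependency>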
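For step 3, here is a sketch of the maven-assembly-plugin entry for the build/plugins section of pom.xml, assuming the standard jar-with-dependencies descriptor; the main class matches the hadoop jar command further down:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <archive>
            <manifest>
                <mainClass>com.etapix.wordcount.App</mainClass>
            </manifest>
        </archive>
    </configuration>
</plugin>

With this configuration, running mvn assembly:single (or binding the plugin to the package phase) produces a self-contained jar next to the regular one.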
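For step 4, here is a minimal WordCount in the spirit of the classic Apache tutorial, using the old mapred API that ships with hadoop-core 1.1.1. Treat it as a sketch: the package and class names are assumptions chosen to match the hadoop jar command further down.

package com.etapix.wordcount;

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class App {

    // Mapper: emit (word, 1) for every token in the input line
    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            StringTokenizer tokenizer = new StringTokenizer(value.toString());
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    // Reducer: sum the counts emitted for each word
    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(App.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class); // reducer doubles as a local combiner
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0])); // input directory
        FileOutputFormat.setOutputPath(conf, new Path(args[1])); // output directory

        JobClient.runJob(conf);
    }
}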

You can create a local “input” directory containing a couple of text files, copy it into HDFS, and then execute the following:
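One way to do the copy, assuming a local directory named input (a sketch; the name is up to you):

$ hadoop dfs -put input input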

$ hadoop dfs -ls input

$ hadoop dfs -cat input/file01 

$ hadoop jar WordCount.jar com.etapix.wordcount.App input output

This assumes that you are running from your project home directory and that you have installed Hadoop using the Debian distribution; otherwise, you can follow the rest of the tutorial on the Apache website.


Big Data, Bigger Myths

Working at a company which focuses on finding value in data, I often come across clients asking me about the following:

1.      How “BIG” should my data be in order to be considered “Big Data”?

Not every company has a petabyte of data stored, so does size really matter? The simple answer is no. Companies should not think about “Big Data” in terms of size but as a new paradigm. Data is stored across multiple departments which do not know how to share it, and that undermines data-driven decisions. Big Data is about taking account of all enterprise data to help make timely data-driven decisions. As has been said before, all of it should be accessible within a few mouse clicks.

2.      SQL-based systems can’t do “Big Data”

This myth was created by inexperienced data “scientists” trying to sell their offerings. Back in 2007 I was in a meeting with a large client; they run casinos, betting sites, bingo halls and so on. A member of their tech team asked us: why can you not just run a set of SQL queries instead of exporting the data to an external application (built on top of Hadoop)? I took his question offline and ran a demo for him. The requirement was: calculate the distance from each of our members (more than 20 million of them) to every other member, tell us which clubs, casinos and bingo halls they are members of, which ones are closest to them, how often they visit the establishments, and so on. Bear in mind that a naive pairwise distance calculation over 20 million members is on the order of 2×10^14 pairs; that calculation alone blew the Oracle server away. Don’t get me wrong, SQL-based systems are part of “Big Data”, as they are among the best ways to store and retrieve data. For simple analytics, SQL provides great tools, and they are usually more mature than their NoSQL counterparts.

3.    NoSQL is the way forward and Hadoop is the Holy Grail

This is a funny one. The NoSQL movement started out as the death knell of the traditional RDBMS. Startups jumped on the bandwagon, and there was a NoSQL evangelist on every street corner; OK, maybe not, but you get the point. Then the early adopters started to see problems in the movement. Experienced data admins from the SQL world started converting, then they stopped. Why? To run an enterprise system, you need a reliable, mature application with a wealth of talent and knowledge to support it. In the SQL world, books and courses have been available for over 30 years; it was easy to attract new talent for new projects, not to mention the tools that made life easier. Let’s be clear: NoSQL systems are mainly used for data storage, just like their SQL counterparts, whereas in the SQL world the techniques developed through decades of research around fault tolerance, data replication and security have long since matured, and let’s not forget compliance with industry standards. Look at it this way: Twitter still uses MySQL. And Hadoop is a distributed data-processing framework whose job can also be done with grid computing, peer-to-peer systems and others.

4.    Data scientists are to Big Data what DB admins are to RDBMS

What is a data scientist, and how is one different from a DB admin? No difference at all. They both do exactly the same job: trying to get value out of data. This is another of those buzzwords that publishers use to sell books and increase page views on the net. I have worked with great DB admins who knew the data structures and understood their uses. We could ask the DB admin about any KPI, and he would retrieve it if it was possible. If you need to write Java or any other form of code, then I’m sorry, you are no longer a data scientist but a programmer.

5.    Big Data is a silver bullet

First of all, there is no such thing as a silver bullet, and Big Data is not an application to be implemented. Big Data is a mindset towards data: capturing, storing and processing it. We should not think in terms of “this data is owned by X department”. The data needs to be integrated to give us a single view of the company. The finance department can effectively assess revenues based on marketing campaigns, and marketing can better understand the customer based on information from the customer support team. The possibilities are endless. There is no silver bullet, but we can come very close to one if we change our mindset.
Feel free to share your views in the comments section.
Join the Big Data London group on Google+