Latest blog posts in the "Hadoop" category

Big data is everywhere, and there is an urgent need to collect and preserve the data that companies generate.

Breaking down the typical pattern of Hadoop job postings in the USA, a Hadoop developer's compensation averages around $110,000.

The sudden increase in data volumes, from the order of gigabytes to zettabytes, has created the need for a more organized file system for storing and processing data.


In local mode, all scripts run on a single machine without requiring Hadoop MapReduce or HDFS. This can be useful for developing and testing Pig logic.

Moving data and running different kinds of applications in Hadoop is great stuff, but it’s only half the battle.

Big data is all about applying analytics to more data, for more people.

We have already seen the Pig architecture and the Pig Latin application flow, and we covered the Pig design principles in the previous post.

Pig Latin is the programming language in which Pig programs are written. Pig converts a Pig Latin script into MapReduce jobs that can run within a Hadoop cluster.
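As a sketch of what such a script looks like, here is a minimal Pig Latin word count, written to a file and run in local mode. The file names are placeholders, and the run is guarded so the snippet is a no-op on machines where the `pig` launcher is not installed:

```shell
# Write a minimal Pig Latin word-count script (file names are placeholders).
cat > wordcount.pig <<'EOF'
lines  = LOAD 'input.txt' AS (line:chararray);                    -- read raw lines
words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;  -- map step
grpd   = GROUP words BY word;                                     -- group/shuffle
counts = FOREACH grpd GENERATE group, COUNT(words);               -- reduce step
DUMP counts;
EOF

# Run it in local mode only if the pig launcher is actually installed.
if command -v pig >/dev/null 2>&1; then
  pig -x local wordcount.pig || true
fi
```

Pig itself then plans and executes the equivalent MapReduce jobs; the script author never writes map or reduce classes by hand.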

Java MapReduce programs and the Hadoop Distributed File System (HDFS) provide us with a powerful distributed computing framework, but they come with one major drawback

In my previous post, I explained various Hadoop file system commands, including the “ls” command.
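For quick reference, a hedged sketch of the command itself (`/user/hadoop` is a placeholder path, and the invocation falls back to a notice on machines without a Hadoop install):

```shell
# List an HDFS directory; fall back to a notice when hadoop is absent.
listing=$(hadoop fs -ls /user/hadoop 2>/dev/null || echo "hadoop not on PATH")
echo "$listing"
```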

The core concept of HDFS is that a cluster can be made up of dozens, hundreds, or even thousands of individual computers, with the system’s files stored on directly attached disk drives.

Hadoop is primarily structured and designed to be deployed on a massive cluster of networked systems or nodes,

After we have stored piles and piles of data in HDFS (a distributed storage system spread over an expandable cluster of individual slave nodes),

Here we list and describe some common codecs that are supported by the Hadoop framework.
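Gzip is one codec Hadoop supports out of the box (others include DEFLATE, bzip2, LZ4, and Snappy). As a quick local illustration of what any codec does, the same compress/decompress round trip can be tried with the standard `gzip` tool (the file name and contents are placeholders):

```shell
# Compress a sample file and decompress it again, keeping the original.
echo "hadoop codec demo" > demo.txt
gzip -c demo.txt > demo.txt.gz   # -c writes to stdout, leaving demo.txt intact
gunzip -c demo.txt.gz            # round-trips back to the original text
```

In Hadoop itself the codec choice matters mostly for splittability and CPU cost, which is exactly what a codec comparison post weighs.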

Besides Amazon's major contribution of EMR and its related services, many other companies also provide useful Hadoop tools, listed as follows:

Though MapReduce as a technology is relatively new, it builds upon much of the fundamental work from both mathematics and computer science,

Besides Cloudera, there are a few other popular Hadoop distributions that are widely used for commercial and development purposes.

MapReduce processes distributed data sets through a fixed sequence of phases (map, shuffle, reduce), each of which runs in parallel across the cluster.
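That phase structure can be sketched with an ordinary shell pipeline, a common classroom analogy rather than Hadoop itself: `tr` plays the mapper, `sort` stands in for the shuffle, and `uniq -c` acts as the reducer.

```shell
# Word count as a map -> shuffle -> reduce pipeline.
counts=$(printf 'to be or not to be\n' |
  tr ' ' '\n' |   # map: emit one (word) record per line
  sort |          # shuffle: bring identical keys together
  uniq -c)        # reduce: count occurrences of each key
echo "$counts"
```

With this input, `be` and `to` each appear twice; in real MapReduce the same three roles are distributed across many machines instead of chained through pipes.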