LATEST BLOG POSTS TAGGED "HADOOP"

HDFS vs. HBase: All you need to know

The sudden increase in the volume of data, from the order of gigabytes to zettabytes, has created the need for a more organized file system for the storage and processing of data.

Hadoop 

Switching from Java to a Big Data/Hadoop Career: The Whys and Hows

Once in a while, you may feel stuck in the same job profile, living a monotonous professional life. This generally leads to the realization that a change in your profile is much needed.

HBase Architecture: Introduction and RegionServers (Part 1)

The reason that folks such as chief financial officers are excited by the thought of using Hadoop is that it lets us store massive amounts of data across a cluster of low-cost commodity servers — that’s music to the ears of financially minded people.

Big Data: HBase as Distributed, Persistent, Multidimensional Sorted Map

By now we are well familiar with the power-packed characteristics and nature of HBase.

Pig Script Interfaces and Modes of Running in Hadoop

In local mode, all scripts run on a single machine without requiring Hadoop MapReduce or HDFS. This can be useful for developing and testing Pig logic.
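As a minimal, hypothetical sketch, local mode can also be driven from Java through Pig's PigServer API (the command-line equivalent is pig -x local); the input and output paths here are placeholders:

    import org.apache.pig.ExecType;
    import org.apache.pig.PigServer;

    public class LocalPigExample {
        public static void main(String[] args) throws Exception {
            // ExecType.LOCAL executes against the local file system,
            // with no MapReduce cluster or HDFS involved
            PigServer pig = new PigServer(ExecType.LOCAL);
            // Register a Pig Latin statement; 'input.txt' is a placeholder
            pig.registerQuery("lines = LOAD 'input.txt' AS (line:chararray);");
            // Materialize the relation into a local output directory
            pig.store("lines", "output");
        }
    }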

Introduction to Oozie in Hadoop

Moving data and running different kinds of applications in Hadoop is great stuff, but it’s only half the battle. For Hadoop’s efficiencies to truly start paying off for us, we need to start thinking about how we can tie together a number of these actions to form a workflow.

Clustering and Classification with Mahout

Unlike the supervised learning method described earlier for Mahout’s recommendation engine feature, clustering is a kind of unsupervised learning — where the labels of the data points are not known ahead of time and must be inferred from the data.

Statistical Analysis in Hadoop

Big data is all about applying analytics to more data, for more people. To carry out this task, big data practitioners use new tools — such as Hadoop — to explore and understand data in ways that previously might not have been possible (challenges that were “too complex,” “too expensive,” or “too slow”). Some of the “bigger analytics” that we often hear mentioned when Hadoop comes up in a conversation revolve around concepts such as machine learning, data mining, and predictive analytics.

MapReduce Driver Class

Although the mapper and reducer implementations are all we need to perform the MapReduce job, there is one more piece of code necessary in MapReduce: the driver class.
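As a minimal sketch, here is what such a driver might look like for a hypothetical WordCount job; the WordCountMapper and WordCountReducer classes are assumed to be defined elsewhere:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            // The driver configures the job and submits it to the cluster
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCountDriver.class);
            job.setMapperClass(WordCountMapper.class);   // assumed mapper class
            job.setReducerClass(WordCountReducer.class); // assumed reducer class
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            // Block until the job finishes; exit non-zero on failure
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }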

Pig Data Types in Hadoop

We have already seen the Pig architecture and the Pig Latin application flow. We also learned about the Pig design principles in the previous post.

Pig Design Principles in Hadoop

Pig Latin is the programming language for Pig programs. Pig converts Pig Latin scripts into MapReduce jobs that can be run within a Hadoop cluster.

Introduction to Pig in Hadoop

Java MapReduce programs and the Hadoop Distributed File System (HDFS) provide us with a powerful distributed computing framework, but they come with one major drawback — relying on them limits the use of Hadoop to Java programmers who can think in Map and Reduce terms when writing programs.

YARN’s Resource Management

The key component of YARN is the Resource Manager, which governs and maintains all the data processing resources in the Hadoop cluster. In other words, the Resource Manager is a dedicated scheduler whose task is to assign resources to requesting applications.

Hadoop File System Commands: ls Command Output Analysis

In this post, I explain the Hadoop file system commands, with a closer look at the output of the ls command.
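The same information the ls command prints can also be fetched programmatically. Here is a minimal sketch using the FileSystem API (the /user path is a placeholder), printing the familiar columns of hdfs dfs -ls output: permissions, replication factor, owner, group, size, and path:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ListFiles {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            // Each FileStatus carries the fields shown in ls output
            for (FileStatus status : fs.listStatus(new Path("/user"))) {
                System.out.printf("%s %d %s %s %d %s%n",
                        status.getPermission(), status.getReplication(),
                        status.getOwner(), status.getGroup(),
                        status.getLen(), status.getPath());
            }
        }
    }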

HDFS Architecture in Hadoop

The core concept of HDFS is that it can be made up of dozens, hundreds, or even thousands of individual computers, where the system’s files are stored in directly attached disk drives.

Data Replication in Hadoop: Slave node disk failures (Part 2)

Hadoop was originally designed to store data at petabyte scale, with any potential limitations to scaling out minimized.

Three modes of Hadoop Cluster Architecture

Hadoop is primarily designed to be deployed on a massive cluster of networked systems or nodes, featuring master nodes (which host the services that maintain Hadoop’s storage and processing capabilities) and slave nodes (where the data sets are stored and processed). We can, however, run Hadoop on a single computer, which is a great way to learn the basics of Hadoop by experimenting in a controlled space.

Why do we need MapReduce in Hadoop?

After we have stored piles and piles of data in HDFS (a distributed storage system spread over an expandable cluster of individual slave nodes), the first question that comes to mind is “How can we analyse or query this data?” Transferring all this data to a central node for processing isn’t going to work, since we would be waiting forever for the data to transfer over the network (not to mention waiting for everything to be processed serially). So what’s the solution? The solution is MapReduce: move the computation to the data instead.
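To make the idea concrete, here is a minimal sketch of the classic WordCount mapper and reducer; the map function runs where the data lives, and only the small (word, count) pairs travel over the network:

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Map phase: runs locally on the slave nodes that hold the data blocks
    public class WordCountMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE); // emit (word, 1) per occurrence
            }
        }
    }

    // Reduce phase: receives all the counts for a given word after the shuffle
    class WordCountReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum)); // emit (word, total)
        }
    }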

Writing and Reading Data from HDFS

For creating new files in HDFS, a set of processes has to take place among the components involved.
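From the client's point of view, the API is simple. Here is a minimal sketch (the file path is a placeholder) that writes and then reads back a file; under the hood, the client consults the NameNode for block placement and streams the bytes to and from the DataNodes:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadWrite {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/tmp/example.txt"); // placeholder path

            // Write: the client asks the NameNode for target DataNodes,
            // then streams the bytes into the write pipeline
            try (FSDataOutputStream out = fs.create(file)) {
                out.writeUTF("hello hdfs");
            }

            // Read: the client asks the NameNode for block locations,
            // then reads the bytes directly from the DataNodes
            try (FSDataInputStream in = fs.open(file)) {
                System.out.println(in.readUTF());
            }
        }
    }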

HDFS Federation and High availability

Before Hadoop 2 came into the picture, Hadoop clusters lived with the fact that the NameNode placed limits on the degree to which they could scale.

Various Data compression codecs in Hadoop

Here we list and describe some common compression codecs that are supported by the Hadoop framework.
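A job can be pointed at one of these codecs through the standard output-format hooks. As a minimal sketch, assuming an already configured Job instance, this enables gzip for the final job output and for the intermediate map output:

    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CompressionSetup {
        // Enable gzip compression on a job that has already been configured
        public static void enableGzipOutput(Job job) {
            // Compress the final job output with the gzip codec
            FileOutputFormat.setCompressOutput(job, true);
            FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
            // Compress intermediate map output to cut shuffle traffic
            job.getConfiguration().setBoolean("mapreduce.map.output.compress", true);
        }
    }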

Storing Data in HDFS

Just to be clear, storing data in HDFS is not entirely the same as saving files on your personal computer. In fact, quite a number of differences exist — most having to do with optimizations that make HDFS able to scale out easily across thousands of slave nodes and perform well with batch workloads.

Hadoop Toolbox

Besides the major contribution of Amazon’s EMR service and its related tools, many other companies also provide useful Hadoop tools.

Concept of MapReduce in Hadoop

Though MapReduce as a technology is relatively new, it builds upon much of the fundamental work from both mathematics and computer science, particularly approaches that look to express operations that would then be applied to each element in a set of data. Indeed, the individual concepts of functions called map and reduce come straight from functional programming languages, where they were applied to lists of input data.
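The lineage is easy to see in any language with first-class functions. Here is a small illustration using Java streams: map transforms each element of a list, and reduce folds the results into a single value:

    import java.util.Arrays;
    import java.util.List;

    public class FunctionalRoots {
        public static void main(String[] args) {
            List<String> words = Arrays.asList("map", "and", "reduce");
            // map: apply a function to every element of the list
            // reduce: fold the per-element results into one value
            int totalLength = words.stream()
                    .map(String::length)
                    .reduce(0, Integer::sum);
            System.out.println(totalLength); // prints 12
        }
    }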

Hadoop Distributions: EMC, Hortonworks and MapR

Besides Cloudera, there are a few other popular Hadoop distributions that are widely used for commercial and development purposes.
