Latest articles tagged "Hadoop"

It is no secret that big data has been hogging the limelight in many sectors in recent times. Such is its power that, according to a study by Accenture, 79% of executives feel that companies that do not embrace big data may lose their competitive position and could even face extinction.

We have already examined the dos and don'ts of Row Keys and Column Families in detail in our previous post.

ZooKeeper enables coordination and synchronization with what it calls "znodes", which are presented as a directory tree and resemble the file path names we would see in a Unix file system.
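To make the directory-tree analogy concrete, here is a minimal sketch using the ZooKeeper Java client; the connection string and the /hbase-demo path are illustrative assumptions, not details from the article.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZnodeExample {
    public static void main(String[] args) throws Exception {
        // Connect to a ZooKeeper ensemble (host:port is an assumed, local-test value).
        ZooKeeper zk = new ZooKeeper("localhost:2181", 15000, event -> { });

        // Create a znode at a Unix-style path, much like creating a file in a directory tree.
        String path = zk.create("/hbase-demo", "config-data".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        // Read it back, as we would read a small file.
        byte[] data = zk.getData(path, false, null);
        System.out.println(path + " -> " + new String(data));

        zk.close();
    }
}
```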

In the previous parts of this series, "HBase Architecture", we looked at RegionServers.

In HBase Architecture: Part 1, we started our discussion of the architecture by describing RegionServers; that we began with them instead of the MasterServer may have surprised you.

Previously, we looked at RegionServers and learned how regions work. Here we examine compactions.

Previously, we introduced the HBase architecture and examined its most basic component, the RegionServer.

An HBase data store comprises one or more tables, which are indexed by row keys. Data is stored in rows with columns, and rows can have multiple versions.
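As a rough illustration of that model, the sketch below uses the HBase Java client to write and read a single cell; the table name, column family, and row key are hypothetical examples, not taken from the article.

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseRowExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             // Table name "users" and family "info" are illustrative only.
             Table table = conn.getTable(TableName.valueOf("users"))) {

            // Each Put is addressed by a row key; columns live inside a column family.
            Put put = new Put(Bytes.toBytes("row-001"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
            table.put(put);

            // A Get on the same row key returns the latest version of each cell by default.
            Result result = table.get(new Get(Bytes.toBytes("row-001")));
            System.out.println(Bytes.toString(
                    result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"))));
        }
    }
}
```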

As we might have guessed, Google's BigTable distributed data storage system (DDSS) was designed to meet the demands of big data. Big data applications store massive amounts of data, but big data content is also often variable.

I think everybody remembers their first surfing experience on the World Wide Web; we just knew it was an incredible innovation for the IT industry.

Selecting a database that fits our application requirements is a daunting task, since no one size fits all.

Pig Latin has a simple syntax with powerful semantics that we will use to carry out two primary operations.

Developers and programmers continue to explore various approaches to leveraging the distributed computation benefits of MapReduce and the almost limitless storage capabilities of HDFS in an intuitive manner that can be exploited by R.

Machine learning refers to a field of artificial intelligence (AI) that provides tools enabling computers to improve their analysis on the basis of previous events.

The MapReduce API has two main classes that do the Map-Reduce work for us: the Mapper class and the Reducer class.
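As a minimal sketch of what those two classes look like in practice, here is a word-count style pair built on the org.apache.hadoop.mapreduce API; the class names TokenMapper and SumReducer are illustrative, not from the article.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: emits (word, 1) for every word in an input line.
class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}

// Reducer: sums the counts emitted for each word.
class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable v : values) {
            sum += v.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
```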

"Simple" often reads as "elegant" when it comes to those remarkable architectural drawings for that new Silicon Valley mansion we have planned for when the money starts rolling in after we implement Hadoop.

With the release of Hadoop 2, however, YARN was introduced, opening the door to a whole new world of data processing opportunities.

The Mapper class is responsible for providing the implementation of the mapping step in a MapReduce job.

HDFS is one of the two main components of the Hadoop framework; the other is the computational paradigm known as MapReduce.

In HDFS, the data block size needs to be large enough to warrant the resources dedicated to an individual unit of data processing.
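For illustration, the block size can be set cluster-wide through the dfs.blocksize property or per file at creation time; the sketch below assumes a hypothetical /tmp/demo.dat path and example sizes that are not from the article.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Client-side default block size for new files (256 MB here, an illustrative value).
        conf.setLong("dfs.blocksize", 256L * 1024 * 1024);

        try (FileSystem fs = FileSystem.get(conf);
             // Per-file override: overwrite flag, buffer size, replication, block size (128 MB).
             FSDataOutputStream out = fs.create(new Path("/tmp/demo.dat"),
                     true, 4096, (short) 3, 128L * 1024 * 1024)) {
            out.writeBytes("sample record\n");
        }
    }
}
```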

Hadoop went through a big API change in its 0.20 release, which became the basic interface in the 1.0 version.

As we already know, in Hadoop, files are composed of individual records, which are ultimately processed one by one by mapper tasks.

Some of the data that ends up in HDFS lands there through database load operations or other types of batch processes.

Whenever a user stores a file in HDFS, the file is first broken down into data blocks, and three replicas of each data block are stored on slave nodes (data nodes) throughout the Hadoop cluster.
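A quick way to see this behaviour is to ask the NameNode where a file's blocks ended up. The sketch below uses the HDFS FileSystem API to print the replication factor and the data nodes holding each block; the /data/events.log path is an illustrative assumption.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationExample {
    public static void main(String[] args) throws Exception {
        try (FileSystem fs = FileSystem.get(new Configuration())) {
            // Path is illustrative; point it at any file already stored in HDFS.
            Path file = new Path("/data/events.log");
            FileStatus status = fs.getFileStatus(file);

            System.out.println("Replication factor: " + status.getReplication());

            // Each BlockLocation lists the data nodes holding one block of the file.
            for (BlockLocation block : fs.getFileBlockLocations(status, 0, status.getLen())) {
                System.out.println("Block at offset " + block.getOffset()
                        + " stored on: " + String.join(", ", block.getHosts()));
            }
        }
    }
}
```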

As we already know, HDFS is a journaled file system: new changes to files in HDFS are captured in an edit log that is stored on the NameNode in a file named edits.