

HBase Architecture: ZooKeeper and HBase Reliability (Part-5)

Jayden Bell · 10-May-2016

In the previous parts of this series on HBase architecture, we have seen RegionServers and regions, and how regions manage data reads and writes in HFile objects with the help of the BlockCache and the MemStore. After that we learned the concepts of compactions, i.e. minor and major compactions, and we also examined the role and duties of the MasterServer. Having covered all of these key components, one crucial component remains, as you might have guessed already (refer to the HBase architecture diagram in the adjoining figure): ZooKeeper.

ZooKeeper

So in this part, we will see what ZooKeeper does for us. Let's start with the original HBase architecture and observe where ZooKeeper fits into the picture.


[Figure: HBase architecture diagram, showing where ZooKeeper fits in the cluster]


ZooKeeper is a distributed cluster of servers that collectively provides reliable coordination and synchronization services for clustered applications. Admittedly, the name "ZooKeeper" may seem at first to be an odd choice, but when we understand what it does for an HBase cluster, we can see the logic behind it: when we are building and debugging distributed applications, "it's a zoo out there," so we should put ZooKeeper on our team.


HBase clusters can be huge, and coordinating the operations of the MasterServers, RegionServers, and clients can be a daunting task, but that's where ZooKeeper enters the picture. As with HBase, ZooKeeper clusters typically run on low-cost commodity x86 servers. Each individual x86 server runs a single ZooKeeper software process (hereafter referred to as a ZooKeeper server), with one ZooKeeper server elected by the ensemble as the leader and the rest acting as followers. ZooKeeper ensembles are governed by the principle of a majority quorum. Configurations with one ZooKeeper server are supported for test and development purposes, but if we want a reliable cluster that can tolerate server failure, we need to deploy at least three ZooKeeper servers to achieve a majority quorum.
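For a sense of how this quorum shows up in practice, here is a minimal Java sketch of an HBase client being pointed at a three-server ensemble. The zk1/zk2/zk3.example.com hostnames are placeholders of our own, while hbase.zookeeper.quorum and hbase.zookeeper.property.clientPort are the standard HBase client settings for this purpose:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ZookeeperQuorumExample {
    public static void main(String[] args) throws Exception {
        // Start from the default HBase client configuration.
        Configuration conf = HBaseConfiguration.create();

        // Point the client at the ZooKeeper ensemble; the hostnames
        // below are placeholders for our three ZooKeeper servers.
        conf.set("hbase.zookeeper.quorum",
                 "zk1.example.com,zk2.example.com,zk3.example.com");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        // The client asks ZooKeeper where the cluster's servers live;
        // it never needs the MasterServer address hard-coded.
        try (Connection connection = ConnectionFactory.createConnection(conf)) {
            System.out.println("Connected via ZooKeeper quorum");
        }
    }
}

Notice that the client configuration never mentions the MasterServer at all: ZooKeeper is the entry point to the whole cluster.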

So, how many ZooKeeper servers will we need? Five is the minimum recommended for production use, because we really don't want to go with the bare minimum of three. When planning our ZooKeeper ensemble, we can follow this simple formula:

                                         N = 2F + 1


where F is the number of server failures we can tolerate in our ZooKeeper ensemble, and N is the total number of ZooKeeper servers we must deploy. For example, to tolerate two failed servers (F = 2), we need N = 2 × 2 + 1 = 5 servers.
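To make the arithmetic concrete, here is a tiny Java sketch of the sizing rule; the helper names serversNeeded and majorityQuorum are ours, purely for illustration:

public class EnsembleSizing {
    // N = 2F + 1: servers needed to tolerate F failures.
    static int serversNeeded(int failuresTolerated) {
        return 2 * failuresTolerated + 1;
    }

    // A majority quorum is more than half of the ensemble.
    static int majorityQuorum(int ensembleSize) {
        return ensembleSize / 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(serversNeeded(1));   // 3 servers tolerate 1 failure
        System.out.println(serversNeeded(2));   // 5 servers tolerate 2 failures
        System.out.println(majorityQuorum(5));  // 3 of 5 servers must agree
    }
}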

Five is recommended because one server can be taken down for maintenance while the ZooKeeper cluster still tolerates one more server failure: with five servers, the majority quorum is three, which survives even with two servers out of action. The real magic behind ZooKeeper is driven by "znodes"; we will talk about znodes and how they work in our next and final post in the HBase architecture series.

HBase Reliability

HBase provides a high degree of reliability. When configured with the proper redundancy (a backup MasterServer, a proper ZooKeeper configuration, and sufficient RegionServers), HBase is sometimes considered fault tolerant, meaning that it can tolerate any failure and still function properly. This is not exactly true, of course, since (for example) a cascading failure could bring the cluster down if the ZooKeeper ensemble and/or the MasterServers all failed at once. When thinking about HBase and fault tolerance, remember that HBase is a distributed system, and failure modes in distributed systems are quite different from those of a traditional high-end scalable database server in a high-availability (HA) configuration.

To understand HBase fault tolerance and availability in more detail, we need to consider the CAP theorem, which we introduced in our post, Big Data: CAP Theorem. Remember, HBase provides "Consistency" and "Partition Tolerance" but is not always "Available." For example, we may suffer a RegionServer failure, and when we do, access to our data may be delayed if the failed RegionServer was managing the key (or keys) we were querying at the time of the failure. The good news is that a properly configured system will recover (thanks to ZooKeeper and the MasterServer), and our data will become available again without manual intervention. So HBase is consistent and tolerant of network partitions, but it is not highly available in the way traditional HA database systems are.
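To illustrate that recovery path from the client's point of view, here is a minimal Java sketch. The table name "mytable", column family "cf", qualifier "q", and row key are hypothetical placeholders of our own; hbase.client.retries.number and hbase.client.pause are standard HBase client settings that control how patiently the client waits out a region reassignment:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class FailoverReadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // How many times the client retries, and how long it pauses
        // between attempts, while regions are being reassigned.
        conf.set("hbase.client.retries.number", "10");
        conf.set("hbase.client.pause", "1000"); // milliseconds

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("mytable"))) {
            // If the RegionServer holding this key has just died, this Get
            // does not fail immediately: the client library retries while
            // the MasterServer (alerted by ZooKeeper) reassigns the region,
            // and the call succeeds once the region is back online elsewhere.
            Get get = new Get(Bytes.toBytes("row-key-1"));
            Result result = table.get(get);
            System.out.println("Value: " +
                Bytes.toString(result.getValue(Bytes.toBytes("cf"),
                                               Bytes.toBytes("q"))));
        }
    }
}

The temporary unavailability we described above shows up here only as extra latency on the Get, not as a manual recovery step.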


Updated 07-Sep-2019
