An Intro For Beginners: What is Kubernetes & How to Get Started With It

Originally developed by Google, Kubernetes is an open-source system for managing containerized applications in clustered environments. It aims to improve how distributed, related components and services are managed across a wide range of infrastructure, providing scalability, predictability, and high availability.

It’s hard to appreciate why Kubernetes is so powerful without a sense of how it’s organized and designed. In a nutshell, Kubernetes architecture comprises multiple layers, with each upper layer abstracting away the complexity of the layers beneath it.

At the lowest level, Kubernetes uses a shared network to bring together multiple physical or virtual machines so they can communicate with each other. It’s on this foundation that all Kubernetes workloads, capabilities, and components are configured.

Each machine in a cluster is assigned a role. One server (or several servers where there’s a need for high availability) acts as the master server. It’s the brain and gateway. The master server exposes APIs for clients and users, checks the health of other servers, decides how to schedule work and orchestrates overall communication between the different components of the cluster.

The master server is responsible for the majority of the logic that Kubernetes provides. The rest of the machines function as nodes, i.e. servers that accept and run workloads using internal and external resources. There’s a lot to learn about Kubernetes, but since we can only cover the absolute basics here, we’ll take a closer look at the components of the master server and the nodes.

A. Master Server Components

The master server is the principal control plane in Kubernetes. It’s the main touch point for users and administrators. The master server has several components. The components may be installed on just one machine or be distributed across multiple master server machines.

1. Etcd

The etcd project is a lightweight, distributed key-value store developed by CoreOS. It can be configured to run across several machines. Kubernetes relies on etcd as a repository for configuration information that is accessible to every node in the cluster, which makes it useful for service discovery.

It also gives components the ability to reconfigure themselves as that information changes. Etcd helps maintain cluster state with features such as distributed locking and leader election. Thanks to its simple HTTP/JSON API, the interface for setting or retrieving values is fairly straightforward.
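To make the key-value and watch ideas concrete, here is a toy in-memory sketch (this is not the etcd client API, and the `/registry/...` key below is just an illustrative path): components store values under keys and register watchers that fire when matching keys change.

```python
# Toy illustration of the key-value and watch semantics Kubernetes
# relies on in etcd. Simplified and in-memory; not a real etcd client.

class ToyKeyValueStore:
    def __init__(self):
        self._data = {}
        self._watchers = {}  # key prefix -> list of callbacks

    def put(self, key, value):
        self._data[key] = value
        # Notify watchers whose prefix matches, mimicking etcd's watch feature.
        for prefix, callbacks in self._watchers.items():
            if key.startswith(prefix):
                for cb in callbacks:
                    cb(key, value)

    def get(self, key):
        return self._data.get(key)

    def watch(self, prefix, callback):
        self._watchers.setdefault(prefix, []).append(callback)

store = ToyKeyValueStore()
events = []
store.watch("/registry/pods/", lambda k, v: events.append((k, v)))
store.put("/registry/pods/default/web", {"phase": "Running"})
print(store.get("/registry/pods/default/web"))  # {'phase': 'Running'}
```

This watch mechanism is what lets components "reconfigure themselves" as described above: instead of polling, they react to change notifications.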

2. Kube-apiserver

The API server is among the most fundamental services of the master server. It’s the principal tool for managing the cluster, allowing administrators and users to configure organizational units and workloads.

The kube-apiserver is responsible for ensuring that the details of deployed containers and services stay in sync with the data in the etcd store. It acts as the bridge between components, disseminating information and helping maintain cluster health. The kube-apiserver exposes a RESTful interface, which allows libraries and tools to communicate with it easily.
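"RESTful" here means every object in the cluster maps to a predictable URL path. The small helper below sketches the path layout used by the core v1 API group (the resource and namespace names are just examples):

```python
# Sketch of how Kubernetes' RESTful design maps objects to URL paths
# on the kube-apiserver, following the core v1 API group's conventions.

def resource_path(resource, namespace=None, name=None):
    """Build a kube-apiserver URL path for a core (v1) resource."""
    parts = ["/api/v1"]
    if namespace:
        parts.append(f"namespaces/{namespace}")
    parts.append(resource)
    if name:
        parts.append(name)
    return "/".join(parts)

print(resource_path("pods", namespace="default", name="web"))
# /api/v1/namespaces/default/pods/web
print(resource_path("nodes"))
# /api/v1/nodes
```

Because the paths are this regular, client libraries and tools like kubectl can be generated or written against the API with very little special-casing.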

3. Kube-controller-manager

The kube-controller-manager is a general service with numerous responsibilities. Its main purpose is to run the various controllers that regulate the state of the cluster, manage workload life cycles, and perform routine tasks. For example, the replication controller ensures that the number of replicas defined for a pod is consistent with the number currently deployed on the cluster.

The specific details of each operation are stored in etcd, and the kube-controller-manager watches for changes through the API server. Whenever a change is detected, the controller reads the new information and carries out the procedure that moves the cluster toward the desired state.
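This compare-and-converge pattern is often called reconciliation. A hypothetical, heavily simplified version of what a replication controller computes might look like this:

```python
# Minimal sketch of the reconciliation pattern used by controllers such
# as the replication controller: compare the desired state with the
# observed state and return the actions needed to converge.

def reconcile(desired_replicas, running_pods):
    """Return the actions a controller would take to reach the desired count."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods running: create the missing ones.
        return [("create", i) for i in range(diff)]
    if diff < 0:
        # Too many pods running: delete the surplus.
        return [("delete", pod) for pod in running_pods[:(-diff)]]
    return []  # observed state already matches desired state

print(reconcile(3, ["web-a"]))           # two pods missing -> two creates
print(reconcile(1, ["web-a", "web-b"]))  # one pod too many -> one delete
```

Real controllers loop continuously, so the cluster is pulled back toward the desired state even after failures, not just after explicit changes.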

4. Kube-scheduler

The kube-scheduler is tasked with the actual assignment of workloads to the different nodes of the cluster. It reads the operating requirements of a workload, examines the existing infrastructure environment and then assigns work to the best suited node(s).

The scheduler tracks the available capacity of every host in the cluster to ensure that workloads do not exceed the available resources. The kube-scheduler must therefore always know the total capacity of the cluster and which resources have already been assigned to existing workloads.
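A toy version of that decision: filter out nodes without enough spare capacity, then pick the node with the most free resources. The real kube-scheduler uses far richer filtering and scoring; the node data below is hypothetical.

```python
# Toy scheduler: choose a node for a pod based on free CPU capacity.

def schedule(pod_cpu, nodes):
    """nodes maps a node name to (cpu_capacity, cpu_already_allocated)."""
    # Filtering step: keep only nodes with enough free capacity.
    feasible = {
        name: cap - used
        for name, (cap, used) in nodes.items()
        if cap - used >= pod_cpu
    }
    if not feasible:
        return None  # the pod stays pending until capacity frees up
    # Scoring step: prefer the node with the most free CPU.
    return max(feasible, key=feasible.get)

nodes = {"node-a": (4.0, 3.5), "node-b": (4.0, 1.0), "node-c": (2.0, 0.5)}
print(schedule(1.0, nodes))  # node-b has the most free CPU
```

Spreading work onto the least-loaded feasible node is only one possible scoring policy; the scheduler can also pack pods tightly or honor affinity rules.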

5. Cloud-controller-manager

The cloud-controller-manager is the glue that lets Kubernetes interact with cloud providers offering diverse APIs, features, and capabilities without losing its generic internal constructs.

Through this, Kubernetes can constantly recalibrate its state information based on the data acquired from cloud providers, adjust cloud resources as needed, and create new cloud services to satisfy the requirements of work received by the cluster.

B. Node Server Components

Node servers comprise a number of components that are vital for running assigned workloads, configuring app container networking, and communicating with the master server.

1. A Container Runtime

The container runtime is the first and most important element of a node. The most popular is Docker, but there are alternatives such as runc and rkt.

The container runtime starts and manages containers, i.e. applications isolated in lightweight operating environments. Every unit of work in the cluster is implemented as one or more deployed containers. The container runtime on each node runs the containers that make up the workloads submitted to the cluster.

2. Kubelet

The kubelet is a small service that serves as each node’s main point of contact with the rest of the cluster. It relays information to and from the control plane services, reading configuration details from the API server rather than from etcd directly. The kubelet authenticates to the cluster and receives work and commands from the master components.

Work comes in the form of manifests, which define both the workload and its operating parameters. The kubelet then takes responsibility for maintaining that work on the node, managing the launch and teardown of containers via the container runtime.
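A minimal Pod manifest gives a feel for what the kubelet receives. This is an illustrative example, with made-up names; it pairs the workload (a container image) with operating parameters (resource limits):

```yaml
# A minimal Pod manifest: the workload plus its operating parameters.
apiVersion: v1
kind: Pod
metadata:
  name: hello-web
spec:
  containers:
    - name: web
      image: nginx:1.19
      resources:
        limits:
          cpu: "500m"     # half a CPU core
          memory: "128Mi"
```

The kubelet hands the container spec to the runtime and then keeps watching: if the container dies, the kubelet restarts it according to the pod's restart policy.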

3. Kube-proxy

The kube-proxy is a proxy service that manages host subnetting and makes services available to the other components of the node. It forwards requests to the correct containers, performs basic load balancing, and is responsible for keeping the networking environment accessible, predictable, and, where appropriate, isolated.
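The load balancing kube-proxy provides can be pictured as a stable service address fanning requests out across backend pod endpoints. The sketch below uses a simple round-robin policy and made-up endpoint addresses; the real kube-proxy programs iptables or IPVS rules rather than proxying in user space.

```python
# Toy picture of service-level load balancing: one stable front,
# many pod endpoints behind it, chosen round-robin.

import itertools

class ToyServiceProxy:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def forward(self, request):
        backend = next(self._cycle)  # rotate through the pod endpoints
        return f"{request} -> {backend}"

proxy = ToyServiceProxy(["10.0.1.5:8080", "10.0.2.7:8080"])
print(proxy.forward("GET /"))  # GET / -> 10.0.1.5:8080
print(proxy.forward("GET /"))  # GET / -> 10.0.2.7:8080
```

The key property is that clients only ever see the service's address; pods can come and go behind it without anyone reconfiguring the callers.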

Kubernetes is an exciting project that gives users the ability to run scalable containerized workloads on a highly abstracted, highly available platform. While its architecture and internal components may seem complex at first, the flexibility and resilience you enjoy once you master Kubernetes are arguably unparalleled among open-source systems.

  Modified On Jun-03-2020 03:29:35 AM
