
Kubernetes 101 Part 1/4: Architecture overview

tutorial

Overview

Kubernetes is a powerful set of DevOps tools that helps you deploy your code in a reliable, scalable way. You’ve probably already heard of Kubernetes, along with associated technologies such as containers and microservices. In this set of tutorials, we’ll take a deep dive on Kubernetes and learn what it’s capable of through hands-on tutorials.

The Kubernetes Cluster Architecture

Kubernetes runs on nodes, which are individual machines within a larger Kubernetes cluster. Nodes may correspond to physical machines if you run your own hardware, or more likely they correspond to virtual machines running in the cloud. Nodes are where your application or service is deployed, where the work in Kubernetes gets done.
While the nodes do the work, Kubernetes also provides a sophisticated mechanism for managing the nodes and ensuring they are in the correct state. This is called the control plane. The control plane is where you will do most of your interactions with your Kubernetes cluster. When you want to deploy an application, get information about the health of your cluster, or change the configuration, you do so by interacting with the control plane.
In the rest of this tutorial, we’ll take a closer look at what makes up the main components of a cluster by examining the internals of the control plane’s master node, as well as what the worker nodes look like.

The Master Node

The master node is the heart of the control plane. On the one hand, it’s a node like any other in the cluster, which means it’s just another machine or virtual instance. On the other hand, it runs the software that controls the rest of the cluster. It sends messages to all the other nodes in the cluster to assign work to them, and they report back via an API it runs.
At the heart of the master node software is the API Server. This API is the only endpoint for communication from the nodes to the control plane. The API Server is where the nodes and the master communicate about the status of pods, deployments, and all the other Kubernetes API objects.
Information about the current state of those objects is stored in etcd, a highly available key-value store. Etcd is the source of truth for the cluster: the API Server persists every validated state change there, and the other control plane components watch for those changes, so high-volume updates coming from the worker nodes can be handled efficiently.
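To make the idea concrete, here is a minimal Python sketch of cluster state kept in a key-value store. This is an illustration only, not the real etcd API: the dictionary, the `put`/`get_prefix` helpers, and the pod names are all assumptions, though the hierarchical `/registry/...` key layout mirrors how etcd keys for Kubernetes objects look.

```python
# Illustrative sketch: cluster state as a flat key-value store (not real etcd).
cluster_state = {}

def put(key, value):
    """Persist the desired state of an object, as the API Server does after validation."""
    cluster_state[key] = value

def get_prefix(prefix):
    """Fetch all objects under a key prefix, similar to a ranged etcd read."""
    return {k: v for k, v in cluster_state.items() if k.startswith(prefix)}

put("/registry/pods/default/web-1", {"image": "nginx:1.25", "node": None})
put("/registry/pods/default/web-2", {"image": "nginx:1.25", "node": None})

pods = get_prefix("/registry/pods/default/")
print(len(pods))  # 2
```

The point is the access pattern: every object lives under a predictable key, so components can read or watch an entire class of objects (for example, all pods in a namespace) with a single prefix query.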
Often, those updates require changing the state of the cluster. For example, you may send a request to deploy a new version of your code to the API Server. That request is validated by the API Server, and then the updated state is set in etcd. Finally, the update must be performed. That is the job of the Controller Manager. The Controller Manager is a background daemon that continually monitors the state of the system in a loop. When a new change request comes in, it converts it into the series of operations that must be performed on the cluster in order to bring about the desired state.
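The loop described above can be sketched in a few lines of Python. This is not the real Controller Manager code; the function name and data shapes are assumptions, but it captures the core idea of reconciliation: compare desired state with observed state and emit the operations needed to converge.

```python
# Illustrative reconcile loop: diff desired vs. observed state, emit operations.
def reconcile(desired_replicas, observed_replicas):
    """Return the operations needed to bring observed state to desired state."""
    if observed_replicas < desired_replicas:
        return [("create_pod", i) for i in range(desired_replicas - observed_replicas)]
    if observed_replicas > desired_replicas:
        return [("delete_pod", i) for i in range(observed_replicas - desired_replicas)]
    return []  # already converged, nothing to do

ops = reconcile(desired_replicas=3, observed_replicas=1)
print(ops)  # two create_pod operations
```

Note that the loop is declarative: you state the end goal (three replicas), and the controller works out the steps, rather than you issuing imperative "start two more pods" commands.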
Finally, the Scheduler is the part of the master node that is responsible for placing work on the worker nodes. The Scheduler looks for pods (which we’ll discuss next time) that have been defined but not yet assigned to a node, and finds a suitable worker node to run each of them.
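A toy version of that assignment step might look like the following. The real Scheduler weighs many factors (resource requests, affinity rules, taints); this sketch, with assumed data shapes and a simple "most free capacity wins" rule, only illustrates the basic flow of binding unassigned pods to nodes.

```python
# Illustrative scheduler sketch: bind each unassigned pod to the node
# with the most free capacity (a stand-in for real scheduling policy).
def schedule(pods, node_capacity):
    assignments = {}
    free = dict(node_capacity)  # remaining slots per node
    for pod in pods:
        if pod.get("node"):            # already scheduled, skip
            continue
        node = max(free, key=free.get)  # node with the most free slots
        if free[node] == 0:
            continue                    # cluster is full; pod stays pending
        assignments[pod["name"]] = node
        free[node] -= 1
    return assignments

pods = [{"name": "web-1", "node": None}, {"name": "web-2", "node": None}]
result = schedule(pods, {"node-a": 1, "node-b": 2})
print(result)
```

Pods that cannot be placed simply remain pending, which matches the real behavior: the Scheduler keeps looking for unassigned pods on every pass.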

The Worker Nodes

As the name implies, worker nodes do the real work in Kubernetes. When you deploy containers or pods in your application, you’re deploying them to be run on the worker nodes. Workers have the resources to run one or more containers, and the control plane tells them which ones to run.
Nodes run a process called kubelet, which is the primary way work gets assigned to a node. When it starts up, kubelet registers with the control plane to indicate the node is ready to do work. Then kubelet periodically checks in, via the master’s API Server, to see whether any pods have been assigned to its node. When there are, it pulls the PodSpec for those pods, which tells the worker node what to run.
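That check-in cycle can be sketched as follows. The dictionary of assignments and the `poll` function are assumptions for illustration, not the kubelet's real API, but the shape is the same: each node asks the API Server which pods are bound to it and receives their PodSpecs.

```python
# Illustrative sketch of a kubelet poll cycle (assumed API shape).
# What the API Server would report: pod name -> (bound node, PodSpec).
assigned = {
    "web-1": ("node-a", {"containers": [{"image": "nginx:1.25"}]}),
    "web-2": ("node-b", {"containers": [{"image": "redis:7"}]}),
}

def poll(node_name):
    """One kubelet check-in: return the PodSpecs this node should be running."""
    return {pod: spec for pod, (node, spec) in assigned.items() if node == node_name}

specs = poll("node-a")
print(list(specs))  # only the pods bound to node-a
```

The PodSpec is the contract between the control plane and the worker: everything the node needs to know (images, ports, volumes) travels in that document.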
In Kubernetes, work is done by containers. Containers are self-contained environments for executing code. The best-known container technology is Docker, but Kubernetes can be configured to work with several container technologies. This is set via the container runtime, the software on each worker node that actually pulls images and runs the containers.
Because a worker node may be running several containers that host services on different ports, the final component of a worker node is the kube-proxy service, which directs traffic to the right containers. The proxy service uses the port definitions in the container definitions to decide which ports to expose and where to send incoming traffic. It can also do some basic load balancing, such as round-robin distribution of traffic.
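Round-robin balancing is simple enough to show directly. This is a sketch of the idea, not kube-proxy's implementation; the endpoint addresses are made up, and the rotation simply cycles through the list of backends for a service.

```python
# Illustrative round-robin sketch: rotate requests across a service's endpoints.
import itertools

endpoints = ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]
rotation = itertools.cycle(endpoints)

def route(request):
    """Send each incoming request to the next endpoint in the rotation."""
    return next(rotation)

targets = [route(f"req-{i}") for i in range(4)]
print(targets)  # the fourth request wraps back to the first endpoint
```

Each endpoint gets an equal share of traffic with no coordination or per-request state beyond the rotation pointer, which is why round robin is a common default.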
Start building your Kubernetes application
