Taking control of the control plane



This content originally appeared on Level Up Coding - Medium and was authored by Vaibhav Rajput

After untangling the service mesh and understanding ETCD — The Easy Way, let us move on to the third article of this series. This time we will be focusing on a big chunk of a Kubernetes cluster: the Control Plane.

To put it simply, the Control Plane is the set of components used to plan, manage, schedule, and monitor the other elements of a cluster. It is hosted on the master node(s) of a cluster, from where it interacts with the rest of the worker nodes.

So what are the components that make up a control plane?

The composition

A basic control plane has four essential components: etcd, the kube-scheduler, the kube-controller-manager, and the kube-apiserver.

ETCD

etcd, as explained in detail in the last blog, is a distributed key-value store that holds every necessary detail about the cluster. This data is used to maintain the state of the cluster and also acts as the central golden source for all kubectl get commands.

No operation performed on a resource is considered complete until its status is updated in etcd.
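To make this concrete, here is a sketch of reading cluster data straight out of etcd on a control plane node. The endpoint and certificate paths are kubeadm defaults and may well differ in your setup:

```shell
# List the first few keys Kubernetes stores in etcd (kubeadm defaults
# assumed for the endpoint and certificate locations).
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only | head
```

Every resource you can see with kubectl get lives somewhere under the /registry prefix.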

kube-scheduler

The kube-scheduler, as the name suggests, schedules containers onto nodes. It identifies the right node for every container based on the container's resource requirements (such as memory and CPU), the node's capacity to host the container, the number of containers already running on the node, and other factors like node affinity, taints, and tolerations.

A scheduler first filters out the nodes that don't have enough capacity to host the container. Then it ranks the remaining nodes by how many resources would be left on them once the pod is scheduled. For example, suppose there are four nodes with CPU capacities of 3, 5, 10, and 8 vCPUs, and a new container needs 6 vCPUs. The 3-vCPU and 5-vCPU nodes clearly cannot handle it, so they get filtered out.
To rank the remaining nodes, the resources left after scheduling the container are computed: the 10-vCPU node would have 4 vCPUs left, while the 8-vCPU node would have 2. The 10-vCPU node therefore gets the higher rank, and if no other policy or rule blocks it, the container is scheduled there.
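The filter-and-rank logic above can be sketched in a few lines of shell. The node names and capacities are the hypothetical values from the example; this is an illustration only, not the real kube-scheduler algorithm:

```shell
# Hypothetical node names and CPU capacities from the example above.
nodes="node-a:3 node-b:5 node-c:10 node-d:8"
request=6                                # vCPUs the incoming pod needs

best_node=""; best_left=-1
for entry in $nodes; do
  name=${entry%%:*}
  cap=${entry##*:}
  left=$(( cap - request ))
  [ "$left" -lt 0 ] && continue          # filter: not enough capacity
  if [ "$left" -gt "$best_left" ]; then  # rank: most CPU left wins
    best_left=$left
    best_node=$name
  fi
done
echo "schedule on: $best_node (leaves $best_left vCPUs)"
```

Running this prints the 10-vCPU node, matching the ranking walked through above.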

If you set up your cluster using kubeadm, a kube-scheduler will already be present, running as a pod on the control plane node.

But if you’re setting up a cluster from scratch, you can download the binaries from the release page, extract them, and run the scheduler yourself. Alternatively, you can write your own scheduler that places pods as per your stated logic. For this, you’ll have to interact with the binding API, setting the value of nodeName for a pod through a POST call.
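As a sketch, a custom scheduler could bind a pod to a node by POSTing a Binding object to the pod's binding subresource. The pod name my-pod, the default namespace, the node name node-c, and the proxy port are all assumptions for illustration:

```shell
# In another terminal, proxy the API server to localhost first:
#   kubectl proxy --port=8001
# Then bind the (hypothetical) pod default/my-pod to node-c. This is
# what setting nodeName through the binding API amounts to.
curl -s -X POST \
  http://localhost:8001/api/v1/namespaces/default/pods/my-pod/binding \
  -H "Content-Type: application/json" \
  -d '{
        "apiVersion": "v1",
        "kind": "Binding",
        "metadata": { "name": "my-pod" },
        "target": { "apiVersion": "v1", "kind": "Node", "name": "node-c" }
      }'
```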

Controller-manager

For pretty much every component and every major function needed to maintain a Kubernetes cluster, there is a controller in place.

To name them all: attachdetach, bootstrapsigner, cloud-node-lifecycle, clusterrole-aggregation, cronjob, csrapproving, csrcleaner, csrsigning, daemonset, deployment, disruption, endpoint, endpointslice, endpointslicemirroring, ephemeral-volume, garbagecollector, horizontalpodautoscaling, job, namespace, nodeipam, nodelifecycle, persistentvolume-binder, persistentvolume-expander, podgc, pv-protection, pvc-protection, replicaset, replicationcontroller, resourcequota, root-ca-cert-publisher, route, service, serviceaccount, serviceaccount-token, statefulset, tokencleaner, ttl, ttl-after-finished

That’s a lot of controllers, and someone needs to manage them. This is done by a single process running in your cluster: the kube-controller-manager.

By default, it runs all the above controllers except the bootstrapsigner and tokencleaner controllers, but if you are deploying a controller manager manually, you can enable specific controllers using the --controllers option.
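For example, a manually deployed controller manager could opt in to the two controllers that are off by default. The kubeconfig path here is a kubeadm-style assumption:

```shell
# Run every default controller plus bootstrapsigner and tokencleaner.
# The '*' must be quoted so the shell does not expand it.
kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.conf \
  --controllers='*,bootstrapsigner,tokencleaner'
```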

So what’s the need for so many controllers?
A controller continuously monitors the state of a given component and works to bring it to the desired state. Let’s take a couple of them as examples:
Node controller: checks the status of nodes every 5 seconds (the node monitor period). If a node doesn’t respond, it waits for a grace period of 40 seconds for it to respond. If the node is still out of reach after that, it is marked as unreachable. Once a node is marked unreachable, the node controller waits another 5 minutes (the pod eviction timeout) for it to recover. If it fails to recover, the pods hosted on that node which are part of a replica set get rescheduled onto other nodes.
Replication controller: monitors the status of pods in all replica sets. When the pods fall below the desired count due to unhealthy or dead containers, the replication controller issues a command to create new pods in order to maintain the desired number.
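The node controller timings above are just defaults; they can be tuned through flags when starting the kube-controller-manager. The values shown simply restate those defaults:

```shell
# Node monitoring and eviction timings, spelled out explicitly.
kube-controller-manager \
  --node-monitor-period=5s \
  --node-monitor-grace-period=40s \
  --pod-eviction-timeout=5m0s
```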

kube-apiserver

Communication between all the management components in a Kubernetes cluster works on a hub-and-spoke model, and the kube-apiserver sits at the center of it.

The kube-apiserver is used in a variety of ways to fulfill different functions. External users use it to interact with the cluster through API wrappers like kubectl. Controllers use it to monitor the state of different resources. And worker nodes run kubelets to communicate with the master node, with this communication also going through the kube-apiserver.

Quick fact check:
kubelet: a process running on every worker node that acts as the endpoint for all communication with the master node. It is also responsible for creating pods on the node as directed by the kube-scheduler.

Generally, all communication to the kube-apiserver happens over HTTPS (port 443) with one or more forms of authentication enabled. Ideally, nodes should be provisioned with the cluster's public root certificate so that they can connect securely to the apiserver using valid client credentials.
However, connections in the other direction, from the apiserver to nodes, pods, and services, default to plain HTTP.
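Assuming kubectl is already configured against the cluster, you can see this HTTPS API in action by hitting the apiserver's health endpoint directly:

```shell
# Send a raw GET to the API server; a healthy apiserver replies "ok".
kubectl get --raw /healthz
```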

The setup

Setting up the control plane is the first step in creating a Kubernetes cluster. To set up a control plane using kubeadm, just run

kubeadm init

You can optionally expose the control plane through a DNS provider or a load balancer using the --control-plane-endpoint option. Such a setup is used when you have multiple control plane nodes for high availability and resilience. The kubeadm documentation covers the other options for custom configuration.
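For a highly available setup, that endpoint could be a DNS name fronting all control plane nodes. The domain here is a placeholder, and --upload-certs simply makes it easier to add further control plane nodes later:

```shell
# Hypothetical HA bootstrap: all control plane nodes share one endpoint.
kubeadm init \
  --control-plane-endpoint "k8s-api.example.com:6443" \
  --upload-certs
```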

When it finishes, this command will output a kubeadm join command. Take note of it, as it is used to join new nodes to the cluster.

And there you have it, your control plane is set up.

Parting note

With this, we are done with the third blog of this series. I hope it helped you understand the concepts and workings behind yet another component of a Kubernetes cluster. Till next time!


