The architecture of Kubernetes and Components, Installation and Configuration.#kubeweekday-1

What is Kubernetes?

Kubernetes Architecture

Kubernetes components

Master node

Worker node

Kubernetes API server

ETCD

Kubernetes-scheduler

Kube-controller-manager

Kubelet

Kubernetes Installation and Configuration

What is Kubernetes?

  • In organizations, large numbers of containers run across multiple hosts at the same time, so it becomes very hard to manage all of these containers together. The simple solution to this is Kubernetes.

  • Kubernetes is an open-source container orchestration platform, originally developed by Google, that automates the deployment, scaling, and management of containerized applications.

  • It provides a highly scalable, highly available, high-performance, and fault-tolerant architecture with support for disaster recovery, and it can run on a wide range of cloud platforms and data center infrastructures.

In this article, we will discuss the architecture and components of Kubernetes, and how to install and configure a Kubernetes cluster on AWS using kubeadm.

Kubernetes Architecture

  • Generally, Kubernetes follows an architecture of one master node and one or more worker nodes.

1) A Kubernetes cluster consists of one or more master nodes and multiple worker nodes. Running more than one master is used to provide high availability.

2) The Master node communicates with Worker nodes through the Kube API-server, which talks to the kubelet on each worker.

3) A worker node can run one or more pods, and each pod can contain one or more containers.

4) Containers are deployed from container images, which can come from a registry or be supplied externally by the user.

Kubernetes components:

Kubernetes architecture comprises several components that work together to manage containerized applications. Here is a brief overview of each component:-

Kubernetes Master Node:

The Master node is a collection of components like storage (etcd), the controller manager, the scheduler, and the API-server that make up the control plane of Kubernetes. When you interact with Kubernetes using the CLI, you are communicating with the Kubernetes cluster's master node. All of these control-plane processes run on a single node in the cluster, and this node is also referred to as the master.

  • The master node is the control plane of the Kubernetes cluster, responsible for managing the overall state of the cluster. Its components are described below.

  • The master node schedules pods, monitors them, re-schedules or restarts failed pods, and joins new nodes to the cluster.

Master Node Components:-

The Master node has four components. Let's understand them step by step:-

1) Kube API-server: It performs all the administrative tasks on the master node. A user sends request commands in YAML/JSON format to the API server, which then processes and executes them. The Kube API-server is the front end of the Kubernetes control plane.

  • Whenever a user wants to deploy an application on a Kubernetes cluster, they need to interact with the API server using some client. The client can be a UI like the Kubernetes Dashboard or a command-line tool like kubectl.

  • So the API server is like a cluster gateway, receiving requests or queries about scheduling pods, deploying new applications, creating new services, etc.

  • For example, when a user requests to schedule a new pod, the API server checks the request, validates it, and forwards it to the scheduler.
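
As a small, hedged illustration (assuming kubectl is already configured against a cluster, and the pod name nginx-demo is just an example), each of these commands is turned into a request to the Kube API-server:

kubectl run nginx-demo --image=nginx   # kubectl sends a pod-creation request to the API server
kubectl get pods                       # another API request; the server returns the current cluster state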

2) Scheduler: After the API server validates the request, it goes to the scheduler. The scheduler checks which worker node has enough available resources, such as CPU and RAM, to run the pod. In short, it decides on which node the new pod should be scheduled.

The scheduler calculates how many resources would remain free on each node after placing the pod on it. For example, if a pod needs 10 CPUs and the two candidate nodes have 12 and 16 CPUs available, the node with 16 CPUs would still have 6 CPUs free after placement, 4 more than the other one, so it gets a better rank.
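
The scheduler makes this decision based on the resource requests declared in the pod spec. Below is a minimal sketch (the pod name and the CPU figure reuse the example above and are illustrative only):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: big-pod              # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "10"            # the scheduler only ranks nodes that have at least 10 CPUs free
EOF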

3) Controller Manager: The controller manager detects state changes in the cluster, such as a pod crashing or dying. It then asks the scheduler to reschedule those pods so that replacements are created.

There are many more such controllers available within Kubernetes.
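
For example, the replication-related controllers keep the desired number of pod replicas running. A rough sketch of this self-healing behaviour (the deployment name web is just an example):

kubectl create deployment web --image=nginx --replicas=3   # the deployment controller maintains 3 replicas
kubectl delete pod <one-of-the-web-pods>                   # simulate a crash; the controller notices the missing replica
kubectl get pods                                           # a replacement pod is created so the count returns to 3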

4) ETCD: It is a distributed, reliable key-value store that is simple, secure, and fast. It is the brain of the cluster and stores information about the cluster state. Whenever a pod dies or is scheduled, this information is saved or updated in the etcd cluster in key-value format. It stores only cluster data, not application data.
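
As a rough sketch only, on a kubeadm-built master you can peek into etcd with etcdctl, assuming etcdctl is installed and the default kubeadm certificate paths are in place:

ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only | head   # cluster objects are stored as keys under /registry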

Worker Node Components:-

Three services must be installed on the worker node for the Kubernetes cluster to function properly. Each node can have multiple pods on it. The services or components of the worker node are as below-

1) Kubelet:- The kubelet is responsible for managing the pods on its node. So if a pod is not running or dies, the kubelet will take care of it (a quick way to check the kubelet itself is shown after the list below).

The kubelet is like the captain of the ship (worker node). It leads all activities on the ship and is the sole point of contact with the master ship (master node).

The responsibilities of the kubelet are as follows:

  • Doing all the paperwork necessary to become part of the cluster.

  • Load/unload containers on the ship as instructed by the scheduler on the master.

  • Send back reports at regular intervals on the status of the ship and the containers on it.

  • Register the node with the Kubernetes cluster.

  • When a kubelet receives instructions to load a container or a POD on the node, it requests the container runtime engine, which may be Docker, to pull the required image and run an instance.

  • The kubelet then continues to monitor the state of the POD and the containers in it and reports to the kube-apiserver at regular intervals.
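
Because the kubelet runs as a system service on every node (in the systemd-based kubeadm setup used later in this article), a quick, hedged way to check on it is:

sudo systemctl status kubelet     # the kubelet service should be active (running)
sudo journalctl -u kubelet -n 50  # recent kubelet logs, useful when pods fail to start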

2) Kube-proxy:- It takes care of the networking part of Kubernetes, handling the network rules that allow pods and services to communicate. Kube-proxy is a process that runs on each node in the Kubernetes cluster.

The kube-proxy is responsible for implementing the Kubernetes networking model on the node. It maintains network rules and ensures that communication between pods and services is routed correctly.
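
As a hedged illustration, when kube-proxy runs in its default iptables mode you can see both the kube-proxy pods and the NAT rules it programs for Services (chain names can differ by version and proxy mode):

kubectl get pods -n kube-system -l k8s-app=kube-proxy   # kube-proxy runs as a DaemonSet pod on every node
sudo iptables -t nat -L KUBE-SERVICES | head            # service-to-pod routing rules written by kube-proxy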

3) Container Runtime:- The container runtime is responsible for managing the containers. It pulls the container images from the registry, starts and stops containers, and manages container storage.
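
Since this article uses Docker as the container runtime, a simple way to see what the runtime is doing on a node is shown below (a sketch; on clusters using containerd you would use crictl instead):

sudo docker ps        # containers started by the kubelet through the Docker runtime
sudo docker images    # images the runtime has pulled from the registry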

What are K8s Pods?

  • Kubernetes pods are the foundational unit for all higher Kubernetes objects.

  • A pod hosts one or more containers.

  • It can be created using either a command or a YAML/JSON file.

  • Use kubectl to create pods, view the running ones, modify their configuration, or terminate them (see the example after this list). Kubernetes will attempt to restart a failing pod by default.

  • If the pod repeatedly fails to start, we can use the kubectl describe command to find out what went wrong.
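
As mentioned above, a pod can be created either with a command or from a YAML/JSON file. Here is a minimal sketch of both ways (the pod names are illustrative):

kubectl run my-pod --image=nginx        # imperative: create a pod directly from an image

cat <<EOF | kubectl apply -f -          # declarative: create a similar pod from a YAML manifest
apiVersion: v1
kind: Pod
metadata:
  name: my-pod-yaml
spec:
  containers:
  - name: nginx
    image: nginx
EOF

kubectl get pods                        # view the running pods
kubectl describe pod my-pod             # inspect events if the pod fails to start
kubectl delete pod my-pod my-pod-yaml   # terminate the pods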

Kubernetes Installation and Configuration:

Let's see how to install Kubernetes on AWS. Since we need to build a cluster, we will use the kubeadm installation method.

We will create two EC2 instances: one as the Master and another as a Worker.

For the Master Node we need:

  • t2.medium

  • 2-core CPU

  • 4GB RAM

For the Worker Node we need:

  • t2.micro

Launch Master Node Instance

  1. Select Ubuntu AMI

  2. Select an instance type t2.medium

  3. Select keypair

  4. Allow HTTP/HTTPS ports

Launch Worker Node Instance

  1. Select Ubuntu AMI

  2. Select an instance type t2.micro

  3. Select keypair

  4. Allow HTTP/HTTPS ports

Now we need to install docker on both the Master and Worker nodes.

sudo apt update -y
sudo apt install docker.io -y

Now we have to start docker on both Instances.

sudo systemctl start docker
sudo systemctl enable docker

To set up Kubernetes we need kubeadm.

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg 
# download the official Kubernetes apt signing key
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
# add the Kubernetes apt repository, signed with the key above

Update the package on both machines.

sudo apt update -y 
# refresh the package index on both nodes so the Kubernetes repository is picked up

Now we will install kubeadm, kubelet, and kubectl on both the Master Node and Worker Node.

sudo apt install kubeadm=1.20.0-00 kubectl=1.20.0-00 kubelet=1.20.0-00 -y
#install on both master and worker node

Master node Configuration-

sudo su
kubeadm init # this command installs all the components (kube-controller-manager, kube-scheduler, kube-apiserver, and etcd) on the master node

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
# the cluster connection details are stored at this location

You should now deploy a pod network to the cluster.

kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml
# master node setup is complete
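
As a quick sanity check (names and counts will vary), you can confirm that the control-plane and Weave pods come up before joining the workers:

kubectl get pods -n kube-system   # the control-plane and weave-net pods should reach the Running state
kubectl get nodes                 # the master should be listed and become Ready once networking is up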

With the help of the command below, any worker node that has the token can join the cluster.

kubeadm token create --print-join-command

It will generate a join command like the one below:

kubeadm join <master-private-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Then you can join any number of worker nodes by running this command on each of them as root.

Before joining, we need to allow the API server port shown in the join command (6443) in the master node's security group so that the workers can reach it.
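
If you prefer the AWS CLI to the console for this step, a hedged sketch is below (the security group ID is a placeholder, and the CIDR assumes the default VPC range; adjust both to your setup):

aws ec2 authorize-security-group-ingress \
  --group-id <master-sg-id> \
  --protocol tcp --port 6443 \
  --cidr 172.31.0.0/16   # allow the worker subnet to reach the Kubernetes API server port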

Now we will go to the worker node.

Worker Node Configuration-

sudo su
kubeadm reset
# reset clears any previous kubeadm state and runs its own pre-flight checks
# don't run the kubeadm init command on the worker node

Now run the token command:

kubeadm join <master-private-ip>:<port> --token <token> --discovery-token-ca-cert-hash sha256:<hash> --v=5

kubeadm join 172.31.23.99:6443 --token 5ekybb.52vik3ulgtl11fpf     --discovery-token-ca-cert-hash sha256:0b8091f5b297b6b9ff9f9bf9ee019869b27f13636afffdd179dddec2642b0d12 --v=5

Now run the command below on the Master Node to verify that the worker has joined:

kubectl get nodes

If you stop the server and start it again, then run the following command:

export KUBECONFIG=/etc/kubernetes/admin.conf
# run this on the master node

Thanks for reading this Article.