Kubernetes Architecture – A Brief Explanation
Kubernetes is an open-source platform originally developed by Google. It manages containerized workloads and applications, and provides features such as automated deployment and rollback, auto-scaling, automatic restart, and auto-replication. Kubernetes is portable and supports major public and private cloud platforms such as OpenStack, Azure, AWS, and GCP.
The following figure shows the Kubernetes architecture.
Kubernetes has one master node, which manages the Kubernetes cluster and provides the following functionalities:
1) Communication (the API Server facilitates communication between the different Kubernetes components)
2) Scheduling (the Scheduler assigns each POD to a specific node)
3) Controllers (the Controller Manager runs background control loops that watch the cluster state and drive it toward the desired state, e.g. the node, replication, and endpoints controllers)
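The control loops run by the Controller Manager can be sketched as a reconciliation function: compare the desired state with the observed state and compute the corrective action. This is a minimal illustration of the pattern; the function and variable names are hypothetical, not part of the Kubernetes API.

```python
# Sketch of the reconciliation pattern behind Kubernetes controllers:
# observe actual state, compare with desired state, act on the difference.

def reconcile(desired_replicas, running_pods):
    """Return how many PODs to create (positive) or delete (negative)."""
    return desired_replicas - len(running_pods)

# Example: the user declared 3 replicas but only 2 PODs are running,
# so the controller must create 1 more POD.
diff = reconcile(3, ["pod-a", "pod-b"])
```

A real controller runs this loop continuously, re-observing the cluster after every action, so the system converges on the declared state rather than executing a one-shot script.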
Some key concepts and components of Kubernetes are as follows:
A POD is the smallest unit of deployment in Kubernetes. A POD can hold multiple containers, and the containers within a single POD share the same resources and local network. The kubelet process is responsible for maintaining POD state and health. Worker nodes can be exposed to the internet via a load balancer; the traffic coming into the nodes is handled by kube-proxy, which is how an end user ends up talking to an application.
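A multi-container POD of this kind is declared in a manifest. Below is a minimal example written as a Python dict for illustration; the field names follow the Kubernetes Pod spec, while the POD name, images, and the sidecar role are hypothetical choices.

```python
# A minimal Pod manifest as a Python dict. The two containers share the
# POD's network namespace, so they can reach each other on localhost.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-with-sidecar"},  # hypothetical name
    "spec": {
        "containers": [
            {"name": "web", "image": "nginx:1.25"},
            {"name": "log-agent", "image": "busybox:1.36"},  # sidecar
        ]
    },
}
```

In practice the same structure would be written in YAML and submitted to the API server, which stores it and schedules the POD onto a worker node.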
“etcd” is a distributed key-value store for cluster data, i.e. state information, job scheduling details, POD details, etc.
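The way cluster state is keyed in such a store can be illustrated with a toy in-memory key-value map. This is a sketch only, assuming hierarchical slash-separated keys like those Kubernetes uses; the class and key names are illustrative, not the real etcd API.

```python
# Toy key-value store illustrating how cluster state might be organized:
# hierarchical keys, with prefix reads to list related objects.
class TinyStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def list_prefix(self, prefix):
        """Return every entry under a key prefix (like an etcd range read)."""
        return {k: v for k, v in self._data.items() if k.startswith(prefix)}

store = TinyStore()
store.put("/registry/pods/default/web", {"phase": "Running"})
store.put("/registry/pods/default/db", {"phase": "Pending"})
pods = store.list_prefix("/registry/pods/")
```

Prefix reads are what let components ask questions like "give me all PODs in this namespace" without scanning the whole store.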
On a worker node, kube-proxy functions as a load balancer and network proxy.
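The load-balancing idea can be sketched as follows: a service presents one stable address, and incoming connections are spread across the backing POD endpoints. This is a conceptual sketch in Python; the class name and endpoint addresses are made up, and the real kube-proxy implements this with iptables or IPVS rules rather than application code.

```python
import itertools

# Conceptual round-robin load balancer: each new connection is handed
# to the next POD endpoint in turn.
class RoundRobinService:
    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)

    def pick_endpoint(self):
        """Choose the backend POD for the next incoming connection."""
        return next(self._cycle)

svc = RoundRobinService(["10.1.0.4:8080", "10.1.0.7:8080"])
targets = [svc.pick_endpoint() for _ in range(4)]
# targets alternates between the two POD endpoints
```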
The Docker daemon (the container runtime) runs the application containers that make up a POD.
Worker nodes are the physical or virtual machines where the applications run. Worker nodes communicate with the master node through the kubelet process. The kubelet agent talks to the API server to confirm whether a POD has been assigned to its node. The containers of a POD are executed by the container engine. The kubelet maintains POD and node state and reports back to the master node.
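The kubelet's reporting role described above can be sketched as a function that summarizes the health of the PODs on its node for the master. The function name, field names, and node/POD names below are all hypothetical; the real kubelet reports a much richer status object to the API server.

```python
# Sketch of a kubelet-style status report: check the PODs assigned to
# this node and summarize their health for the master node.
def node_status(node_name, pods):
    """Build a status summary from per-POD health checks."""
    return {
        "node": node_name,
        "ready": all(p["healthy"] for p in pods),
        "pods": {p["name"]: ("Running" if p["healthy"] else "Failed")
                 for p in pods},
    }

status = node_status("worker-1", [
    {"name": "web", "healthy": True},
    {"name": "db", "healthy": False},
])
```

In the real system this flows the other way too: the API server persists the reported state in etcd, where the controllers pick it up and reconcile.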