KUBERNETES - WHY, WHAT AND HOW?
The webinar hosted by Aritha on the topic of “Kubernetes – Why, What and How?” was presented by Rajith Noojibail, an engineering leader who has led large software development teams building and deploying software with a microservices architecture. Here is a brief summary of the insightful session.
Evolution of Servers
The session began with the story of the containerisation journey, tracing the evolution of servers through several stages.
Starting with the age of physical servers, provisioning used to take a long time. Typically, huge monolithic applications were installed, involving months of work where engineers would log into the server to start, configure and restart it. These were mutable servers: not agile and not highly available, which meant other physical servers had to run in parallel for disaster recovery. Resource utilisation was not effective, deployments were manual, procurement and maintenance were expensive, and by no stretch were they suitable for small companies. These were the servers of the past, where one application ran on each physical machine! In the speaker’s analogy, they were much like dinosaurs!
Then came the era of virtual servers, where smaller units of OS were hosted on a single server using Virtual Machines (VMs). Like pets, you provision them, name them and keep them with you. Running multiple virtual machines, and therefore multiple smaller applications, on one physical server was the most transformational feature. You could deploy your apps once or twice a month, which made this a little more agile than before, and the system was highly available, though still high on operations. These virtualised machines had controlled utilisation of the machine, had automated deployments, and were faster and less prone to failures. The servers were less expensive, as the hardware was shared with other services, and easier to scale by adding VMs. One application ran on every VM, so more applications ran on each physical machine.
We are now in the era of containerisation! Servers are not named anymore but numbered, much like cattle in a cattle-yard, as there are many of them, with 10, 20 or more containers running on a single machine. Smaller apps are replaced with microservices, very tiny apps where business logic is broken into smaller chunks that can be dismantled or altered as required. Mutable infrastructure is turning immutable: you don’t install anything, everything is prepackaged, and if you want to replace an old package you delete it and a new one comes up. This is extremely agile, as it enables parallel development and hundreds of deployments in a day! It is highly available, low on operations and fully utilised. With automated deployments and pipelines, applications can be destroyed and restored without disrupting traffic to the application. Containerised servers are also inexpensive: you can scale your containers easily with auto-scaling, and one application runs on each container.
What is Kubernetes?
It is an open-source system for automating deployment, scaling and management of containerised applications.
- Kubernetes can be run anywhere, on public or private cloud, on premises or on your laptop
- The containerised feature means it is light, standalone and packaged to run everything from development to production
- Automating refers to the orchestration and declarative abilities with no interventions required after setting up once. You can declare the qualities in a configuration file, defining the container software, and the scale at which it is required to run. Kubernetes orchestrates and gets to that state to satisfy what you have declared
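As a minimal, hypothetical sketch of the declarative ability described above, a Deployment manifest might declare the container software and the scale at which it should run (the app name, image and replica count here are illustrative, not from the session):

```yaml
# Illustrative Deployment manifest: declares a desired state, and
# Kubernetes orchestrates to reach and maintain it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app            # hypothetical app name
spec:
  replicas: 3                # declared scale; Kubernetes keeps 3 pods running
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: example/hello-app:1.0   # hypothetical container image
          ports:
            - containerPort: 8080
```

Once applied, no further intervention is required: if a pod dies, Kubernetes replaces it to satisfy the declared state.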
Components of Kubernetes
A brief explanation of the different Kubernetes components under the control plane and worker nodes was provided in the session. Rajith detailed concepts like Pods, Namespaces, Workloads (Deployments, ReplicaSets, Jobs, DaemonSets, StatefulSets, CronJobs), Custom Resource Definitions, Services, Ingress and Ingress controllers, Persistent Volumes, Persistent Volume Claims, Storage Classes, ConfigMaps and Secrets.
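To make two of these components concrete, here is a hypothetical sketch of a ConfigMap and a Secret (the names and values are illustrative, not from the session):

```yaml
# Illustrative ConfigMap: plain, non-sensitive configuration for an app.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
---
# Illustrative Secret: sensitive values, stored base64-encoded by the
# API server (not encrypted by default).
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: "changeme"    # hypothetical value
```

Both can be mounted into pods as files or exposed as environment variables, keeping configuration out of the container image.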
He explained how traffic gets routed to the cluster. An app runs as pods, with a Deployment holding the pods together; a Service then acts as a network-level interface that routes traffic to the pods. The ways to expose the app to the outside world through an Ingress with load balancers were also explained.
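The routing chain described above can be sketched with two hypothetical manifests, a Service selecting the pods and an Ingress exposing that Service externally (the names, hostname and ports are illustrative):

```yaml
# Illustrative Service: a stable network interface that routes traffic
# to pods labelled app: hello-app.
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello-app
  ports:
    - port: 80               # Service port
      targetPort: 8080       # container port on the pods
---
# Illustrative Ingress: exposes the Service to the outside world;
# an ingress controller (typically fronted by a load balancer) serves it.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
    - host: hello.example.com        # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-service
                port:
                  number: 80
```

External traffic thus flows load balancer → ingress controller → Service → pods.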
How to get started with Kubernetes?
EKS on AWS, GKE on Google Cloud Platform and AKS on Azure are some popular options available if you want to use the cloud. This would typically cost about $900 a year to start, which could be high for small apps. If you don’t want to use the cloud, you can use kubeadm to run Kubernetes on your own machines; it needs at least three master nodes to keep the control plane highly available.
Six Steps for a quick start in Kubernetes
- Spin up a cluster – which can take about 15 minutes
- Add worker nodes
- Dockerise your app
- Create yaml files for the application
- Apply yaml files
- Configure DNS records to point to load balancer
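The six steps above might look like the following command sketch, assuming an EKS cluster created with eksctl and a Dockerfile in the current directory (the cluster name, image name and file names are hypothetical):

```shell
# Steps 1-2: spin up a cluster with worker nodes (can take ~15 minutes)
eksctl create cluster --name demo-cluster --nodes 2

# Step 3: dockerise the app
docker build -t example/hello-app:1.0 .

# Steps 4-5: create and apply the yaml files for the application
kubectl apply -f deployment.yaml -f service.yaml -f ingress.yaml

# Step 6: read the load balancer address off the ingress and point
# your DNS records at it
kubectl get ingress hello-ingress
```

This is a sketch, not a complete recipe: in practice the image would also be pushed to a registry the cluster can pull from.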
Additional Tools in Kubernetes
Some additional tools from the Kubernetes ecosystem that are popularly used in different parts of the lifecycle were covered, like Helm, Argo, Prometheus, Kustomize, Minikube, kube-state-metrics, Dashboard and Terraform.
In the last part of the webinar, Rajith presented a demo of running a service on a local machine using Kubernetes.
Some interesting questions answered during the session:
- Should DevOps follow Kanban way of creating infrastructure?
- Any use case of using Kubernetes vs other container platforms, like Docker or Red Hat OpenShift?
- How well is Kubernetes incubated on Azure and GCP?
- What is the difference between deployments and replicasets?