Container orchestration using Kubernetes

Container orchestration is the automated arrangement, coordination, and management of software containers.

What is container orchestration used for?
Container orchestration is used to manage:
  • Deployment and provisioning

  • Scheduling and configuration

  • Resource allocation

  • Balancing workloads

  • Traffic routing

  • Container health management

  • Configuring applications

  • Keeping interactions between containers secure
If your current software infrastructure looks like this (say, an Apache web server plus a PHP/Python/Ruby/Node.js application running in a few containers that connect to a replicated database), you probably do not need container orchestration; you can manage everything yourself.
But what if your application keeps growing and you keep adding features? It becomes a vast monolith that is difficult to maintain and consumes too much CPU and RAM. Eventually you decide to split the application into smaller chunks, each responsible for one specific task: microservices.

The refined infrastructure now looks something like this:

You will now need a caching layer or a queuing system to increase performance, process tasks asynchronously, and share data between the services. You may also want to run multiple instances of each microservice across several servers to make the application highly available in production.

You now have to think about challenges like:
  • Service Discovery

  • Load Balancing

  • Secrets/configuration/storage management

  • Health checks

  • Auto-scaling, auto-restart, and auto-healing of containers and nodes

  • Zero-downtime deploys
Container orchestration platforms are powerful because they provide solutions for most of these challenges.

So what choices do we have? Today, the leading players are Kubernetes, AWS ECS, and Docker Swarm. Kubernetes has the largest community of users and is the most popular by a significant margin.

Kubernetes is an open-source platform for automating the deployment, scaling, and operation of application containers across clusters of hosts, providing container-centric infrastructure.
It addresses all the challenges mentioned above: it runs well on most cloud providers, on bare metal, in hybrid setups, or on a combination of all of the above. It is easily configurable and modular, and it handles auto-placement, auto-restart, auto-replication, and auto-healing of containers exceptionally well.
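
Auto-replication and auto-healing, for example, are expressed declaratively. Here is a minimal sketch of a Deployment (the name "web" and the nginx image are illustrative, not from this post): Kubernetes keeps the requested number of replicas running and restarts any container that fails its liveness probe.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # auto-replication: keep three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25      # illustrative image
        ports:
        - containerPort: 80
        livenessProbe:         # auto-restart: recreate containers that stop responding
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
```

If a pod (or the node it runs on) dies, the control plane notices the gap between desired and actual state and schedules a replacement automatically.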

Main components of Kubernetes:
  • Cluster: A control plane and one or more computing machines, or nodes.

  • Control plane: The collection of processes that control Kubernetes nodes. This is where all task assignments originate.

  • Kubelet: This service runs on nodes and reads the container manifests and ensures the defined containers are started and running.

  • Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname, and other resources.
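
To make the Pod concept concrete, here is a minimal sketch of a two-container pod (the names and images are illustrative). Because both containers share the pod's network namespace, the app container can reach the cache over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-cache
spec:
  containers:
  - name: app
    image: nginx:1.25    # illustrative main container
    ports:
    - containerPort: 80
  - name: cache
    image: redis:7       # illustrative sidecar, reachable from "app" at localhost:6379
```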

These are some of the main reasons we use Kubernetes for building, scaling, deploying, and managing enterprise applications. Above all, it lets us significantly reduce infrastructure costs by running multiple applications on a single OS and server. It intelligently packs applications into containers based on the available resources and automatically scales them to match business needs, driving maximum productivity out of limited resources.
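
That automatic scaling is typically driven by a HorizontalPodAutoscaler. A minimal sketch, assuming a Deployment named "web" exists (both the name and the thresholds here are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # illustrative target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80     # add pods when average CPU exceeds 80%
```

Kubernetes then adds or removes pod replicas between the min and max bounds as load changes, with no manual intervention.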
