What Is Kubernetes: Container Orchestration & Automation
Kubernetes is an open-source platform for managing containers. You don’t create containers with Kubernetes itself. Instead, you build the containers with another tool (such as Docker) and then use Kubernetes to orchestrate them.
Developed at Google, the portable, extensible Kubernetes platform gives you greater control over containerized workloads and services while requiring less manual oversight. Automation is the core functionality of Kubernetes: it can start, scale, and shut down workloads and services as needed. As such, Kubernetes is the overseer for many containerized web apps and microservices.
Developers use and recommend Kubernetes when they need to build a highly available system with zero downtime and many users. Other competitors in the container/workload orchestration niche include Docker Swarm, Amazon ECS, and Apache Mesos. However, Kubernetes remains the most dominant platform of the bunch for its history, pedigree, portability, and platform agnosticism.
History
In the early 2000s, Google first began working on the issues of cluster management for its thousands of applications. The problem was that hundreds of thousands of jobs needed access to computing resources across tens of thousands of machines. Google’s original solution, known as the Borg System, launched with the 2004 redesign of Google search. Borg made Google’s task packing more efficient while also reducing fault-recovery times.
Over the course of a decade, engineers at Google improved and iterated on Borg’s functionality until it could automate the management and scaling of all Google’s online applications. In mid-2014, Google announced they’d release Kubernetes as an open source version of Borg. By July 2015, Kubernetes v1.0 was ready for release.
Along with the release, Google and the Linux Foundation partnered to create the Cloud Native Computing Foundation (CNCF) to lead the emerging community of companies and developers working with containers and microservices. Upon open sourcing the platform, Google handed over maintenance of Kubernetes to the CNCF.
Since its release, Kubernetes has proven widely successful and is home to a large, rapidly growing ecosystem of resources. In fact, as of 2018, the Kubernetes project reached ninth place in commits on GitHub, and second place in authors and issues, placing it just below the Linux project in community contributions.
What Kubernetes Does
Kubernetes comes from the Greek word κυβερνήτης, meaning “governor,” “helmsman,” or “captain.” The aptly named platform does just that: it governs and directs resource usage in a containerized application.
Kubernetes receives incoming workloads from containers and orchestrates the computing, networking, and storage infrastructure necessary for those workloads to run. If you have a physical machine or group of machines that your application uses, you can use Kubernetes to manage those machines’ resources. More often, however, Kubernetes manages virtual machines and cloud resources, scaling storage, memory, and bandwidth requirements in response to changes in loads.
The nice thing is that Kubernetes isn’t tied to any other project; it’s infrastructure and container agnostic. Docker Swarm is built to work with Docker containers, and Amazon ECS is built for AWS deployments. Kubernetes, by contrast, works across infrastructure providers and with any container runtime that supports Open Container Initiative (OCI) standards. This combination of simplicity and flexibility is largely responsible for Kubernetes’ popularity over other platforms.
How Kubernetes Works
Typically, when we talk about governing resource usage, we’re talking about hardware-level platforms that scale access to new resources. However, Kubernetes is not a traditional platform-as-a-service system. Instead, it addresses the problem of resource management from the other side, at the container level.
On the infrastructure level, Kubernetes connects with a cluster of machines. Those machines can be physical, virtual, cloud-based, or a mix of all three. Kubernetes’ breakthrough is how it connects containerized applications to the machines that run them.
- Kubernetes connects with a cluster of machines
- A Control Plane forms the backbone of the Kubernetes platform between the machines and the containerized application
- Workloads get scheduled onto the machine cluster via an API server on the Control Plane
- In addition, the Control Plane includes a Controller Manager that can create, update, or delete resources as needed
- The workloads themselves run on Nodes, the worker machines in the cluster, which report their state back to the Control Plane
- Within each Node are one or more Pods, where containers run with their applications and libraries
So, each container in your application becomes encapsulated in a Pod. That Pod runs on a Node, alongside any other Pods scheduled to the same machine. When you submit a workload, the API server on the Control Plane records it, and the Scheduler assigns each Pod to a suitable Node for processing.
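The Pod described above can be expressed declaratively in a manifest. Here is a minimal sketch (the names are illustrative, and the nginx image stands in for any OCI-compliant container) of a single-container Pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.25    # any OCI-compliant image works here
      ports:
        - containerPort: 80
```

Submitting this manifest to the API server is what hands the workload to the Control Plane; the Scheduler then places the Pod on a Node with available resources.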
Kubernetes is concerned with creating, managing, and terminating the Pods and Nodes that make up your application as loads require. It creates an abstraction layer that allows your application to be a persistent entity, even as the Pods that run the containers are constantly changing. As soon as your application needs access to a container, Kubernetes creates a Pod for it. However, when that container isn’t actively needed, Kubernetes shuts down the Pod to minimize idle resource usage.
Kubernetes is a very powerful tool, but it’s also known for its steep learning curve. Setting up your first Kubernetes installation is sure to be a massive learning experience. Luckily, there’s a large user community to help you along the way.
Additionally, for each application you run using containers and Kubernetes, you’ll need to create a map — in practice, a set of declarative configuration files (manifests) — that tells Kubernetes how the parts of the application relate to one another. For many common application types, these maps already exist, and you can use an open-source one to get up and running quickly. However, if you have a complex or novel application architecture, you’ll need to set aside some time to map it out and determine the best way for Kubernetes to orchestrate that app’s containers.
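Such a map most often takes the form of a Deployment manifest. The sketch below (the app name and image registry are hypothetical) tells Kubernetes to keep three identical replicas of a container running at all times:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # illustrative name
spec:
  replicas: 3                  # Kubernetes maintains three running copies
  selector:
    matchLabels:
      app: my-app              # ties the Deployment to its Pods
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # hypothetical image
          ports:
            - containerPort: 8080
```

If a Pod crashes or a Node fails, Kubernetes notices that fewer than three replicas exist and creates replacements automatically.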
What You Can Do with Kubernetes
The good news is once you’ve invested the up-front time telling Kubernetes how to compose an application, it can handle the nitty-gritty of actually rolling out the containers and keeping them in sync without you needing to do anything.
Kubernetes Labels allow developers to organize and categorize their resources however they want, while Annotations let you attach custom, non-identifying information fields that make workflows easier to run. Together, they provide a simple way for the processes and tools within Kubernetes to checkpoint state and keep everything in the application synchronized.
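Both live in a resource’s metadata. A sketch with illustrative keys and values:

```yaml
metadata:
  name: my-app
  labels:                        # queried by selectors to group resources
    app: my-app
    tier: frontend
    environment: production
  annotations:                   # free-form metadata for tools and humans
    deployed-by: ci-pipeline     # hypothetical keys; Kubernetes ignores them
    git-commit: "abc1234"        # but external tooling can read them
```

The distinction is that labels are meant for selection (e.g., “route traffic to every Pod labeled `app: my-app`”), while annotations are purely informational.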
Once that application is live, Kubernetes handles everything you need to do to scale that app as well. As long as you have access to the necessary resources, Kubernetes can make decisions about load balancing for an application of any size.
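As a sketch of both ideas (resource names and thresholds are illustrative): a Service load-balances traffic across all Pods matching its selector, and a HorizontalPodAutoscaler adjusts the replica count as CPU load changes:

```yaml
# Load-balances requests across every Pod labeled app: my-app
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 80          # port the Service exposes
      targetPort: 8080  # port the containers listen on
---
# Scales the my-app Deployment between 2 and 10 replicas
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU exceeds 70%
```

Because the Service selects Pods by label rather than by name, newly created replicas start receiving traffic the moment they become ready.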
Continuous integration and deployment are also easy with Kubernetes. One of the big advantages of a container-based application is zero downtime when pushing a new version to production. Kubernetes makes it even easier to release new versions and features.
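The zero-downtime behavior comes from the Deployment’s rolling-update strategy. A sketch (the Deployment and image names are illustrative) that never drops below full capacity during a rollout:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never remove a ready Pod before its replacement is up
      maxSurge: 1         # allow one extra Pod during the rollout
```

With a strategy like this, updating the image (for example, `kubectl set image deployment/my-app my-app=registry.example.com/my-app:1.1`) replaces Pods one at a time, terminating each old Pod only after its replacement reports ready.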
Benefits of Kubernetes
Kubernetes makes containerized applications incredibly easy to deploy, scale, and manage. The initial challenges of setup pay off down the line as Kubernetes handles all the intricacies of processing, storage, and bandwidth for your application from many containers across many machines in a cluster. For anyone working on highly available web applications, especially from an optimization or DevOps perspective, knowledge of Kubernetes is key.
About Intertech
Founded in 1991, Intertech delivers technology training and software development consulting to Fortune 500, Government and Leading Technology institutions. Learn more about us. Whether you are a developer interested in working for a company that invests in its employees or a company looking to partner with a team of technology leaders who provide solutions, mentor staff and add true business value, we’d like to meet you.
Originally published at Intertech Blog.