Talk containers with an IT pro for more than a minute and the conversation will inevitably turn to container management and orchestration.
It might be easy to deploy a container, but operationalizing containers at scale — especially in concert with microservices and multiple cloud providers — is not for weekend enthusiasts. It requires planning, and most experts say an orchestration tool is a must.
[ Kubernetes terminology, demystified: Get our Kubernetes glossary cheat sheet for IT and business leaders. ]
That’s the point at which the conversation will likely turn to Kubernetes. The platform was first developed by a team at Google and later donated to the Cloud Native Computing Foundation (CNCF). It’s not the only option for container management, but it has rapidly become one of the most popular. As Opensource.com notes, "Today, Kubernetes is a true open source community, with engineers from Google, Red Hat, and many other companies actively contributing to the project."
In fact, it’s one of the highest-velocity projects in open source history. That means you’ll be having even more conversations about Kubernetes going forward, with not only IT pros, but also non-technical folks with a stake in the company’s software. Which is to say: Pretty much everyone.
The most recent version of Kubernetes, 1.19, was released in August 2020. (This frequently updated project has recently had releases on a roughly quarterly basis.)
[ Get the free eBook: O'Reilly: Kubernetes Operators: Automating the Container Orchestration Platform. ]
What does Kubernetes mean?
How do you explain Kubernetes and orchestration in plain terms that people can at least begin to understand? And where did this unusual name come from? The agreed-upon origin is from the Greek, meaning “helmsman” or “sailing master.”
Here’s how Red Hat technology evangelist Gordon Haff explains Kubernetes in his book, “From Pots and Vats to Programs and Apps,” co-authored with Red Hat cloud strategist William Henry:
“Kubernetes, or k8s, is an open source platform that automates Linux container operations. It eliminates many of the manual processes involved in deploying and scaling containerized applications,” Haff and Henry write. “In other words, you can cluster together groups of hosts running Linux containers, and Kubernetes helps you easily and efficiently manage those clusters.”
Why are IT pros deploying more containers in the first place? Deployment speed, workload portability, and a good fit with the DevOps way of working, for starters. Containers can greatly simplify provisioning of resources to time-pressed developers. “Once organizations understand the benefits of containers and Kubernetes for DevOps, application development, and delivery, it opens up so many possibilities, from modernizing traditional applications, to hybrid- and multi-cloud implementations and the development of new, cloud-native applications with speed and agility,” says Ashesh Badani, SVP and general manager for cloud platforms at Red Hat.
[ Want to learn more about Kubernetes APIs and migrating to Kubernetes? Get hands-on tips and instructions: Migrating to Kubernetes. ]
Here's an analogy: You can think of a container orchestrator (like Kubernetes) as you would a conductor for an orchestra, says Dave Egts, chief technologist, North America Public Sector, Red Hat. “In the same way a conductor would say how many trumpets are needed, which ones play first trumpet, and how loud each should play," Egts explains, "a container orchestrator would say how many web server front end containers are needed, what they serve, and how many resources are to be dedicated to each one."
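To make the conductor analogy concrete, here is a minimal sketch using kubectl, Kubernetes' command-line tool; the deployment name, image, and resource values are illustrative only, not a recommendation:
kubectl create deployment web --image=nginx   # define the web server front-end workload
kubectl scale deployment web --replicas=3     # "how many trumpets": run three copies
kubectl set resources deployment web --requests=cpu=100m,memory=128Mi   # "how loud": resources per container
With those three commands, the orchestrator takes over responsibility for keeping three copies running with the requested resources.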
[ Want a shareable primer on orchestration? Get the PDF: How to explain orchestration in plain English. ]
Kubernetes defined
We put a variety of experts to a similar task: Explain Kubernetes in plain terms that a wide audience can grasp in at least a basic sense. Here’s what they had to say. (We particularly liked the lunchbox analogy: Well done.)
Mike Kail, CTO and cofounder at CYBRIC: “Let’s say an application environment is your old-school lunchbox. The contents of the lunchbox were all assembled well before putting them into the lunchbox [but] there was no isolation between any of those contents. The Kubernetes system provides a lunchbox that allows for just-in-time expansion of the contents (scaling) and full isolation between every unique item in the lunchbox and the ability to remove any item without affecting any of the other contents (immutability).”
Gordon Haff, technology evangelist, Red Hat: “Whether we’re talking about a single computer or a datacenter full of them, if every software component just selfishly did its own thing there would be chaos. Linux and other operating systems provide the foundation for corralling all this activity. Container orchestration builds on Linux to provide an additional level of coordination that combines individual containers into a cohesive whole.”
Nick Young, principal engineer at Atlassian: “Kubernetes is an orchestration layer that allows users to more effectively run workloads using containers — from keeping long-running services ‘always on’ to more efficiently managing intensive shorter-term tasks like builds.”
Kimoon Kim, senior architect at Pepperdata: “Kubernetes is software that manages many server computers and runs a large number of programs across those computers. On Kubernetes, all programs run in containers so that they can be isolated from each other, and be easy to develop and deploy.”
Dan Kohn, executive director of the CNCF, in a podcast with Gordon Haff: “Containerization is this trend that’s taking over the world to allow people to run all kinds of different applications in a variety of different environments. When they do that, they need an orchestration solution in order to keep track of all of those containers and schedule them and orchestrate them. Kubernetes is an increasingly popular way to do that.”
Nic Grange, CTO at Retriever Communications: “Kubernetes is the new operating system for the cloud. It helps you run software in a modern cloud environment by leveraging Google’s extensive experience of running software at scale.” The decision to give Kubernetes to CNCF “has allowed it to be widely adopted by many companies. As a result, it has grown to become a powerful and flexible tool that can be run on a variety of cloud platforms and on-premises,” he says.
The bottom line: Kubernetes is a tool that helps IT make the potential of containers an operational reality.
[ What does Docker have to do with Kubernetes? Read also: What is Docker? ]
What Kubernetes does and why people use it
Kubernetes is an important piece of the cloud-native puzzle, but it’s worth understanding that its broader ecosystem provides even more value to IT organizations. As Red Hat’s Haff notes, “The power of the open source cloud-native ecosystem comes only in part from individual projects such as Kubernetes. It derives, perhaps even more, from the breadth of complementary projects that come together to create a true cloud-native platform.”
This includes service meshes like Istio, monitoring tools like Prometheus, command-line tools like Podman, distributed tracing from the likes of Jaeger and Kiali, enterprise registries like Quay, and inspection utilities like Skopeo, says Haff. And, of course, Linux, which is the foundation for the containers orchestrated by Kubernetes.
Choosing from and integrating a variety of tools yourself takes time, of course, which is one place where enterprise open source platforms such as Red Hat OpenShift come into play.
[ Read also: OpenShift and Kubernetes: What’s the difference? ]
Kubernetes eases the burden of configuring, deploying, managing, and monitoring even the largest-scale containerized applications. It also helps IT pros manage container lifecycles and related application lifecycles, and issues including high availability and load balancing.
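For instance, assuming a deployment named web already exists (as in the earlier sketch), a few illustrative kubectl commands cover rollout management, rollback, and scaling:
kubectl rollout status deployment web     # watch a new version roll out to completion
kubectl rollout undo deployment web       # roll back if the new version misbehaves
kubectl autoscale deployment web --min=2 --max=5 --cpu-percent=80   # keep 2-5 replicas based on CPU load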
In many organizations, the first step toward Kubernetes adoption might be best described as "Oh, we can use Kubernetes for this!" That means, for example, that a team running a growing number of containers in production might quickly see the need for orchestration to manage it all.
Josh Komoroske, senior DevOps engineer at StackRox, expects another adoption trend to grow in the near future: "We can build this for Kubernetes!" It’s the software equivalent of a cart-and-horse situation: Instead of having an after-the-fact revelation that Kubernetes would be a good fit for managing a particular service, more organizations will develop software specifically with Kubernetes in mind. Some people will call this "Kubernetes-native" software.
Kubernetes terms defined: Operators, secrets, kubectl, minikube, and more
Once you start learning about Kubernetes, it helps to understand the key terms. Let's dig in:
What is a Kubernetes operator?
The conventional wisdom of Kubernetes’ earlier days was that it was very good at managing stateless apps. But for stateful applications such as databases, it wasn’t such an open-and-shut case: These apps required more hand-holding, says Jeremy Thompson, CTO at Solodev.
“Adding or removing instances may require preparation and/or post-provisioning steps – for instance, changes to its internal configuration, communication with a clustering mechanism, interaction with external systems like DNS, and so forth,” Thompson explains. “Historically, this often required manual intervention, increasing the DevOps burden and increasing the likelihood of error. Perhaps most importantly, it obviates one of Kubernetes’ main selling points: automation.”
That’s a big problem. Fortunately, the solution emerged back in 2016, when CoreOS introduced Operators to extend Kubernetes’ capabilities to stateful applications. (Red Hat acquired CoreOS in January 2018, expanding the capabilities of the OpenShift container platform.)
Operators became even more powerful with the launch of the Operator Framework for building and managing Kubernetes native applications (Operators by another name) in March 2018.
“Operators are clients of the Kubernetes API that control custom resources,” says Matthew Dresden, director of DevOps at Nexient. “This capability enables automation of tasks like deployments, backups, and upgrades by watching events without editing Kubernetes code.”
As Red Hat product manager Rob Szumski notes in a blog, “The key attribute of an Operator is the active, ongoing management of the application, including failover, backups, upgrades, and autoscaling, just like a cloud service. Of course, if your app doesn’t store stateful data, a backup might not be applicable to you, but log processing or alerting might be important. The important user experience that the Operator model aims for is getting that cloud-like, self-managing experience with knowledge baked in from the experts.”
If you can’t fully automate, you’re undermining the potential of containers and other cloud-native technologies.
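To get a concrete feel for the pattern, here is a minimal sketch of a hypothetical custom resource that a database Operator might watch; the API group, kind, and fields are purely illustrative, and applying it only works on a cluster where such an Operator and its CRD are actually installed:
kubectl get crds                   # Operators register custom resource definitions (CRDs) with the cluster
cat <<EOF | kubectl apply -f -     # declare the desired state; the Operator reconciles everything else
apiVersion: example.com/v1alpha1   # hypothetical API group and version
kind: DatabaseCluster              # hypothetical kind managed by a database Operator
metadata:
  name: my-db
spec:
  replicas: 3                      # the Operator handles failover, backups, and upgrades behind the scenes
EOF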
Want to find or share operators? Meet OperatorHub.io
There’s been a noticeable bump in the interest in and implementation of Operators of late, according to Liz Rice, VP of open source engineering at Aqua Security. Rice also chairs the Cloud Native Computing Foundation’s technical oversight committee.
“At the CNCF, we’re seeing interest in projects related to managing and discovering Kubernetes Operators, as well as observing an explosion in the number of Operators being implemented,” Rice says. “Project maintainers and vendors are building Operators to make it easier for people to use their projects or products within a Kubernetes deployment.”
This growing menu of Operators means there’s a need for a, well, menu. “This proliferation of Operators has created a gap for directories or discovery mechanisms to help people find and easily install what’s available,” Rice says.
The relatively new OperatorHub.io is one place where Kubernetes community members can find existing Operators or share their own. (Red Hat launched OperatorHub.io in conjunction with Amazon, Microsoft, and Google.)
[ Related read: What is an Ansible Operator? ]
What is a Kubernetes secret?
A Kubernetes secret is a cleverly named Kubernetes object that is one of the container orchestration platform’s built-in security capabilities. A “secret” in Kubernetes is a means of storing sensitive information, like an OAuth token or SSH key, so that it’s accessible when necessary to pods in your cluster but protected from unnecessary visibility that could create security risks.
As the Kubernetes documentation notes, “Putting this information in a Secret is safer and more flexible than putting it verbatim in a Pod definition or in a container image.”
Secrets can be thought of as a relative of the least privilege principle: instead of limiting individual users’ access to only what they actually need to get their work done, the focus is on giving your applications the data they need to function properly without giving them (and the people who manage them) unfettered access to that data.
Put another way, Secrets help fulfill a technical requirement while solving a problem that arises from that requirement: Your containerized applications need certain data or credentials to run properly, but how you store that data and make it available is the kind of thing that keeps security analysts up at night.
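As a minimal sketch (the secret name, keys, and values below are made up for illustration), you could create and inspect a secret with kubectl:
kubectl create secret generic db-credentials --from-literal=username=appuser --from-literal=password='s3cr3t'
kubectl get secret db-credentials -o yaml    # values are stored base64-encoded in the cluster
kubectl describe secret db-credentials       # lists the keys but hides the values
A pod that needs those credentials can then mount the secret as a volume or reference it in environment variables, rather than baking the values into the container image.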
What is a Kubernetes cluster?
You can begin to understand this major piece literally: A cluster is a group or bunch of nodes that run your containerized applications. You manage the cluster and everything it includes – in other words, you manage your application(s) – with Kubernetes.
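A quick way to see this for yourself on any cluster you can reach with kubectl:
kubectl cluster-info    # where the cluster's control plane (API server) is running
kubectl get nodes       # the machines that make up the cluster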
What is a Kubernetes pod?
This is essentially the smallest deployable unit of the Kubernetes ecosystem; more accurately, it’s the smallest object. A pod specifically represents a group of one or more containers running together on your cluster.
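Here is a minimal, illustrative pod definition applied with kubectl; the names are placeholders:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod        # illustrative name
spec:
  containers:            # a pod wraps one or more containers
  - name: web
    image: nginx
EOF
kubectl get pods         # the new pod shows up on the cluster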
What is a Kubernetes node?
Nodes are the physical or virtual machines in your cluster; these “worker” machines have everything necessary to run your application containers, including the container runtime and other critical services. (The Kubernetes GitHub repository has a good, detailed breakdown of the Kubernetes node.)
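To see the worker machines behind a cluster (replace <node-name> with a real node from the first command):
kubectl get nodes -o wide           # lists nodes along with OS, container runtime, and IP details
kubectl describe node <node-name>   # capacity, conditions, and the pods scheduled on that node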
What is kubectl?
Simply put, kubectl is a command-line interface (CLI) for managing operations on your Kubernetes clusters. It does so by communicating with the Kubernetes API. (It’s not a typo: the official Kubernetes style is to lowercase the k in kubectl.) It follows a standard syntax for running commands: kubectl [command] [TYPE] [NAME] [flags]. You can find an in-depth explanation of kubectl here, as well as examples of common operations, but here’s a basic example of an operation: “run.” This command runs a particular container image on your cluster.
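For example, mapping that syntax onto a few everyday commands (the pod name here is illustrative):
kubectl run nginx --image=nginx   # [command]=run, [NAME]=nginx, [flags]=--image; runs the nginx image in a pod
kubectl get pods                  # [command]=get, [TYPE]=pods
kubectl describe pod nginx        # [command]=describe, [TYPE]=pod, [NAME]=nginx
kubectl delete pod nginx          # clean up when you're done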
What is a Kubernetes service?
A Kubernetes service is "an abstract way to expose an application running on a set of pods as a network service," as the Kubernetes documentation puts it. "Kubernetes gives pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them."
But pods sometimes have a short lifespan. As pods come and go, services help the other pods "find out and keep track of which IP address to connect to."
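A minimal sketch of putting a service in front of a set of pods (the names are illustrative):
kubectl create deployment frontend --image=nginx   # a set of pods running the same app
kubectl scale deployment frontend --replicas=2
kubectl expose deployment frontend --port=80       # creates a service with a stable IP and DNS name
kubectl get service frontend                       # shows the cluster IP that fronts the pods
Other pods can now reach the application by the service's name, regardless of which individual pods come and go behind it.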
What is Minikube?
Minikube is an open source tool that enables you to run Kubernetes on your laptop or other local machine. It can work with Linux, Mac, and Windows operating systems. It runs a single-node cluster inside a virtual machine on your local machine.
In other words, Minikube takes the vast cloud-scale of Kubernetes and shrinks it down so that it fits on even your laptop. Don’t mistake that for a lack of power or functionality, though: You can do plenty with Minikube. And while developers, DevOps engineers, and the like might be the most likely to run it on a regular basis, IT leaders and the C-suite can use it, too. That’s part of the beauty.
“With just a few installation commands, anyone can have a fully functioning Kubernetes cluster, ready for learning or supporting development efforts,” says Chris Ciborowski, CEO and cofounder at Nebulaworks.
The official Kubernetes documentation includes instructions for installing Minikube – note that you’ll also need to install kubectl, the native command-line interface for Kubernetes. It also offers a quickstart guide for getting up and running.
Pro tip if you’re using a RHEL/Fedora/CentOS workstation: Over at Opensource.com, Bryant Son wrote a great guide on getting started with Minikube tailored specifically for you.
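Once Minikube and kubectl are installed, the basic loop looks like this:
minikube start       # creates and starts the local single-node cluster
kubectl get nodes    # a single node, named "minikube", backs the whole cluster
minikube dashboard   # optional: opens the Kubernetes dashboard in your browser
minikube stop        # shuts the local cluster down; minikube delete removes it entirely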
Here are four ways you can use Minikube:
- Fast route to experimentation and learning - for developers and IT leaders
- Evaluate important Kubernetes features
- Play with Kubernetes’ extensibility in a sandbox
- Do a Kubernetes proof of concept project
For more, see Minikube: 5 ways IT teams can use it.
Kubernetes resources: Learn more
Want to learn more about Kubernetes trends, best practices, and security considerations? Move on to our deep dive for IT leaders: Kubernetes: Everything you need to know.
Also check out these eBooks, primers, and tutorials for even more learning on Kubernetes, and share with your team:
eBook: Getting Started with Kubernetes
eBook: O'Reilly: Kubernetes Operators: Automating the Container Orchestration Platform
eBook: O'Reilly: Kubernetes patterns for designing cloud-native apps
Kubernetes glossary cheat sheet: 10 key concepts in plain English
Containers primer: Learn the lingo of Linux containers
Tutorial: Kubernetes by Example – from Red Hat OpenShift. Step-by-step walk-throughs of Kubernetes concepts and capabilities.
Free class: Deploying Containerized Applications Technical Overview - Red Hat
Editor's note: This article was originally published in October 2017 and has been updated.
[ Learn the non-negotiable skills, technologies, and processes CIOs are leaning on to build resilience and agility in this HBR Analytic Services report: Pillars of resilient digital transformation: How CIOs are driving organizational agility. ]