Managing Kubernetes resources: 5 things to remember

Kubernetes automates much of the work of managing containers at scale. But containerized applications commonly share pooled resources, so you need to allocate and manage them properly.

Container orchestration offers a powerful promise to IT teams: It automatically takes care of a lot of the work required to manage containers at scale.

Consider the self-described power of Kubernetes from the project’s official site: “Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.”

That’s big – especially when you’re talking about teams running many containerized apps. While the automation intrinsic to orchestrators like Kubernetes is widely seen as a must for running containers in production, this doesn’t mean IT pros just get to prop their feet up and relax. There’s still plenty to be done, including setting and optimizing how you manage resources for the applications running on your clusters.

[ Kubernetes 101: An introduction to containers, Kubernetes, and OpenShift: Watch the on-demand Kubernetes 101 webinar. ]

Why you need to manage Kubernetes resources

“Kubernetes provides options to manage all key resources – [such as] compute, memory, and storage – for pods that are scheduled to run on it,” says Raghu Kishore Vempati, director of technology, research, and innovation at Altran. “Clusters don’t have indefinite resources, and in general, it wouldn’t be a common scenario that each application/solution has its own Kubernetes cluster unless they are very special and are supposed to run that way.”

[ New to Kubernetes? Read Kubernetes architecture for beginners. ]

That means applications are commonly sharing pooled resources. The “right” way to allocate these resources will vary across organizations and applications, but the good news is that Kubernetes includes a lot of features for resource management.

“Kubernetes can be deployed anywhere, but this means that the infrastructure required will be unique to your environment,” says Tom Manville, engineering lead at Kasten. “It is important to understand the resources required in your environment and allocate them appropriately. As your cluster scales, so must your infrastructure. Thankfully, it is easy to expand the size of the cluster, and in many cases, this can happen automatically.”

Managing Kubernetes resources: 5 key things to know

Vempati notes that managing resources such as compute or storage in Kubernetes environments can broadly be broken into two categories: what Kubernetes provides at a system level, and what needs to be planned for at an application and architecture level. This post will primarily focus on the features that fall into the former category.

Let’s dig into five important things to know.

1. Use namespaces and resource quotas

As Vempati notes, teams commonly run multiple applications on the same cluster. In that scenario, Vempati and other experts typically recommend namespaces as a best practice for isolating workloads in multi-tenant environments. Namespaces are likewise recommended when multiple teams or many users access the same cluster (which often goes hand-in-hand with running multiple applications), and they are also part of Kubernetes security.

When you use namespaces in these multi-application, multi-user environments, Vempati also advises using Kubernetes’ native resource quotas feature to ensure things are properly allocated among the applications and teams assigned to those namespaces.
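A namespace itself is just a lightweight API object. Here’s a minimal sketch; the name team-a is purely illustrative and reused in the examples that follow:

```yaml
# Illustrative only: a namespace to group one team's workloads.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a        # placeholder name
  labels:
    team: team-a
```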

"Resource quotas allow cluster admins to control overall resource consumption per namespace,” Vempati says. “This can include the total number of objects that can be created per namespace – compute, memory, and storage. For example, we can set CPU limits or memory limits across all pods in a non-terminal state, not to exceed a certain value.”

Manville from Kasten notes that CPU and memory are the resources administrators most commonly constrain with resource quotas, but you can also limit the number of pods in a namespace (we’ll get back to storage in a moment). Suffice it to say, resource quotas are a critical management tool.
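As a hedged illustration of what that looks like in practice, here is a minimal ResourceQuota sketch; the namespace name and all of the values are assumptions you’d tune for your own cluster:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota       # illustrative name
  namespace: team-a        # applies to everything in this namespace
spec:
  hard:
    requests.cpu: "4"      # total CPU requested by all pods
    requests.memory: 8Gi   # total memory requested by all pods
    limits.cpu: "8"        # total CPU limits across all pods
    limits.memory: 16Gi    # total memory limits across all pods
    pods: "20"             # cap on the number of pods in the namespace
```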

“Quota-based management allows the cluster admins to effectively manage the overall resources, distributing them to all applications in the most appropriate way,” Vempati says.

[ Read also: OpenShift and Kubernetes: What’s the difference? ]

2. Use limit ranges

Resource quotas are for managing resource consumption at the namespace level. In other words, they apply to the entire namespace once set. Limit ranges serve a similar purpose but constrain resource consumption at the individual pod or container level.

“Kubernetes provides powerful primitives for administrators to control the resources consumed by applications,” Manville says. “Administrators can configure limit ranges to place bounds on the developer’s resource requests and create resource quotas to limit the total number of resources consumed in a namespace.”

Another way of thinking about the difference between these related capabilities: Namespaces and resource quotas can be used to ensure an application or team doesn’t hog more than its necessary share of the cluster’s overall pooled resources. A limit range can help prevent scenarios where a single pod or container eats up the lion’s share of those resources allocated to a particular namespace.
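A minimal LimitRange sketch, with placeholder values, might look like this; the defaults apply to containers that don’t declare their own requests and limits, and max caps what any single container can ask for:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits   # illustrative name
  namespace: team-a
spec:
  limits:
  - type: Container
    defaultRequest:        # applied when a container declares no request
      cpu: 250m
      memory: 256Mi
    default:               # applied when a container declares no limit
      cpu: 500m
      memory: 512Mi
    max:                   # no single container may exceed these
      cpu: "2"
      memory: 2Gi
```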

3. Set network policies

Kubernetes also has native features around networking, including how pods communicate (or are prevented from communicating) with one another. Vempati advises making use of Network Policies.

“For networking, Kubernetes allows an option to set Network Policies that can help specify how the various pods scheduled on the cluster can communicate with each other and other endpoints on the same cluster,” Vempati says. “For example, we could set rules for ingress and egress traffic for the pods.”

This is also part of a holistic, layered approach to container security.
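As a sketch of what such a rule can look like, the policy below allows ingress to pods labeled app: api only from pods labeled app: frontend on TCP port 8080. The labels, port, and names are assumptions, and enforcement requires a network plugin (CNI) that supports NetworkPolicy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api   # illustrative name
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: api                  # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend         # only these pods may connect
    ports:
    - protocol: TCP
      port: 8080
```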

4. Don't forget about storage when applicable

Storage deserves its own deeper dive. Just as administrators can exercise fine-grained control over CPU and memory, they can do the same with storage.

“Kubernetes helps set limits at a very granular level,” Vempati says. “So, in a given namespace, you can set a limit to the number of PersistentVolumeClaims that can exist in the namespace.”
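A hedged sketch of a storage-focused ResourceQuota, with placeholder values:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-storage-quota      # illustrative name
  namespace: team-a
spec:
  hard:
    persistentvolumeclaims: "5"   # at most five PVCs in this namespace
    requests.storage: 100Gi       # total storage that can be claimed
```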

Gordon Haff, technology evangelist at Red Hat, describes Persistent Volumes (PVs) as one of the most important Kubernetes resources to effectively manage.

“The ability to configure storage that isn’t tied to ephemeral container images was one of the key Kubernetes innovations that differentiated it from some earlier container-based platform-as-a-service (PaaS) offerings,” Haff says. “Without persistent storage, your application architectures are limited to specific styles, such as twelve-factor apps that aren’t suitable for many enterprise workloads.”

Haff notes that PVs in Kubernetes are implemented as plugins that have a lifecycle independent of any individual Pod that uses the PV. A PersistentVolumeClaim is essentially a request for storage by a user and, Haff adds, it further abstracts the details of the PV.

“Kubernetes is designed this way because it is common for users to need PVs with varying properties, such as performance, for addressing different problems and workloads,” Haff explains. “Different types of PVs also have different types of access modes available.”
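A minimal PersistentVolumeClaim sketch; the size, access mode, and storage class name are assumptions that depend on your cluster and workload:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data               # illustrative name
  namespace: team-a
spec:
  accessModes:
  - ReadWriteOnce              # one node may mount the volume read-write
  resources:
    requests:
      storage: 20Gi            # placeholder size
  storageClassName: standard   # assumption: a class your cluster actually provides
```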

So this bleeds into that second category of resource management that Vempati described above: Resource planning at the application and architecture level. Haff points out that storage is something you need to evaluate at the outset.

“An important part of managing Kubernetes resources is planning up front whether you need persistent storage in the first place, such as for a database, and if you do, which volume plugin is most appropriate for your workload and your compute environment,” Haff says.

Speaking of databases, stateful applications introduce some additional nuances to resource management, says Jonathan Katz, VP of platform engineering at Crunchy Data. “For example, memory is an important tunable resource for PostgreSQL,” Katz says. Crunchy Data is behind the Crunchy PostgreSQL for Kubernetes operator.

[ Related read: How to explain Kubernetes Operators in plain English. ]

“It’s important to set your memory request to the actual value of memory that you want to use over time, as opposed to setting a lower request and a higher limit,” Katz explains. “This will ensure that your PostgreSQL instance is scheduled to a node that has ample resources as more data is cached in memory and limits the risk of events such as eviction or a visit by the OOM killer."
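In pod terms, that advice amounts to setting the memory request equal to the memory limit. A minimal, hedged sketch follows; in practice an operator such as Crunchy’s would manage this spec for you, and the image and values are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres-example       # illustrative only
spec:
  containers:
  - name: postgres
    image: postgres:16         # placeholder image
    resources:
      requests:
        memory: 4Gi            # request what you expect to use over time...
        cpu: "1"
      limits:
        memory: 4Gi            # ...and keep the memory limit equal to the request
        cpu: "2"
```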

5. Keep things tidy: API objects and monitoring

Manville from Kasten recommends that you make sure you’re doing your housekeeping – better yet, automate it. Otherwise, you might see performance and/or resource utilization degrade over time. The latter, in particular, might mean wasted cloud spending.

“If any of your applications use Kubernetes APIs directly, be sure to automate the cleanup of API objects,” Manville says. “A large number of unused API objects can slow down the cluster’s performance. Worse, these objects can sometimes map to idle infrastructure, which may add an unnecessary cost.”
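One built-in example of that kind of automated cleanup: Kubernetes Jobs accept a ttlSecondsAfterFinished field, which deletes finished Job objects after a set delay. The sketch below is illustrative, and the image is hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: nightly-report             # illustrative name
  namespace: team-a
spec:
  ttlSecondsAfterFinished: 3600    # delete this Job object an hour after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: report
        image: registry.example.com/report-job:latest   # hypothetical image
```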

In general, effective resource management requires that teams and administrators can see what’s going on. So don’t forget monitoring, and make sure you’re keeping tabs on how resources are being used across your cluster(s).

“This will help you better understand the resources being used by your cluster, alert you when a cluster is close to capacity, and in some cases, open the door to let Kubernetes automatically scale up or down,” Manville says.
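That last point is where the HorizontalPodAutoscaler comes in. A minimal sketch, assuming a Deployment named api and a cluster with a metrics pipeline (such as metrics-server) installed; the thresholds and names are placeholders:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa                # illustrative name
  namespace: team-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api                  # hypothetical workload to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out when average CPU usage exceeds 70%
```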

[ Read also: 5 open source projects that make Kubernetes even better. Get the eBook O’Reilly: Kubernetes Operators: Automating the Container Orchestration Platform. ]

Kevin Casey writes about technology and business for a variety of publications. He won an Azbee Award, given by the American Society of Business Publication Editors, for his InformationWeek.com story, "Are You Too Old For IT?" He's a former community choice honoree in the Small Business Influencer Awards.