Kubernetes is like pizza. There is a virtually endless variety of ways to configure it, and the approach you take determines whether you end up with an optimal hosting environment or the software equivalent of kiwi-and-mozzarella pie.
I'm not here to tell you what to put on your pizza. What I would like to do, however, is explain five of the most important—and most easily overlooked—best practices for configuring a Kubernetes environment.
Before diving in, let me emphasize that best practices for Kubernetes deployment will vary a bit depending on the specific requirements of a given set of workloads. That said, the strategies described below will help you build Kubernetes clusters that are as secure and manageable as possible from the start for a majority of workloads. In general, these are the practices most teams should be following when deploying Kubernetes.
1. Use an Integrated Secrets Vault
A digital secrets vault is a special application or service that stores secrets: passwords, access keys, and other sensitive information required to access restricted resources. The secrets are stored securely in the vault and exposed on an as-needed basis to applications and services.
A secrets vault isn't the only way to manage secrets in Kubernetes. You can also define secrets directly within Kubernetes deployment configurations or (worse, because it's even less secure) hard-code them within applications themselves. But these are poor practices because anyone who can view your deployment files will see highly sensitive data. A second issue, beyond security, is that secrets defined directly within Kubernetes aren't easily accessible to non-Kubernetes workloads.
To mitigate these challenges, make sure your Kubernetes environment is integrated with a secrets vault at deployment time. Secrets vaults provide the most secure way of managing sensitive information. They also make it easy to share that information on an as-needed basis between multiple resources—including but not limited to workloads running in Kubernetes.
The public-cloud providers all offer secrets-management services (such as AWS Secrets Manager) that integrate with their hosted Kubernetes services (such as EKS). Or you can opt for a third-party solution, such as Conjur or Vault, that integrates relatively easily with any Kubernetes distribution.
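To make this concrete, here's a minimal sketch of one common integration pattern: the Secrets Store CSI Driver, which mounts vault-managed secrets into pods as read-only files. The provider choice, secret path, and resource names below (db-creds, prod/db-password, my-app) are illustrative placeholders, not a definitive setup.

```yaml
# A minimal sketch using the Secrets Store CSI Driver with the AWS provider.
# The SecretProviderClass tells the driver which vault entries to fetch;
# the pod then mounts them as files instead of embedding them in its spec.
# All names and paths here are hypothetical.
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: db-creds
spec:
  provider: aws                            # swap in vault, azure, or gcp as needed
  parameters:
    objects: |
      - objectName: "prod/db-password"     # hypothetical secret path
        objectType: "secretsmanager"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:1.0                    # placeholder image
      volumeMounts:
        - name: secrets
          mountPath: /mnt/secrets
          readOnly: true
  volumes:
    - name: secrets
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: db-creds
```

The application reads its credentials from the mounted files at runtime, so nothing sensitive ever appears in the deployment file itself.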
2. Define Access Controls Using IAM
If you run Kubernetes using a managed cloud service, you can take advantage of your cloud provider's Identity and Access Management (IAM) tooling to define policies that control which external cloud resources your Kubernetes workloads can access.
Using cloud IAM for this purpose is beneficial for two reasons. First, it reduces complexity: it's easier to write and manage centralized IAM policies than to configure access controls individually for each Kubernetes workload. Second, it improves security: centrally defined IAM policies minimize the risk of access-control oversights that malicious actors could exploit to steal sensitive data.
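On EKS, for instance, the usual pattern is IAM Roles for Service Accounts (IRSA): you annotate a Kubernetes service account with an IAM role, and every pod that uses that service account inherits the role's centrally managed policies. A minimal sketch, with a placeholder role ARN and names:

```yaml
# Sketch of EKS's IAM Roles for Service Accounts (IRSA) pattern.
# The annotation binds the service account to an IAM role; pods that
# use the service account get the role's permissions without any
# credentials appearing in the pod spec. The ARN and names are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-app-role
---
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app      # the pod assumes the role above
  containers:
    - name: app
      image: my-app:1.0           # placeholder image
```

GKE and AKS offer analogous mechanisms (Workload Identity and Microsoft Entra Workload ID, respectively), so the same centralized approach works across the major providers.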
3. Keep Configuration Data Inside Kubernetes Deployments
Kubernetes workloads are defined using deployment files; these files tell Kubernetes how to run each container (or set of containers) that makes up an application. Most variables that affect application behavior, such as how many resources the application should have and which networking settings it needs, can be defined within the deployment file.
You could also choose to define these variables in other places, such as within the container images you build for your application. However, centralizing configuration data within Kubernetes deployments is a smarter choice because it allows you to keep your configurations consistent. In turn, you get predictable application performance and a lower risk that configuration errors or oversights will lead to undesirable application behavior.
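For example, here's a minimal sketch of a deployment that keeps its tunables (replica count, an app setting, a port) in the manifest rather than baked into the image; all names and values are placeholders:

```yaml
# Sketch of keeping configuration in the deployment file.
# The image stays generic; behavior is controlled from the manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                    # scaling behavior lives here, not in the image
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: my-app:1.0      # placeholder, environment-agnostic image
          env:
            - name: LOG_LEVEL    # hypothetical app setting
              value: "info"
          ports:
            - containerPort: 8080
```

Because the manifest is the single source of truth, changing a setting means editing one version-controlled file rather than rebuilding and redistributing images.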
4. Configure Integrated Logging
A single Kubernetes cluster could host dozens of applications running in hundreds or thousands of individual containers. Attempting to collect logs from each container individually is a nightmare. Log files disappear permanently when containers shut down, which means you need to collect logs in real time to avoid losing log data. In addition, the more log data you move around manually, the higher the risk that logs containing sensitive information, such as access keys, will fall into malicious hands.
That's why you should configure an integrated logging solution when deploying Kubernetes. You can use your cloud provider's logging service for this purpose, or you can set up an external tool. Either way, your goal should be to ensure that all log data from all containers within your cluster is automatically aggregated and made available for analysis with as little exposure to security risks as possible.
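One common pattern, sketched below under the assumption that you're running a node-level agent such as Fluent Bit, is a DaemonSet that tails container logs from each node's filesystem and forwards them to your aggregation backend. The names and image tag are illustrative, and a real deployment would also mount the agent's forwarding configuration, which is omitted here.

```yaml
# Rough sketch of node-level log collection: a DaemonSet runs one agent
# per node and reads the container logs the kubelet writes to /var/log.
# Names and the image tag are assumptions; pin versions you've validated.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:2.2     # assumed tag
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log                 # where container logs live on the node
```

Because the agent runs on every node, logs are captured continuously, even from containers that crash and disappear before anyone can inspect them.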
5. Define Resource Minimums—but Not Maximums
Kubernetes provides the option (although it's not a requirement) of defining resource minimums, known as requests, for workloads. A resource minimum is the minimum amount of CPU and/or memory that a given workload should have available when Kubernetes schedules it.
It's a best practice to define resource minimums for every workload because doing so ensures that your workloads are never starved of resources. If sufficient resources aren't available, Kubernetes will simply leave the workload unscheduled, which isn't ideal, but is better than having the workload deploy and then fail due to lack of resources.
As long as you're aware of workloads that remain unscheduled because the minimum resources aren't available, you can scale up your cluster size or resource allocations to accommodate the requirements. Or, even better, you can set up autoscaling ahead of time so that your cluster automatically scales up to meet workload resource requirements.
In case you're wondering whether you should also specify resource maximums, known as limits, the answer is generally no. Kubernetes does allow you to define the maximum resources that each workload is allowed to consume, and these maximums can theoretically prevent one workload from gobbling up so many resources that other workloads can't run. As long as you've set healthy minimums for each workload, however, you shouldn't have to worry about this "noisy neighbor" problem: a workload that is properly provisioned at deployment time won't fail because of other workloads' resource consumption.
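In a pod spec, minimums live under resources.requests. Here's a minimal sketch with illustrative values, and deliberately no limits block, per the advice above:

```yaml
# Sketch of a container spec with resource minimums (requests) defined
# and no maximums (limits). Names and values are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: my-app:1.0        # placeholder image
      resources:
        requests:
          cpu: "250m"          # a quarter of a CPU core, reserved at scheduling time
          memory: "256Mi"
        # no 'limits' block: the workload can burst if spare capacity exists
```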
Conclusion
All of this said, every workload is unique. Kubernetes deployment best practices will vary depending on exactly what you're running.
In general, however, Kubernetes environments that take advantage of secrets vaults, IAM frameworks, centralized configuration management, integrated logging tools, and minimum resource settings from the start will be more secure and easier to manage than those in which these settings are an afterthought. You can make your life as a Kubernetes admin easier, and your users' workloads more secure, by factoring these practices into your Kubernetes deployment strategy.