Kubernetes is easily the most transformational cloud technology available today. It is the de facto standard for container orchestration, essentially functioning as an operating system for cloud-native applications.
With built-in high availability, granular and elastic scalability, portability, and rolling upgrades, Kubernetes provides many of the features that are critical for running cloud-native applications on a truly composable, interoperable infrastructure.
Enterprises of all sizes are trying to take advantage of Kubernetes both for developing greenfield applications and for re-architecting and modernizing legacy applications so that they're Kubernetes-based.
But there is a process involved with doing so. Cloud-native applications need to be architected, designed, developed, packaged, delivered, and managed based on a deep understanding of cloud-computing frameworks. The application itself needs to be designed for scalability, resiliency, and incremental, agile enhancements from the get-go.
Cloud-native applications fundamentally change how you provision and deploy your infrastructure, and how you manage it. This is particularly true when making the jump from traditional virtual machines to containers and Kubernetes, and when you have more loosely coupled applications, such as microservices and serverless functions.
Here are six best practices and six anti-patterns to consider when your organization is investigating how to modernize its applications and invest in container-based development and infrastructure.
Containerization best practices
1. Start by targeting the right kind of application
Your first step should be to select a candidate for containerization. You need an app that is backed by a clear business need for modernization. Beyond that, indicators of a good fit for containerization include:
- The app is large or web-scale.
- It has inherent statelessness in the architecture (the scaling sketch after this list shows why that matters).
- Its business needs include a superior user experience and high frequency of releases and updates.
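To make the scalability and statelessness points concrete, here is a minimal sketch using the official Kubernetes Python client: because a stateless service keeps no local session data, any replica can serve any request, so scaling out is just a matter of patching the replica count. The Deployment name, namespace, and replica figure are illustrative assumptions, not details from this article.

```python
# Minimal sketch: horizontally scaling a stateless service.
# Assumes the official `kubernetes` Python client and a reachable cluster;
# the Deployment name, namespace, and replica count are hypothetical.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running in a pod
apps = client.AppsV1Api()

# Because the app is stateless, any replica can handle any request,
# so scaling out is a one-line patch of the replica count.
apps.patch_namespaced_deployment_scale(
    name="web-frontend",    # hypothetical Deployment
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```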
2. Consider team and human capabilities
Invest in DevOps-style team capabilities to build out an agile development model, and invest in site reliability engineers (SREs) to handle production deployments and keep operations running smoothly. Neglecting either is a sure recipe for failure.
3. Adopt 'pure' open-source Kubernetes
Once you have determined the use case and the team capabilities, stick with a stable open-source release no matter which cloud you choose for production-grade containerized workloads. Better yet, go with the industry standard for container orchestration: Kubernetes.
A cloud provider's own Kubernetes distribution, or any other orchestration system with custom integration points and features unique to its ecosystem, is bound to get you stuck in the long run. Use plain-vanilla, pure Kubernetes to stay cloud-neutral and to ensure that you can easily benefit from the developments and enhancements produced by this vibrant, mature community.
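One practical payoff of staying with plain upstream Kubernetes is that the same code and manifests work against any conformant cluster. As a hedged illustration, the sketch below uses the official Python client to point the same logic at two clusters simply by switching kubeconfig contexts; the context names are hypothetical.

```python
# Sketch: the same client code runs unchanged against any conformant cluster.
# The kubeconfig context names ("aws-prod", "on-prem-lab") are hypothetical.
from kubernetes import client, config

for ctx in ("aws-prod", "on-prem-lab"):
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=ctx))
    nodes = api.list_node()
    print(f"{ctx}: {len(nodes.items)} nodes")
```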
4. Work out the enterprise integrations
These include single sign-on (SSO), authentication and authorization, monitoring, security, access control lists (ACLs), governance, risk, and compliance (GRC), and more.
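As one concrete example of the authorization piece, the sketch below grants a read-only role to a corporate group through the Kubernetes RBAC API, using the official Python client. The namespace, role, and group names are assumptions for illustration; in practice the group would be asserted by your SSO or identity provider integration.

```python
# Sketch: wiring an SSO-provided group into Kubernetes authorization (RBAC).
# Assumes the official `kubernetes` Python client; the namespace, role name,
# and group name are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()
namespace = "payments"   # hypothetical namespace

# A namespaced role that can only read common workload resources.
rbac.create_namespaced_role(namespace, body={
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "read-only"},
    "rules": [{
        "apiGroups": [""],
        "resources": ["pods", "services", "configmaps"],
        "verbs": ["get", "list", "watch"],
    }],
})

# Bind the role to a group asserted by the identity provider (for example, via OIDC SSO).
rbac.create_namespaced_role_binding(namespace, body={
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "read-only-for-auditors"},
    "subjects": [{
        "kind": "Group",
        "name": "auditors",   # group name from your SSO provider
        "apiGroup": "rbac.authorization.k8s.io",
    }],
    "roleRef": {
        "kind": "Role",
        "name": "read-only",
        "apiGroup": "rbac.authorization.k8s.io",
    },
})
```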
5. Stay conversant with Kubernetes changes
The Kubernetes project adds and updates features from one release to the next, which produces a high velocity of change. That's why it's important to upgrade to stable releases and to work with a vendor that can help you keep your Kubernetes infrastructure current, with the least possible disruption to your service and management overhead.
You may not have the internal resources to ensure that Kubernetes installation, troubleshooting, deployment management, upgrades, monitoring, and ongoing operations don't cause significant business disruption and cost increases, especially the need for more skilled personnel.
If that's the case, consider opting for a SaaS-based managed Kubernetes service. This is particularly helpful if you are struggling to keep up with the rate of change in the community and best practices for managing Kubernetes in production.
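Whether you run the clusters yourself or lean on a managed service, a useful habit is knowing exactly which version each cluster is on so you can compare it against the upstream support window before planning upgrades. Here is a small, hedged illustration using the official Python client; the kubeconfig context names are placeholders.

```python
# Sketch: reporting each cluster's Kubernetes server version as a first
# input to upgrade planning. The context names are hypothetical.
from kubernetes import client, config

for ctx in ("prod", "staging"):
    api_client = config.new_client_from_config(context=ctx)
    info = client.VersionApi(api_client).get_code()
    print(f"{ctx}: {info.git_version} (major {info.major}, minor {info.minor})")
```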
6. Incrementally iterate
Once the first deployment succeeds, gradually extend the approach to other lines of business.
Have patience
Developing expertise in creating and running Kubernetes- or container-based applications is a journey. It doesn't happen in a day, and you need to be in it for the long haul. Over time you will become better versed in this technology and this new way of developing and running cloud-native apps.
The tips above will get you on the right track toward containerization. Now for the anti-patterns to avoid.
Containerization don'ts
These anti-patterns could hamper your Kubernetes container deployment project.
1. Choosing the wrong kind of application
If your application is not an ideal candidate for containerization, you will waste time on countless release cycles and valuable resources trying to migrate something that doesn't make sense — and you will fail.
2. Neglecting enterprise integrations
Enterprises that neglect the current vendor landscape and the integrations with Kubernetes from a network, storage, and security standpoint are setting themselves up for failure.
3. Neglecting to build a customized continuous integration/continuous delivery (CI/CD) pipeline architecture
If you don't build this, you'll end up with operational ramifications, such as slow, error-prone manual releases, that you will have to work out later.
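To make that less abstract, a delivery pipeline for containers typically ends in a step that rolls the newly built image out to the cluster. The sketch below shows one way that final step might look with the official Kubernetes Python client; the Deployment, namespace, container name, and image tag are illustrative assumptions, and in a real pipeline the image tag would be injected by the CI system.

```python
# Sketch: the final "deploy" stage of a CI/CD pipeline for a containerized app.
# Assumes the official `kubernetes` Python client; the names and image tag are
# hypothetical and would normally come from the CI build.
from kubernetes import client, config

def deploy(image_tag: str) -> None:
    config.load_kube_config()
    apps = client.AppsV1Api()
    # Patching the pod template's image triggers a rolling update,
    # so the new version goes out without planned downtime.
    apps.patch_namespaced_deployment(
        name="orders-api",          # hypothetical Deployment
        namespace="default",
        body={"spec": {"template": {"spec": {"containers": [{
            "name": "orders-api",
            "image": f"registry.example.com/orders-api:{image_tag}",
        }]}}}},
    )

deploy("1.4.2")
```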
4. Locking yourself into an IaaS provider’s Kubernetes or serverless computing services
Workloads run on different providers or clouds based on business and cost considerations. By relying on pure-play Kubernetes as the standard, you ensure their portability and interoperability across multiple clouds and even in on-premises environments.
IaaS vendor lock-in makes zero sense from both a business and technology perspective.
5. Not dealing with multi-cloud management
This is a challenge your IT admins need to deal with once you have containerized deployments in place. IT leadership needs to account for the economics of multi-cloud management and for the ROI in the business case for containerization.
6. Not making investments from both a developer and operations standpoint
Containers change the entire software lifecycle and how the related roles operate, and you need to understand that to be ready. Train developers and operations staff on everything from container registries, configuration, and continuous integration practices to securing and monitoring containers and microservices effectively.
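As a small, hedged example of the monitoring side of that training, the sketch below lists the pods in one namespace and pulls the last few log lines from any that are not running cleanly. It uses the official Kubernetes Python client; the namespace is a placeholder, and a production setup would feed this kind of signal into a proper monitoring stack rather than an ad hoc script.

```python
# Sketch: a basic health and log check for containers in one namespace.
# Assumes the official `kubernetes` Python client; the namespace is hypothetical.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
core = client.CoreV1Api()
namespace = "payments"   # hypothetical namespace

for pod in core.list_namespaced_pod(namespace).items:
    phase = pod.status.phase
    print(f"{pod.metadata.name}: {phase}")
    if phase not in ("Running", "Succeeded"):
        try:
            # Pull the tail of the logs for anything that looks unhealthy.
            print(core.read_namespaced_pod_log(
                name=pod.metadata.name, namespace=namespace, tail_lines=20))
        except ApiException as exc:
            # Containers that never started have no logs to read yet.
            print(f"  could not read logs: {exc.reason}")
```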
Containers and Kubernetes are your future
Since Kubernetes is still in a stage of relative enterprise infancy, now is the time to consider the above issues as you evaluate your path to containers. To take advantage of Kubernetes in your organization, it is important to ensure that your implementation is smooth, and that you can reap the benefits of this revolution in software delivery.