With Growth of In-Production Kubernetes, Container Platform Providers Add New Capabilities

By Janae Stow Lee, Tuesday, September 27, 2022

Categories: Analyst Blogs

In-Production Kubernetes Use Expanding

Kubernetes-managed containers have escaped the confines of application development and pilot projects and are expanding rapidly into full production. EG’s recent container management survey found Kubernetes in production at over 50% of surveyed customers, with 60% of those customers running more than five workloads (applications) and 70% already running multiple clusters. Within a year, 55% of customers expect to be running six or more clusters, driving higher requirements for global observability and management tooling, and new challenges for IT operations executives.

Some customers sidestep the IT operations challenge by leveraging managed container services from providers such as AWS, GCP, Azure, and VMware. But for customers seeking more flexibility, container management platform providers are responding to this race to production with a range of new capabilities that support self-management at production scale. Recent announcements by Red Hat and D2IQ offer some key examples.

Vendors Address the Scale Issue

To address rapidly scaling workloads, Red Hat has made autoscaling of application workloads easier and more flexible in OpenShift 4.11. Customers can now define custom metrics for scaling, leveraging an automated OpenShift Operator to scale their application workloads without human intervention.
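To make the idea concrete, custom-metrics autoscaling in Kubernetes is typically expressed declaratively. The sketch below uses a KEDA-style ScaledObject, the open-source mechanism underlying OpenShift's custom metrics autoscaler; the deployment name, namespace, Prometheus endpoint, and metric query are all illustrative assumptions, not taken from Red Hat's announcement.

```yaml
# Illustrative sketch only: a KEDA-style ScaledObject driving autoscaling
# from a custom Prometheus metric. All names and values are hypothetical.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: checkout-scaler
  namespace: shop
spec:
  scaleTargetRef:
    name: checkout              # Deployment to scale
  minReplicaCount: 2
  maxReplicaCount: 20
  triggers:
    - type: prometheus          # scale on a custom application metric
      metadata:
        serverAddress: https://prometheus.example.internal:9090
        query: sum(rate(http_requests_total{service="checkout"}[2m]))
        threshold: "100"        # target roughly 100 req/s per replica
```

Once applied, the autoscaler operator watches the metric and adjusts replica counts on its own, which is the "without human intervention" behavior described above.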

D2IQ’s enhancements focus on scaling out clusters. DKP now includes what D2IQ calls “federated application management” for fleets of clusters. Application lifecycle management – the ability to define the desired state of an application and then manage updates, revisions, and the like over time – is a necessity for container management, and the ability to manage application updates and configuration across a multi-cluster environment (including large fleets) has become a must-have for any customer intending to scale out. DKP’s latest enhancement takes this capability further: customers can define a desired state for a configuration of multiple applications running together, and DKP then performs application lifecycle management as a single integrated activity, eliminating the need to define and manage the desired state of each application individually. Customers using DKP 3.4 can define and execute application lifecycle management across multi-cluster (and multi-cloud) environments. This is a particularly valuable capability for customers deploying large numbers of clusters with similar container contents (e.g., an edge-based application), and/or where rolling updates must manage certain clusters as a group.
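A declarative sketch helps illustrate what "one desired state for a group of applications" means in practice. The manifest below is hypothetical – DKP's actual resource types and fields may differ – but it captures the pattern: declare a set of applications and their versions once, select target clusters by label, and let the platform reconcile the entire fleet toward that state.

```yaml
# Hypothetical manifest illustrating fleet-wide desired state.
# DKP's real API and schema are not shown here and may differ.
apiVersion: example.d2iq.io/v1alpha1
kind: FederatedAppGroup
metadata:
  name: edge-stack
spec:
  placement:
    clusterSelector:
      matchLabels:
        tier: edge              # target every cluster labeled as edge
  applications:                 # managed together as one lifecycle unit
    - name: ingress-nginx
      version: 4.7.1
    - name: metrics-collector
      version: 2.3.0
    - name: store-frontend
      version: 1.12.4
```

Bumping a version in this one manifest would roll the update out to every matching cluster, rather than requiring the administrator to update each application on each cluster individually.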

The company also announced a significant new capability in global observability. As customer production environments scale in both volume and complexity, administrators need automation to help maintain availability and performance. DKP Insights, a predictive analytics tool that detects anomalies in Kubernetes clusters and workload configurations, has historically provided global observability by running an analytics engine on each Kubernetes cluster and aggregating the results for reporting, alerting, and troubleshooting on the DKP management console. With this latest announcement, Insights also automatically checks workload configurations against preset definitions of best practices and suggests improvements to the administrator, who can then drill down on an anomaly to get additional information, a root cause analysis, and suggested solutions.

D2IQ product executives noted that with this management-and-engine model, the “suggested improvements” could eventually be deployed automatically by the management system and applied by the per-cluster engines, in a continuous AIOps model. However, since many customers are not ready to adopt a fully automated operating model, a logical interim step would be to write and package the YAML needed to deploy each suggestion, enabling a quick review-and-approve, “push button” deployment that reduces DevOps toil.
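As a hypothetical example of what such a packaged suggestion might look like (not taken from D2IQ's product): if a best-practice check flagged a Deployment running without resource requests and limits – a common Kubernetes misconfiguration – the tool could hand the administrator a ready-to-apply strategic-merge patch like the one below, with the workload and values illustrative.

```yaml
# Hypothetical packaged suggestion: a strategic-merge patch adding the
# resource requests/limits a best-practice check found missing.
# After review, an admin could apply it with, e.g.:
#   kubectl patch deployment checkout -n shop --patch-file suggestion.yaml
spec:
  template:
    spec:
      containers:
        - name: checkout
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
```

Packaging the remediation this way keeps a human in the approval loop while removing the toil of authoring the fix by hand, which is exactly the interim step described above.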

Enterprise customers who see multi-cluster in their future will want to keep an eye on these efforts to see what develops. Improvements in global observability and management – including automated operations – will be required to support the massive Kubernetes environments of the future.
