As organizations undergo IT transformations and shift to hybrid clouds, most new applications are developed with containers. These apps are now commonly used in production. According to a 2020 survey conducted by the Cloud Native Computing Foundation of 1,324 members of the cloud-native community (mostly developers), 92% said they use containers in production. Most (55%) said they use stateful applications in production, with another 12% evaluating stateful apps and 11% planning on using them over the next year.
This finding would be expected coming from the Cloud Native Computing Foundation, but corroborating the trend, Evaluator Group research shows over 60% of IT infrastructure clients in large enterprises are considering or using containers.
“Dev areas are pushing containers. Their long-term strategy [is for] IT to partner with Development and support containers. How do we manage that? We’ll need the whole gamut from provisioning to monitoring, etc.”
The proliferation of containers creates storage challenges. Containers are ephemeral, not persistent, yet they need persistent storage to be useful in production. Moreover, most of today’s storage systems were designed for VM-centric applications; whereas VMs virtualize the hardware stack, containers virtualize the operating system.
Container-Native Storage (CNS), also known as cloud-native storage, is a way to deliver persistent storage for containers. It is not the only way to support persistent storage, but CNS is the only type of storage designed from the ground up to run in containers.
To fully understand CNS products and how they work, we first must grasp the underlying technology behind containers and the Kubernetes container orchestration platform.
Containers are open source and do not require a hypervisor. They do, however, require an orchestration platform to manage the large number of containers that can be created during app development, and open-source Kubernetes has become the industry standard for this. Kubernetes supports persistent storage, and CNS runs on a Kubernetes cluster through an embedded driver or its own Container Storage Interface (CSI) driver. Kubernetes runs workloads by placing containers into pods that run on virtual or physical machines called nodes. The containers in a Kubernetes pod share storage and network resources, along with a specification for how to run them.
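To make the persistent-storage piece concrete, the sketch below builds a Kubernetes PersistentVolumeClaim manifest as a plain Python dictionary. The claim is how a pod asks Kubernetes for persistent storage; the CSI driver behind the named StorageClass provisions the actual volume. All names here (`app-data`, `cns-block`, the `make_pvc` helper) are illustrative assumptions, not part of any specific CNS product.

```python
import json


def make_pvc(name: str, storage_class: str, size: str) -> dict:
    """Build a Kubernetes PersistentVolumeClaim manifest as a dict.

    The claim requests persistent storage from the cluster; the CSI
    driver backing `storage_class` provisions the underlying volume.
    """
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],  # mountable read-write by one node
            "storageClassName": storage_class,
            "resources": {"requests": {"storage": size}},
        },
    }


# Hypothetical claim: 10 GiB from a CNS-backed storage class.
pvc = make_pvc("app-data", "cns-block", "10Gi")
print(json.dumps(pvc, indent=2))
```

Serialized to JSON (or YAML), this manifest could be submitted to the Kubernetes API server, which records the claim until a matching volume is bound.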
Kubernetes clusters include Worker Nodes for running workloads and a Control Plane made up of Master Nodes that manage the worker nodes. Worker Nodes run the application software and the operating system. A kubelet is an agent running on each worker node that communicates with the API server in the control plane; the kubelet also mounts each pod’s storage volumes, while networking components on the node provide network connectivity.
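Continuing the sketch above, a pod consumes that persistent storage by referencing the claim in its spec; the kubelet on whichever worker node the pod is scheduled to mounts the volume before starting the container. The names (`make_pod`, `web`, `app-data`) are again illustrative assumptions.

```python
def make_pod(name: str, image: str, claim_name: str, mount_path: str) -> dict:
    """Build a minimal Pod manifest whose container mounts a
    PersistentVolumeClaim. On the scheduled worker node, the kubelet
    mounts the claimed volume into the container at `mount_path`.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                # Mount the shared volume inside the container.
                "volumeMounts": [{"name": "data", "mountPath": mount_path}],
            }],
            # Reference the previously created PersistentVolumeClaim.
            "volumes": [{
                "name": "data",
                "persistentVolumeClaim": {"claimName": claim_name},
            }],
        },
    }


pod = make_pod("web", "nginx:1.25", "app-data", "/var/lib/data")
print(pod["kind"])
```

Because the storage lives in the claim rather than in the pod, the pod can be destroyed and rescheduled on another node while its data persists.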