How to Modernize Storage for Cloud Native Production Apps on Kubernetes

April 21, 2022

Every modern enterprise is on its way to the cloud, where cloud-native apps benefit from computing elasticity and administrative efficiencies that monolithic systems built on traditional architectures cannot match. Modern cloud methods remove old application workflow limits and the reliance on proprietary or specialized computing, storage, and networking hardware, sharply reducing total cost of ownership while accelerating core IT activities on a seamlessly scalable platform.

The cloud-native value proposition is inextricably linked to software-defined infrastructure. Huge increases in agility and cost-efficiency may be realized by using commodity servers and open-source orchestration systems. The software-defined method enables the most rapid, flexible, and cost-effective deployment and configuration of computational resources in cloud architecture.

The transition to cloud-native environments goes hand in hand with the move to container-based settings, where Kubernetes has effectively emerged as the industry standard for flexibly managing containers at scale. Modern data services such as MySQL, Apache Kafka, and others are frequently deployed as pods, providing finely tailored microservices that can be scaled up or down dynamically, with high availability, based on workload.

Persistent Storage For Stateful Applications

Data generated by pods is ephemeral by definition; if a worker node dies, Kubernetes can restart the pod elsewhere to continue operations. When Kubernetes was first deployed for stateless applications, the storage layer was also assumed to be ephemeral. Applications did not need to persist any data and could be shut down gracefully or abruptly without difficulty. It did not matter that the underlying storage was merely temporary; the workload was simply rescheduled onto a new pod.

In today’s world, many systems built from microservices and containers are no longer stateless. A server failure would be troublesome in the absence of durable storage: a database must resume operations on a different pod without interruption, picking up where it left off. Persistent storage is required.
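As an illustration, a stateful pod requests durable storage through a PersistentVolumeClaim; the claim outlives any individual pod, so a database restarted on another node reattaches to the same data. This is a minimal sketch — the claim name, storage class, and size below are hypothetical placeholders, not values from this article:

```yaml
# Hypothetical claim for a database's data directory.
# "fast-ssd" and the 100Gi size are illustrative placeholders.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
    - ReadWriteOnce        # one node mounts the volume read-write at a time
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 100Gi
```

A pod then references the claim by name in its volume spec; whichever node the pod lands on, Kubernetes attaches the same persistent volume.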

This is where the underlying cloud principle of “software-defined everything” is put to the test, necessitating careful thought for those devoted to realizing its full potential.

When you use containers, you divide an application into several smaller parts. In the spirit of Amdahl’s law, the slowest component of a system acts as a chokepoint for the whole; you don’t want every other component waiting on just one. To guarantee consistent application performance, you must have consistent storage response times, and most modern Kubernetes apps use flash as the storage medium.

But where should that flash live? Kubernetes supports persistent storage on locally attached flash drives, but this offers no protection against drive failures. It also fundamentally violates the principle of portability: software should not be tethered to particular hardware. Locally attached storage introduces a dependency that contradicts the basic value proposition of software-defined cloud infrastructure, namely dynamic resource provisioning.

This isn’t simply a philosophical distinction; it has real-world consequences for computing scalability and agility. Storage should follow the application wherever it runs and provide the fastest response time available.
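Kubernetes’ own local volume type makes this tethering explicit: a local PersistentVolume must carry a required node affinity binding it to one specific machine, so any pod using it is pinned to that node. A sketch, with hypothetical node, path, and class names:

```yaml
# A local PV is pinned to a single node via required nodeAffinity;
# the node name, device path, and storage class here are hypothetical.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-nvme-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-nvme
  local:
    path: /mnt/nvme0
  nodeAffinity:            # any pod using this PV must schedule onto this node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-03
```

The required nodeAffinity block is exactly the dependency the article describes: if worker-03 fails, the data and the pods bound to it cannot simply follow the workload elsewhere.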

Solving The Storage Dilemma

Several storage solutions can be deployed inside Kubernetes itself, although storage performance suffers because these solutions depend on replication. They also create a “noisy neighbour” problem by running storage within the Kubernetes cluster alongside the applications: storage services compete for CPU with application pods, so heavy I/O in one pod can degrade the applications in neighbouring pods. Worker nodes with reserved resources are not always available to applications, which increases resource-planning effort and complexity.

For these and other reasons, it is advisable to use a disaggregated cloud-native storage solution connected through a Container Storage Interface (CSI) plugin. To deploy flash storage at scale in these settings, however, you may be forced to use expensive, proprietary flash storage arrays, again going against the core principles of cloud-native, which hold that everything should be software-defined for optimum efficiency.

The objective is to achieve local-flash-calibre storage performance within Kubernetes by leveraging a disaggregated, dedicated storage system that is software-defined and fault-tolerant, and that supports essential features such as thin provisioning, snapshots, and clones, among others.
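With a CSI driver, those capabilities surface through ordinary Kubernetes objects: a StorageClass names the driver and its options, and snapshots use the standard VolumeSnapshot API. A sketch under stated assumptions — the provisioner name, the vendor parameter, and all object names below are placeholders for whatever disaggregated storage vendor’s CSI driver is in use:

```yaml
# StorageClass backed by a hypothetical disaggregated CSI driver.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: disaggregated-nvme
provisioner: csi.example-vendor.com   # placeholder CSI driver name
allowVolumeExpansion: true
parameters:
  thinProvisioning: "true"            # vendor-specific parameter (assumed)
---
# Snapshots of CSI-provisioned volumes use the standard snapshot API.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: app-data-snapshot
spec:
  volumeSnapshotClassName: example-snapclass   # placeholder
  source:
    persistentVolumeClaimName: app-data        # placeholder claim name
```

Because the driver sits behind these standard APIs, applications stay portable: they consume claims and snapshots the same way regardless of which storage system fulfils them.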

This may sound like a big task, but it is readily achievable today, and it is ultimately where cloud-native is taking storage layer design. The approach will become mainstream within a few years as older architectures become untenable or uneconomical to maintain.

Seamlessly Scalable Flash Storage

Advances in the standard NVMe/TCP protocol are bringing local-flash performance to Kubernetes container environments across the cloud by using the simple, efficient TCP/IP protocol over Ethernet. Thanks to its simplicity and ubiquity, iSCSI long ago displaced Fibre Channel SAN as the dominant protocol for sharing block storage in such environments. NVMe/TCP, like iSCSI, is simple to implement and can use the same networking, but with lower latency and higher IOPS. This standard protocol effectively extends the performance profile of locally attached flash devices into the cloud storage layer.

Microservices can be supported everywhere they are deployed, with no tradeoffs in application portability. The issue is determining what degree of performance is required.

Today, Kubernetes is at home on TCP/IP networks, as practically all web applications are TCP/IP based. Neither iSCSI nor NVMe/TCP requires special protocols, adapters, or switch settings; both use conventional Ethernet adapters, switching components, and techniques to deliver a lower-cost option with the network interface card of your choice. In the Ethernet connectivity hierarchy, NVMe/TCP supersedes iSCSI, though the two protocols can coexist on the same network, so the choice comes down to performance demands, with NVMe/TCP delivering near-local NVMe performance.

The key benefits of cloud-native apps operating on software-defined infrastructure employing commodity hardware with Kubernetes-enabled orchestration and unrestricted application mobility are maintained with this approach. This lays the groundwork for future ultra-efficient application management and scalability — and all the cost advantages that come with it.
