Today’s serverless platforms have a significant limitation that restricts their adoption by companies and conglomerates with strong interoperability and portability constraints. The serverless ecosystem is evolving by exploring standard, open packaging, runtimes, and event formats. Advances in these areas are blurring the line between cloud-native and serverless workloads, and are already pushing serverless offerings toward open, portable, and interoperable frameworks for implementation and testing.
There are numerous exciting elements in the Kubernetes architecture: the containers at its foundation, providing a common packaging, runtime, and resource-isolation model; the simple control-loop mechanism that monitors the actual state of components and reconciles it with the desired state; the custom resource definitions. But the true enabler for extending Kubernetes to support diverse workloads and evolving demands is the concept of the pod. A pod provides a set of guarantees: its deployment guarantee ensures that the containers of a pod are always placed on the same node. This co-location enables useful patterns, such as containers communicating synchronously or asynchronously over localhost, through inter-process communication, or via the local file system.
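To make the deployment guarantee concrete, here is a minimal sketch of a pod manifest with two containers. All names and images are hypothetical, chosen only for illustration; the point is that both containers are guaranteed to land on the same node and can share a node-local volume (and localhost):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: colocated-demo        # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx:alpine       # serves content on localhost:80
    volumeMounts:
    - name: shared
      mountPath: /usr/share/nginx/html
  - name: sidecar
    image: busybox:1.36
    # Writes a file every few seconds into the shared volume,
    # which the app container then serves.
    command: ["sh", "-c", "while true; do date > /data/index.html; sleep 5; done"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    emptyDir: {}              # node-local scratch space visible to both containers
```

Because the scheduler places both containers together, the sidecar can hand data to the app through the file system alone, and either container could also reach the other over localhost.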
The extensible control-loop mechanism, combined with the generic characteristics of the pod, enables Kubernetes to handle diverse workloads, including serverless. Let us dive into these diverse workloads and see what the suitable use cases for each are.
To discuss the traits of serverless, I will use the definition from the serverless working group of the Cloud Native Computing Foundation (CNCF), as it is one of the most widely agreed definitions across many different software vendors:
Serverless computing refers to the concept of building and running applications that don’t require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and executed, scaled, and billed in response to the exact demand needed.
If we look at this definition as developers who write code that will benefit from a serverless platform, we could summarize serverless as an architecture that enables “running finer-grained functions on demand without server management”. Serverless is usually considered from the developer’s perspective, but there is another, less discussed angle: every serverless platform has a provider who manages the platform and the servers. The provider has to provision and manage coarse-grained compute units, and its platform incurs costs 24×7, regardless of demand.
The definition refers only to “serverless computing”. But an architecture is composed of compute and data combined. A more complete definition of serverless architecture is one where serverless compute and serverless data go hand in hand. Typically, such applications incorporate cloud-hosted services to manage state and generic server-side logic, such as authentication, API gateways, monitoring, alerting, logging, etc. We typically refer to these hosted services as “backend as a service” (BaaS); think of services such as SQS, SNS, API Gateway, DynamoDB, CloudWatch, etc. In hindsight, the term “serviceful” rather than “serverless” could have been a more accurate description of the resulting architecture. But not everything can be replaced with a third-party service; if that were the case and there were a service for your business logic, you wouldn’t be in business! For this reason, a serverless architecture usually also has “functions as a service” (FaaS) elements that allow the execution of custom, stateless compute logic triggered by events. The most popular example here is AWS Lambda.
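The FaaS idea can be sketched as a single stateless function in the AWS Lambda Python style. The event shape below is an assumption for illustration; a real function would receive whatever payload its trigger (an API gateway, a queue, etc.) delivers:

```python
import json


def handler(event, context):
    """Stateless function: the output is derived entirely from the input event.

    The platform invokes this once per event, scales the number of concurrent
    instances with demand, and bills per invocation.
    """
    # 'name' is a hypothetical field used only for this example.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Note that the function keeps no state between invocations; anything it needs to remember would live in a BaaS component such as DynamoDB.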
A complete serverless architecture is composed of BaaS and FaaS, with no notion of servers from the consumer/developer point of view. No servers to manage or provision also means consumption-based pricing (rather than provisioned capacity), built-in autoscaling (up to a limit), built-in availability and fault tolerance, built-in patching and security hardening (within the platform’s support-policy constraints), monitoring and logging (as additional paid services), etc. All of that is consumed by serverless developers and provided by serverless providers. Serverless computing frameworks will inevitably experience the same hype cycle that has afflicted every other emerging technology: irrational exuberance will be quickly followed by disillusionment.