Serverless came of age in 2014 with the release of AWS Lambda. Since then, interest in FaaS and serverless has grown steadily – however, it’s important not to treat the two as interchangeable terms.
While there are many reasons for adopting serverless, a less-discussed question is why enterprises should run serverless on Kubernetes. Running serverless workloads outside of Kubernetes creates vendor lock-in, an inability to leverage existing data, and poor DevOps productivity. Martin Fowler’s research on serverless architectures illustrates the benefits of serverless on Kubernetes.
This column covers the reduction in operational costs that comes with going serverless in general, and then explains why it’s advantageous for enterprises to run serverless on Kubernetes.
Operational cost reduction of all kinds
Serverless technology helps reduce costs along multiple axes, the most significant being compute costs and development time. Reducing the cost of computing is the most obvious, celebrated, and meaningful benefit of serverless.
Horizontal scaling is also cost effective because you offload server-side management to a cloud vendor. The vendor then leverages economies of scale, leaving you responsible only for the compute you actually use. This can be a blessing for newer services with little load, or for those with erratic traffic patterns; an owned or managed server would sit idle for long stretches and still end up on your balance sheet.
When you’re free from server-side packaging and management, deploying applications becomes much simpler. You no longer have to worry about Puppet/Chef or figuring out your container story. That said, as serverless workloads become part of larger and more complex architectures, the same approach no longer holds up. Instead, leverage Kubernetes to run your serverless workloads while retaining the compute benefits.
To make the case for enterprises running serverless functions on Kubernetes, let’s walk through the specific advantages of doing so.
Avoiding vendor lock-in
While an isolated serverless application can run on a single cloud provider, large organizations often pursue hybrid and multi-cloud strategies so they can leverage the best features of each cloud while also mixing in owned and managed servers. Depending on the exact environment, running serverless workloads on Kubernetes makes it possible to use whatever combination you want.
Leveraging existing services and data
If you’re a larger institution, the more you can leverage your existing systems and corresponding data, the better equipped your greenfield products and applications will be. Running where the rest of your application is operating will only help with that. This is where technologies such as ACI and AWS Fargate are comparable to, but not quite the same as, a Kubernetes-run serverless application. A serverless container like Azure Container Instances is raw infrastructure. While it’s a great way to easily run several containers, Brendan Burns explains in The New Stack why building complicated systems requires an orchestrator to introduce higher-level concepts, like services, deployments, secrets, and more.
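To make Burns’s point concrete, here is a minimal sketch of the kind of higher-level objects Kubernetes layers on top of raw containers – a Deployment that keeps replicas running, a Service that gives them a stable endpoint, and a Secret for configuration. The objects are expressed as plain Python dicts mirroring Kubernetes manifest structure; all names and the image reference are hypothetical.

```python
# Hypothetical manifests showing the higher-level concepts an orchestrator
# introduces; a raw container service gives you none of these abstractions.

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",              # keeps N replicas of the container running
    "metadata": {"name": "fn-handler"},
    "spec": {
        "replicas": 2,
        "selector": {"matchLabels": {"app": "fn-handler"}},
        "template": {
            "metadata": {"labels": {"app": "fn-handler"}},
            "spec": {
                "containers": [
                    {"name": "fn", "image": "registry.example.com/fn:latest"}
                ]
            },
        },
    },
}

service = {
    "apiVersion": "v1",
    "kind": "Service",                 # stable network endpoint in front of the replicas
    "metadata": {"name": "fn-handler"},
    "spec": {
        "selector": {"app": "fn-handler"},
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}

secret = {
    "apiVersion": "v1",
    "kind": "Secret",                  # configuration the function shouldn't bake into its image
    "metadata": {"name": "fn-credentials"},
    "stringData": {"API_TOKEN": "placeholder"},
}
```

With raw infrastructure, wiring these concerns together (replication, routing, credential injection) is left entirely to you; Kubernetes provides them as first-class objects.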
At this point, there’s no debating that Kubernetes has won the orchestration war. Developers use it daily to build and scale large software systems. Marrying serverless and Kubernetes helps users quickly understand how serverless works in the context of their own reality, lets them leverage existing logging and monitoring setups, and improves their troubleshooting skills.
An important caveat is that operationalizing serverless is non-trivial from a DevOps perspective, especially with regard to logging and monitoring, given that it’s stateless. Kubernetes specialists can bring their chops and tooling to bear and make a bad situation better. Using serverless with Kubernetes ensures that a large portion of your developers benefit, rather than a small minority.
The value of running serverless on Kubernetes has become well understood among vendors and open-source providers. There are various solutions on the market that can help run serverless workloads on top of Kubernetes, and the jury is still out on which will emerge victorious. Some projects that come to mind are Apache OpenWhisk, Knative by Google (recently announced at Google Next), OpenFaaS, Virtual Kubelet, Kubeless, Fission, and the list goes on. It’s important to note that these projects are not apples-to-apples comparisons, as they each try to solve different aspects of the central issue.
While I can’t predict which technology will become the de facto standard, there is good news. GitLab is focused on helping enterprises get from idea to production as fast as possible using a single application. That effectively means we will leverage the best open-source technologies to incorporate serverless to run on Kubernetes with GitOps.