JAX DevOps Blog

Post-container world

23 Jan 2019

This article aims to cut through the confusion that many users face today when considering serverless as a solution strategy for their organizations. It examines the benefits and common use cases for serverless computing, as well as situations where serverless is not well suited, and considerations in choosing a cloud platform. Armed with this information, architects and developers can begin to ask the questions that will contribute to successful serverless adoption.

Serverless computing is becoming increasingly popular in cloud-native development because it offers many advantages to an organization in terms of cost, scalability, and agility. But serverless is not a silver bullet that fits every use case: you need to know when to use it and when not to. Moreover, there are trade-offs to weigh when deciding between a major public cloud and an open source alternative for your serverless platform.

Defining serverless

Many developers and teams are early adopters who are already using serverless functions deployed on a cloud vendor’s platform, whether to handle routine internal tasks or as an architectural component of a larger solution. For the wider group of users not yet taking advantage of serverless computing, it is good to begin by agreeing on some terms.
According to the Cloud Native Computing Foundation (CNCF), serverless computing refers to the concept of building and running applications that do not require server management. Here, applications are bundled as one or more functions and deployed to a platform that executes them on demand. In other words, computing resources are allocated to a function only when it needs to be executed. This is a major difference from the way we deploy applications today.
Some vendors also label solutions such as email services and cloud storage services as serverless, because users do not have to manage any servers. But in this article, and in general usage, the word “serverless” refers to the serverless functions described above.
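
To make the concept concrete, here is a minimal sketch of a serverless function in the style of an AWS Lambda Python handler. The event payload and response shape are illustrative assumptions; each platform defines its own function signature.

```python
# A minimal serverless function in the AWS Lambda Python style.
# The platform allocates compute, invokes the handler with the
# triggering event, and reclaims the resources afterwards; there
# is no server process for the developer to manage.
def handler(event, context):
    # 'event' carries the trigger payload; its shape depends on the
    # event source (HTTP request, queue message, file upload, ...).
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```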

Serverless benefits

Serverless was ignited by the Lambda functions first offered by Amazon Web Services (AWS) in late 2014. People saw the convenience and began trying it out on very basic use cases. Now a recognized software development approach, serverless offers three main advantages to an organization: increased agility, native scalability and availability, and infrastructure cost savings. Let’s look at each of these a little closer.

Increased agility

In current software development and deployment approaches, developers cannot simply write code and be done. They need to consider where the code is deployed and what else is required to support the deployment process. For example, developers writing a microservice to be deployed in a Kubernetes environment need to adhere to some microservice framework and then create the necessary Kubernetes artifacts to support the deployment. They also have to understand Kubernetes concepts, as well as figure out how to run the microservice and artifacts locally to test them. By contrast, with serverless functions, developers only need to worry about the business logic, which follows a certain template, and then upload it to the serverless platform. This speeds things up significantly and helps organizations ship solutions to production quickly and rapidly make changes based on feedback, increasing the agility of teams and the organization.
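
As an illustration of how little deployment ceremony is involved, the sketch below uploads a zipped handler to AWS Lambda using the boto3 SDK. The function name, role ARN, and file path are placeholder assumptions, not real resources.

```python
import boto3

# Hypothetical deployment sketch: package the handler and upload it.
# Names, paths, and the role ARN below are placeholders.
with open("function.zip", "rb") as f:
    zipped_code = f.read()

client = boto3.client("lambda")
client.create_function(
    FunctionName="hello-function",                       # placeholder name
    Runtime="python3.7",
    Role="arn:aws:iam::123456789012:role/lambda-exec",   # placeholder role
    Handler="handler.handler",       # module.function inside the zip
    Code={"ZipFile": zipped_code},
)
```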

Native scalability and availability

With traditional approaches, developers and DevOps teams have to factor in the scalability and availability of the solutions they build. Considerations typically include the expected peak load, whether an auto-scaling solution is needed or a static deployment will suffice, and the requirements for ensuring availability. Serverless platforms remove these worries: they execute the code whenever it is needed, which is what availability ultimately means, and they scale executions with demand. So scalability and availability are provided by default.

Infrastructure cost savings

When we deploy a solution in a production environment, we generally need to keep it running 24×7, since in most cases it is hard to predict the specific time periods during which it will be used. That means there are many idle periods between requests, even though we pay for the infrastructure full time. For an organization that has to deploy hundreds of services, the costs can escalate quickly. With serverless functions, computing resources are allocated to your code only while it executes, which means you pay exactly for what is used. This can cut costs significantly.
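
A rough back-of-the-envelope calculation shows why. All prices below are illustrative assumptions for the example, not quotes from any provider.

```python
# Illustrative cost comparison; every price here is an assumption.
hours_per_month = 730
vm_price_per_hour = 0.05            # assumed always-on VM price
always_on_cost = hours_per_month * vm_price_per_hour

requests_per_month = 100_000
avg_duration_s = 0.2                # average execution time per request
memory_gb = 0.5
price_per_gb_second = 0.0000167     # assumed per-GB-second function price
price_per_million_requests = 0.20   # assumed per-request price

serverless_cost = (requests_per_month * avg_duration_s * memory_gb
                   * price_per_gb_second
                   + requests_per_month / 1_000_000 * price_per_million_requests)

print(f"Always-on:  ${always_on_cost:.2f}/month")    # ~$36.50
print(f"Serverless: ${serverless_cost:.2f}/month")   # ~$0.19
```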

Common serverless use cases

The benefits from serverless are most typically seen in four types of use cases: event-driven executions, scheduled executions, idling services, and unpredictable loads or spikes.

Event-driven executions

There are many places in our solutions where we want to run a piece of code in response to an event occurring somewhere. These events do not occur continuously; they arrive at random, mostly driven by user actions. In such cases, rather than running a service forever with your code wrapped inside it, you can use a serverless function. The one requirement is that you connect the event source to the serverless platform to bridge the events and the function execution. A good example is executing image-processing code when a user uploads an image. Here, the event source is a file bucket, which must be connected to the serverless platform so that the code executes when a file is added.
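
Here is a sketch of that image example, assuming an AWS-style setup where an S3 bucket notification triggers the function; the bucket wiring itself is configured on the platform, not in the code, and the processing helper is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Invoked by the platform whenever a file lands in the bucket.
# The event layout follows the S3 notification format.
def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Download the new image and hand it to the processing logic.
        obj = s3.get_object(Bucket=bucket, Key=key)
        process_image(obj["Body"].read())   # hypothetical helper

def process_image(image_bytes):
    # Placeholder for the actual image-processing logic
    # (thumbnailing, format conversion, metadata extraction, ...).
    pass
```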

Scheduled executions

Unlike the random nature of event-driven executions, with scheduled executions we know exactly when we want to execute the code. For example, we might need to do some data processing every hour or every 30 minutes. Serverless platforms provide alarm (scheduler) services that can execute your code at the scheduled intervals, so it does not need to run continuously.
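
On AWS, for instance, a CloudWatch Events rule with a rate expression such as rate(30 minutes) can trigger a handler like the sketch below; the processing logic shown is a placeholder assumption.

```python
import datetime

# Triggered on a schedule (e.g. a CloudWatch Events rule with
# "rate(30 minutes)") rather than by a user action. Scheduled
# event payloads carry a timestamp under the "time" key.
def handler(event, context):
    run_time = event.get("time", datetime.datetime.utcnow().isoformat())
    print(f"Running scheduled data processing at {run_time}")
    process_batch()   # hypothetical data-processing helper

def process_batch():
    # Placeholder: aggregate the last 30 minutes of data, write results.
    pass
```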

Idling services

You may have services that receive requests at random intervals, with idle periods in between. Instead of keeping such a service running, you can use a serverless function that executes only when a request arrives.

Unpredictable loads or spikes

If you cannot predict the load of requests arriving at your service, traditionally there are two ways to be prepared. One is to deploy the service with enough resources to handle the peak load, but this wastes infrastructure most of the time. The other is to deploy the service with resources for the average load and autoscale. But autoscaling is not straightforward: you need infrastructure that supports it, and even then you may face architectural limitations in your service or app, such as clustering and state sharing. In these scenarios, you can leverage serverless functions; since they always scale from zero, autoscaling is a native quality.

When serverless isn’t a fit

While serverless computing supports many scenarios, there are certain use cases where it is not well suited. This is particularly true for a service that handles millions of requests per hour, that is, a continuous high load. To be sure, you could still use a serverless function to handle it, but doing so will cost a lot more than deploying it as a service.
Moreover, response times can suffer because serverless functions are subject to cold starts. Since no code is running during idle time, when a request arrives your code has to be loaded into the runtime, so the first request may be delayed a little. If you cannot tolerate that latency, you will have to use workarounds to keep the containers warm. But then you are moving toward something more like a traditional deployment.
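
One common warm-keeping workaround is a scheduled "warm-up" ping that keeps an instance loaded. Below is a minimal sketch, assuming a custom warmup flag in the event payload; the real request logic is a placeholder.

```python
# Cold-start workaround sketch. A scheduled rule invokes the function
# every few minutes with {"warmup": true}; the handler short-circuits,
# so the ping costs almost nothing while the runtime stays loaded.
def handler(event, context):
    if event.get("warmup"):
        return {"statusCode": 200, "body": "warm"}
    return handle_real_request(event)

def handle_real_request(event):
    # Placeholder for the actual request-handling logic.
    return {"statusCode": 200, "body": "handled"}
```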
Finally, serverless functions are unable to maintain state within themselves, since their existence is not permanent; they just come and go. So if state needs to be maintained, it has to live in a database, which comes at a performance cost.
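
Because a function instance may disappear between invocations, any state it needs must be externalized, for example to a key-value store. In the sketch below, the DynamoDB table name and key schema are illustrative assumptions.

```python
import boto3

# State lives outside the function, here in a DynamoDB table.
# Table name and key schema are illustrative assumptions.
table = boto3.resource("dynamodb").Table("visit-counter")

def handler(event, context):
    user_id = event["user_id"]
    # Read the previous state, update it, and write it back; the
    # function itself keeps nothing between invocations.
    item = table.get_item(Key={"user_id": user_id}).get("Item", {})
    count = int(item.get("count", 0)) + 1
    table.put_item(Item={"user_id": user_id, "count": count})
    return {"count": count}
```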

Serverless platform tradeoffs

As mentioned earlier, serverless computing was ignited by AWS Lambda, and AWS remains the leading provider of serverless functions on a public cloud platform. Google Cloud Platform, Microsoft Azure, and IBM also offer serverless functions on their public clouds. All of these platforms additionally offer supporting services for architecting serverless solutions, such as file buckets, notification services, and databases. There is value in having a complete platform and the ability to design your solution more easily. However, this comes with a significant risk of cloud lock-in, since architecting the solution around the supporting services of one cloud makes it difficult to migrate to another cloud or to your own data center.
The other option is to work with one of the open source serverless platforms available, such as Apache OpenWhisk, Kubeless, Fission, and OpenFaaS. Although open source projects allow you to set up your own serverless platform, the downside is that a substantial investment is needed to learn, set up, and maintain it. Also, unlike the public clouds, the open source alternatives lack built-in event sources, which is a problem for some use cases. The upside is that you have a lot more flexibility than with the public cloud offerings.

Other serverless adoption considerations

Serverless functions align closely with microservices, since you can take the business logic out of a microservice and easily deploy it as a function. Therefore, if you already have a microservice architecture, it is not difficult to adopt a serverless architecture. That said, developers will have to change their coding practices, because serverless functions have to be written in a specific way, following a template enforced by the serverless platform. On the other hand, they no longer need to learn different frameworks and can just write the logic, as the contrast below illustrates.
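
Here the same business logic moves from a framework-bound microservice endpoint to a platform-templated function. Both snippets are illustrative sketches, with Flask standing in for "some microservice framework" and the discount rule as a placeholder.

```python
# As a microservice endpoint (framework-specific boilerplate):
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/discount", methods=["POST"])
def discount():
    order = request.get_json()
    return jsonify({"total": apply_discount(order)})

# As a serverless function (platform template, no framework;
# the event is assumed to carry the order directly):
def handler(event, context):
    return {"total": apply_discount(event)}

def apply_discount(order):
    # The shared business logic; a placeholder implementation.
    return order.get("amount", 0) * 0.9
```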
There are still improvements needed in tooling support for serverless functions. Although there are a few good public serverless platforms, such as AWS, Google, and Azure, developers have to write their own solutions or stitch together multiple third-party tools to deploy their functions easily. Debugging is another problem, because traditional debugging techniques do not carry over to the platform-managed runtime. Integrated development environments (IDEs) are emerging to fill the tooling gap to a certain extent, but there is a long way to go.
Many organizations now turn to serverless functions as a cost-effective replacement for simple middleware solutions. For example, in the past they may have used an enterprise service bus (ESB) to read a file from a file location, transform its messages, and send them to some backend. Now they can simply use a file bucket and a serverless function to accomplish the same task. At the same time, serverless functions do not replace more advanced middleware capabilities, due to the heavy coding required to support them.
The serverless landscape continues to evolve rapidly. Most recently, it changed drastically with the introduction of Knative from Google. Until mid-July 2018, there were quite a few open source projects helping us set up our own serverless platforms, each using a different architecture to support serverless behaviour for functions. Knative arrived with the great features provided by these projects and also answered problems that they could not. Essentially, with Knative, anything can be run in a serverless manner.
Significantly, other serverless projects have only been capable of running functions in a serverless manner. Knative allows you to run your microservices, micro ESBs, micro API gateways, and so on in a serverless environment as well. So you no longer necessarily need to write functions to gain the infrastructure advantages of serverless computing; you can simply use Knative. At the same time, note that Knative is still in the alpha stage, and although it holds tremendous promise, it is not yet ready for enterprise production environments.

Conclusion

Serverless computing has become increasingly popular in cloud-native development, bringing advantages such as increased agility, native scalability support, and infrastructure cost savings. Several use cases lend themselves readily to serverless computing, and developers now have both commercial public cloud offerings and open source serverless options to choose from. Finally, while developers are using serverless functions for simple tasks today, they are moving toward more complex cases too, potentially leading to a battle with microservices. And who knows? Maybe serverless and microservices will merge as Knative starts redefining what serverless is. In either case, a solid understanding of both development approaches will provide an important foundation for future application and service development.
