From client-server, to servers in internet data centers, to cloud computing, and now… serverless.
Cloud computing enabled organizations to move their infrastructure spend from CapEx to OpEx: companies could now rent their infrastructure instead of investing in expensive hardware and software.
When you rent infrastructure on the cloud, you are still committed to instances, virtual or dedicated. Auto-scaling was probably the first move towards on-demand capacity, letting you spin up server instances as and when demand went up. This was effective; however, the minimum unit of increment was a server instance.
In parallel, there was another movement exploring whether something more could be done than virtualizing instances on a piece of hardware, because there was still the task of managing and monitoring those instances. This paved the way for application containers: instead of creating a separate OS instance for each application, containers provide secure, isolated spaces for applications to run in while broadly sharing the OS resources.
Enter serverless computing
By adhering to some basic rules, services and applications can be deployed onto serverless platforms. The infrastructure creates a container, executes the code, and cleans up afterwards, all based on demand. Of course, this is a significantly simplified explanation; the underlying systems are far more complicated.
This eliminates the need to manage dedicated servers or containers. Instead, you are billed for the time and resources your application consumes to fulfill each request. Some of the most popular serverless offerings are AWS Lambda and Google Cloud Functions. The freedom to scale on demand, and the elimination of load and traffic estimation, led to the massive adoption of serverless architectures. If things failed, it was NOT due to provisioning and capacity.
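To make the model concrete, here is a minimal AWS Lambda-style function in Python. The platform invokes the handler once per request; the payload field and response shape below are illustrative, not part of any required schema:

```python
import json

# Minimal AWS Lambda-style handler: the platform spins up a container,
# calls this function once per request, and may reuse or discard the
# container afterwards. 'event' carries the request payload.
def handler(event, context):
    name = event.get("name", "world")  # illustrative payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

There is no server to provision or patch: you upload the function, and you are billed only for the invocations it serves.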
Having said this, one must tread cautiously when going in for a serverless architecture. Some of the things to pay special attention to are:
Cold start latency impacts customer experience – Code instances spin down when not used for some time, and the subsequent cold start affects application response time. These are called cold instances, as opposed to warm instances that are ready to run and handle service requests. This is fine for most normal applications, but mitigation measures need to be in place in use cases where the frequency of calls is low or erratic and the response time is critical.
Most providers offer a concurrency control that lets you set the number of warm containers available to handle requests. These minimum containers are kept fired up and live so that they are ready to respond immediately. Exercise caution while configuring this, as the containers are billed irrespective of usage.
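Besides provider-managed concurrency controls (such as AWS Lambda's provisioned concurrency), a common do-it-yourself pattern is to have a scheduler ping the function periodically so its container stays warm. The ping payload and handler below are a sketch under that assumption, not a provider API:

```python
# Hypothetical keep-warm pattern: a scheduled event (e.g. a periodic
# CloudWatch Events rule) pings the function; the handler returns early
# on ping events so the container stays warm without running real logic.
def handler(event, context):
    if event.get("keep_warm"):  # assumed field in the scheduled ping payload
        return {"status": "warm"}
    return {"status": "handled", "body": process(event)}

def process(event):
    # Placeholder for the actual business logic.
    return event.get("payload", "")
```

Note that keeping containers warm, by either method, trades idle cost for latency; measure whether your traffic pattern actually justifies it.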
Serverless architecture appropriate for the complexity and scale – If you have a very simple app that does not require significant performance scalability, then a serverless architecture could be more expensive in terms of development and maintenance. In the total cost of ownership, people costs almost always tower over infrastructure costs, and manpower for server-based architectures is more readily available than for serverless, given the recency of the technology.
Factor in the limited options to use open source – Using open-source modules and libraries can be a considerable cost and time saver, but many of these are better suited to traditional, dedicated environments, given how recently serverless architectures became popular. There is no “serverless compliant” tag you can filter by, so you will probably have to rely on community recommendations and experiences when selecting these libraries or modules.
Debugging and monitoring are typically more complex – Setting up local environments to simulate production can be challenging. Though monitoring is provided at a macro level, detailed monitoring is far more complicated than in a dedicated app environment. However, you can achieve the same logging functionality with some discipline and a little caution. Use log levels so that log entries are controlled. Provide context, since most of your logging will be centralized and you will need to filter each function’s logs. Use a service like CloudWatch (on AWS) or Cloud Monitoring (on GCP) to centrally collect and analyze your logs and act on them.
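The discipline above can be sketched with Python's standard logging module: emit one JSON line per entry carrying the function name and request id, so a centralized tool can filter per function. The function name and context fields are illustrative:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)  # raise to WARNING in production
logger = logging.getLogger("orders")     # hypothetical function name

def format_entry(message, **context):
    """Build a single JSON log line carrying per-function context."""
    return json.dumps({"message": message, **context}, sort_keys=True)

def handler(event, context):
    # aws_request_id exists on the AWS Lambda context object; fall back
    # to a placeholder when running locally.
    request_id = getattr(context, "aws_request_id", "local")
    logger.info(format_entry("request received",
                             function="orders", request_id=request_id))
    return {"status": "ok"}
```

Structured JSON entries like these can then be queried centrally, for example with CloudWatch Logs Insights on AWS.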
On-premise or hybrid architectures are complex & expensive – Serverless platforms rely on sophisticated infrastructure management systems that are typically proprietary to the providers. If you require an on-premises deployment, especially for certain critical systems, the infrastructure could be much more complex to set up and manage and will probably cost much more; in these cases, it is best to avoid serverless solutions. If on-premises is not really a criterion, then the most common practice is to use a development environment on the cloud itself, with a service such as AWS CloudFormation or Google Cloud Deployment Manager to enable your CI/CD process.
Lock-in to the provider – Serverless applications require a certain level of provider-specific adherence, so moving providers generally requires porting not just the applications but also the data and other extensions. It is rare for companies to move providers, but there is definitely a cost if it comes to that. Before you choose a provider, it is important to evaluate the platform's capabilities against the current and future requirements (up to a year or two out) of the application.
Additional security considerations are required – Given that your application is hosted on the cloud and accessible over common channels, security measures and systems must be carefully implemented and adhered to. With a serverless architecture, this becomes a little more complex, as additional steps need to be taken. Some of the things to watch out for are insecure configuration, insecure third-party dependencies, DDoS attacks, inadequate monitoring, etc. Engaging the services of security consulting companies specializing in serverless security is also an option.
Danger from runaway costs – Serverless billing can be complex for businesses to estimate. The billing metric is typically the product of allocated memory and function execution time, usually plus a per-request charge. But these parameters are not always simple to estimate, since they change with the data the application processes. This can be a cost saving or a bottomless pit. Beyond estimation techniques and methods, one additional measure is to track the billing on a weekly basis so you can preemptively take corrective action.
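A back-of-the-envelope estimator makes the memory × duration metric tangible. The rates below are illustrative (roughly AWS Lambda's published x86 prices at the time of writing); check your provider's current pricing before relying on any numbers:

```python
# Illustrative pay-per-use rates; verify against your provider's price list.
GB_SECOND_RATE = 0.0000166667    # $ per GB-second of compute
REQUEST_RATE = 0.20 / 1_000_000  # $ per request

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate monthly cost: compute (GB-seconds) plus request charges."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * GB_SECOND_RATE + invocations * REQUEST_RATE

# 5M requests/month, 200 ms average duration, 512 MB memory:
print(round(monthly_cost(5_000_000, 200, 512), 2))  # → 9.33
```

Note how sensitive the result is to average duration: if the same workload's data grows and duration doubles, the compute portion of the bill doubles with it, which is exactly why weekly billing checks are worthwhile.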
While all these potential risks exist, serverless architecture is here to stay. Getting an expert opinion on the solution can give you a better understanding of the advantages and reduce the risks significantly.