Serverless Computing, Simplified!

Rajesh Dangi / May 23, 2018

No kidding, serverless is not actually serverless: it still needs underlying infrastructure such as compute, memory, and storage, but the pain of managing that infrastructure and its capacity is entirely outsourced to your cloud service provider. Serverless is thus all about the compute runtime, used for modern distributed applications that significantly or fully incorporate third-party, cloud-hosted services to manage server-side logic and state, and most of the time it does not even provide storage (a persistent cache fed by microservices or neighbouring pods largely replaces the working data set and any stickiness to transactions).


The concept was floated in line with NoOps (no operations): the idea that an IT environment can become so automated and abstracted from the underlying infrastructure that there is no need for a dedicated in-house team to manage hardware and the associated software toolchains. Because everything operates statelessly, you as a developer need considerably more structural overhead to make your application 'flow'. If you have to save a variable and pass it to another function, you cannot keep it in system memory; you have to store it in a database or another table and then 'chain' the events together.
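The chaining described above can be sketched in Python. This is a minimal illustration, not any platform's API: a plain dictionary stands in for the external store (a real system would use DynamoDB, Redis, or similar), and the function names are invented for the example.

```python
# Minimal sketch of chaining stateless functions. A dict stands in for an
# external key-value store (e.g. a database or cache); nothing is kept in
# function-local memory between invocations.
store = {}

def validate_order(event):
    # A stateless function cannot hold 'order' in memory for the next
    # step, so it persists the intermediate result to the external store.
    order = {"id": event["id"], "total": event["total"],
             "valid": event["total"] > 0}
    store[order["id"]] = order
    return order["id"]

def charge_order(order_id):
    # The next function in the chain reloads the state it needs.
    order = store[order_id]
    if order["valid"]:
        order["status"] = "charged"
    store[order_id] = order
    return order["status"]

order_id = validate_order({"id": "o-1", "total": 25.0})
print(charge_order(order_id))  # state flowed through the store, not memory
```

The two functions only communicate through the store, which is exactly the structural overhead the paragraph above refers to.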


Most serverless vendors offer compute runtimes, also known as function-as-a-service (FaaS) platforms, which execute application logic but do not store data. FaaS is a very recent development: AWS Lambda launched in late 2014, followed by Google Cloud Functions, Microsoft Azure Functions, and IBM/Apache's OpenWhisk (open source) in 2016, and Oracle Cloud Fn (open source) in 2017, all available for public use. FaaS capabilities also exist in private platforms, as demonstrated by Uber's Schemaless triggers. The use cases exist, and exist at scale; the models, engineering, and deployments vary, but the underlying principle remains the same, and availability and scaling are solved by design.




So, What's the big deal?


Serverless brings a new paradigm to the way applications are built and deployed: what we saw in the virtualization space, followed by microservices and DevOps, is now getting granular at the code base, i.e. the functional level. In his well-known book "Building Microservices", Sam Newman writes: "Within a monolithic system, we fight against these forces by trying to ensure our code is more cohesive, often by creating abstractions or modules. Cohesion – the drive to have related code grouped together – is an important concept when we think about microservices." This is reinforced by Robert C. Martin's definition of the Single Responsibility Principle, which states:


"Gather together those things that change for the same reason, and separate those things that change for different reasons."


So if a class (classes are, in essence, collections of functions that operate on common variables) combines multiple functions that change for different reasons, any change impacts the entire class rather than just the specific function that needs it. For example, in a supply chain application, billing a transaction, printing it, and storing it can be done by different functions, yet all three are tied to the same class because they share the same variables. Now a change in the tax regime requires changes to billing, the report format may change depending on the entities involved, and storage may change if the database schema undergoes a design refresh. All these changes are independent and should be made in isolation, yet they touch the same class and share the same code-line space.
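The coupling in that supply-chain example can be made concrete with a short Python sketch. All class, function, and field names here are invented for illustration: one class mixes the three concerns, so a tax change, a report-format change, and a schema change all land in the same file, while the standalone functions below it can each evolve (and, in a FaaS model, deploy and scale) on their own.

```python
# Illustrative sketch (invented names): one class couples three concerns
# that change for different reasons, against the Single Responsibility
# Principle quoted above.
class Transaction:
    def __init__(self, amount):
        self.amount = amount

    def bill(self, tax_rate):       # changes when the tax regime changes
        return self.amount * (1 + tax_rate)

    def print_report(self):         # changes when the report format changes
        return f"Transaction: {self.amount:.2f}"

    def store(self, db, key):       # changes when the database schema changes
        db[key] = {"amount": self.amount}

# The same responsibilities separated into standalone functions, each of
# which can be modified, tested, and deployed in isolation.
def bill(amount, tax_rate):
    return amount * (1 + tax_rate)

def print_report(amount):
    return f"Transaction: {amount:.2f}"

def store(db, key, amount):
    db[key] = {"amount": amount}
```

In the serverless model discussed next, each of the standalone functions could become its own independently deployed unit.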



Now imagine a microservices design principle applied by embracing fine-grained microservice components: the same code can deliver results faster and reduce stickiness to servers, databases, and tiered north-south or east-west workflow paths, truly liberating teams to embrace newer technologies. Microservices thus give us significantly more freedom to react and make dynamic decisions, allowing us to scale elastically and interact via APIs rather than through linear, hardcoded connections between application modules and databases. Serverless changes the game even further: it liberates these functions from the encapsulating microservices so that they react to triggers, execute on demand, and stay dormant until the next call to action is triggered by transaction behaviour. Microservices are small, but not as small as serverless functions. Key benefits of a serverless architecture include automatic scale-up and scale-down in response to current load, a free choice of language, service, or mix, and a cost model that charges only for the milliseconds of compute time actually used.



"The distributed serverless computing platform is able to execute application logic (Actions) in response to events (Triggers) from external sources (Feeds) or HTTP requests governed by conditional logic (Rules). It provides a programming environment supported by a REST API-based Command Line Interface (CLI) along with tooling to support packaging and catalog services" – an apt description from Apache OpenWhisk, an open-source distributed serverless cloud platform.
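As a rough sketch of the action/trigger model that description outlines, a FaaS action is just a function the platform invokes per event. The handler signature below follows the common Lambda-style `(event, context)` convention, but the event shape and names are illustrative, not tied to any one platform.

```python
import json

# Minimal sketch of a FaaS-style action: the platform calls it once per
# trigger (an HTTP request or feed event) and it stays dormant otherwise.
def handler(event, context=None):
    # 'event' carries the trigger payload; no state survives between
    # invocations, so everything needed must arrive in the event.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation, simulating what the platform does on each trigger.
response = handler({"name": "serverless"})
print(response["body"])
```

The platform, not the application, decides when and how many copies of this function run, which is where the automatic scaling described above comes from.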


Redefining the computing horizon


As the gap in management and scaling between serverless FaaS and hosted containers narrows, the choice between them may come down to the style and type of application. With serverless FaaS, scaling is automatic, transparent, and fine-grained, tied in with automatic resource provisioning and allocation. Kubernetes offers horizontal pod autoscaling, but serverless provides this behaviour by default without any additional configuration, changing the rules of the game. FaaS is seen as the better choice for an event-driven style with few event types per application component, while containers suit synchronous, request-driven components with many entry points.


Our understanding of how and when to use serverless architectures is still in its infancy: teams are throwing all kinds of ideas at serverless platforms to see what works and what does not, constrained by limitations in tooling, distributed monitoring, remote debugging, state management, and deployment, packaging, and versioning. How one service can call the correct version of another, and how to orchestrate multiple distributed tools at runtime with correct versioning, granular monitoring, and integration-test execution, remain blind spots.


Serverless may well not be the correct approach for every problem, so it is hard to say that it will replace all of your existing architectures or legacy applications, or even become the settled trend for the majority of use cases. Many advances are happening in the field, and in the near future it will be fascinating to see how serverless fits into our architectural toolkit.


Benefits & Trade-offs

Let us quickly summarize the benefits and trade-offs of serverless computing. Since these are use-case driven, a few points are subjective and may become obsolete as diverse, dynamic feature evolution and convergence bridge some gaps or fade out some technologies.


  • Manageability Outsourced - A serverless architecture allows developers to focus on the code that matters to the application rather than on surrounding, undifferentiating aspects such as designing for scalability, high availability, OS management, and capacity management.
  • Low Cost & Faster Time-to-Market - Lower investment is required to conceive and build a new application, since reusability and commonality are fundamental tenets. Subscription charges are also lower, because only the compute resources that actually run are billed; you pay nothing while the code is not executing.
  • Reduced Deployment Complexity - Packaging and deploying a simple application for a serverless framework involves a compile and an upload, far simpler than traditional packaging and deployment of an entire server, with OS versions, patches, and runtime tool environments to manage.
  • Improved User Experience, Geolocation, and Reduced Latency - Serverless can become a standard approach to offloading functions so they execute closer to end users and devices. Providers have points of presence near every user, so apps perform equally well for everyone.


  • Third-Party Vendor Lock-in - A major trade-off of a serverless architecture is the dependence on third-party vendors and services, leading to a lack of control over concerns such as system downtime, unexpected limits, cost changes, loss of functionality, a limited choice of component upgrades, and forced API upgrades.
  • The Hidden Costs - While serverless may reduce operations costs, development costs and application complexity increase. Projects that in the past would have been developed by a small-to-medium team now require many small teams and significant product- and team-management overhead, since each function and its runtime code is managed separately while coherence of the workflow, data management, and runtime statefulness limitations must still be handled.
  • Increased Development and Testing Complexity - A serverless architecture brings the complexity of splitting a single application into a fabric of services and functions, and this complexity grows significantly with the number and variety of services. Likewise, running integration tests and performing load testing against a serverless platform is hard, since test scenarios depend on externally provided systems.


In summary, serverless architecture has set the stage for a game-changing shift, even as it revives the vendor lock-in concerns of the early days of software development, when open source was relatively unknown and organizations had to rely completely on vendors for their offerings. We will witness a large-scale transition as more and more reference use cases of serverless architecture turn into actual running implementations and start addressing business realities at the scales and volumes one expects of a turning point. Needless to say, the possibilities are endless.