How Serverless Computing Overcomes the Hardships of Microservices

There has been a lot of excitement about microservices, and rightly so. Their benefits have been explained at length elsewhere, so they will not be repeated here.
But implementing a microservices architecture brings many pain points, and traditional computing technologies (virtual machines and containers) have not been much help in removing them. Thankfully, the serverless computing model is designed from the ground up to be "microservices-first", and helps overcome the hardships that microservices inflict on teams working in traditional computing environments.
But why are microservices so hard to implement?
At first glance, microservices just bring a lot of development and management overhead. When a single monolithic service is broken into multiple microservices, a simple function call to another module in the same service has to be replaced with a network request of some kind. This brings many potential issues with it:

  • What if a dependency cannot be reached over the network? Every service has to be able to gracefully handle this case or else the entire system becomes more brittle.
  • What about the added latency of the request and response? What was just a function call in the same process is now a network request potentially having to travel across an entire data centre, or even the world.
  • What if a new version of a microservice is deployed that behaves in a way its dependents do not expect? A lot of care must be taken to ensure that one service does not end up breaking its dependents.
  • A system that used to run on one or two servers is now many services, each needing its own server. This increases the cost of hosting, as well as the work needed to run and monitor a more complex system.
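The first two bullets above describe the defensive code each service ends up needing around what used to be a plain in-process call. The sketch below is a minimal, hypothetical illustration (function names, retry counts, and timings are illustrative) of retrying a flaky dependency and degrading gracefully when it stays down:

```python
import time

def call_with_fallback(request_fn, fallback, retries=2, backoff_s=0.1):
    """Call a remote dependency, retrying on transient failure and
    returning a fallback value if it stays unreachable."""
    for attempt in range(retries + 1):
        try:
            return request_fn()  # e.g. an HTTP call made with a timeout set
        except (ConnectionError, TimeoutError):
            if attempt == retries:
                return fallback  # dependency is down: degrade, don't crash
            time.sleep(backoff_s * (2 ** attempt))  # exponential backoff

# Simulate a dependency that fails once, then recovers.
calls = {"n": 0}
def flaky_dependency():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("network blip")
    return {"status": "ok"}

print(call_with_fallback(flaky_dependency, fallback={"status": "degraded"}))
```

None of this logic exists when the dependency is a function in the same process; it is pure overhead introduced by the move to microservices.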

Proponents will argue that the benefits outweigh any issues these difficulties cause, and they are right. Thankfully, the serverless computing model solves these issues with very little effort.

  • Serverless functions are small, with no HTTP request-handling middle layer, so they are fast to invoke. This offsets the added overhead of having to invoke separate services over a network.
  • On AWS at least, serverless functions are never overwritten when deployed; instead, a new version is created. Every deployed version remains available, so one can deploy a breaking change confident that the old version is still being used by dependents that are not yet compatible with the new one.
  • With the serverless computing model you only pay for what you use, regardless of how many microservices you deploy. The service provider also takes care of all the hosting, and will probably do a better job of it than most teams could. This removes the cost and operational complexity of breaking a service down into microservices.
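To illustrate the first point, here is a minimal sketch of how small such a function can be, written in the style of an AWS Lambda handler (the names and event shape are illustrative): a single function receives an event and returns a response, with no web server or routing layer of its own.

```python
import json

def handler(event, context=None):
    """A whole 'microservice': one function, no web server, no
    request-routing middle layer. The platform handles invocation."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, invoking it is just a function call again:
print(handler({"name": "serverless"}))
```

Because the unit of deployment is a single function, the per-service overhead that makes traditional microservices expensive to spin up largely disappears.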

Having to make services resilient and expect failure in their dependencies is something that cannot be removed by a different cloud computing model; it requires a change of mentality by developers. However, consider how we have adapted to accept the overhead of writing tests for our code and holding planning meetings for our requirements. This is just another skill to be taught. By making monolithic applications infeasible to deploy, the serverless model will force the development community to embrace this design-for-failure approach instead of avoiding it, until it just becomes another required skill for software developers to learn.