The evolution to serverless and where we stand today.

Alex Boten
8 min read · Dec 4, 2018

This is an adaptation of a talk that I gave at the Squamish Technology Meetup last month. The slides are available here and the talk is available here.

Call it serverless, Function-as-a-Service or cloud functions, the serverless architecture is gaining a lot of momentum. It took me a while to grasp the power of serverless architecture. It was a little over two years ago that Kris Foster shared this article on the subject with me. After diving into it on and off for the past eighteen months, I’m convinced that this architecture solves a lot of different problems.

Simplified history of serving content

A long time ago, folks decided that it would be cool to share information. Early on, universities used networks to connect machines; this started almost fifty years ago as ARPANET. In 1989, Tim Berners-Lee released a public proposal for what later became the “world wide web”. A few years later, the information super highway was born. If you did not know this, or have forgotten, that is what news outlets used to call the Internet.

I still remember my parents and me not having a clue about this the first time I heard of it

Browsers started becoming available that could render content served by a computer somewhere else. It was really cool because, all of a sudden, anyone could serve content from their own home. Of course, this came with some limitations:

  • speeds were slow
  • when your sibling’s friends called and tied up the phone line, your server would go offline, causing a complete outage for your service
  • IP addresses would change all the time

Still, home computers were great for pet projects. But what about companies? Well, they started hosting servers in their offices or garages. The challenge was on: make that information available at all times, from anywhere in the world.

Bare-metal

This created an opportunity for companies to start offering what were called colocation services. Their mission was simple:

  • provide power
  • provide space
  • provide a network

If you were lucky enough to have a bunch of capital kicking around and a network somewhere nearby, this was a no-brainer. Of course, quite a few companies also built datacenters for their own needs with dedicated hardware.

Deploying applications on bare-metal requires a lot of work:

  • first thing is to find space, internal to your company or in a colo facility
  • buy and ship the hardware, then rack it either yourself or via professional services
  • install and configure the operating system
  • install the network tools for remote management and make sure these never fail, otherwise it’s a roundtrip to the colo. Nowadays tools like IPMI provide out-of-band access to hosts even when the operating system is down, but it wasn’t always this easy
  • install the software and configuration needed for serving the application
  • finally it’s time to install the application code

This created the need for organizations to dedicate entire teams to operating those servers.

Virtual machines

A few years later, companies that started out small and grew significantly found themselves needing to deploy and manage hundreds, and eventually thousands, of servers. Companies that started out as a small online bookstore or a search tool needed to scale significantly to keep up with the demands of their users. Folks started thinking about running multiple environments on a single host in order to:

  • improve utilization of the hardware
  • simplify the deployment and management of the servers
  • ensure the security posture of one environment wouldn’t impact another

This provided new opportunities for service providers to offer a compute platform to their users. Datacenter providers started supporting Virtual Machines (VMs). No more shipping hardware around to bring up a downed node; simply request a new instance and you’re off to the races.

Deploying applications on virtual machines is a little bit better:

  • still have to install and configure the operating system, but in a VM. Nowadays most providers support shipping a pre-built image via a variety of tools (see the sketch after this list).
  • install the software and configuration needed to serve the application
  • time to install the application code
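One such tool is HashiCorp Packer, which bakes a pre-built machine image that a provider can boot directly. This is a minimal sketch of a Packer template for AWS; the region, source AMI and installed packages are placeholder assumptions, not a recommended setup:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-west-2",
      "source_ami": "ami-0123456789abcdef0",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "my-app-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y nginx"
      ]
    }
  ]
}
```

Running `packer build` against a template like this leaves you with an image that already has the software baked in, so new VMs come up ready to serve.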

So, it is a bit simpler, but this still requires a lot of configuration and setup before being able to deploy the application code.

Containers

Fast forward a few more years. The Internet is part of everyday life; even produce has its own websites to tell you how amazing it tastes. The Internet is just absolutely everywhere. Many different avenues were investigated to simplify the management of applications at scale: virtual environments, virtual hosts within VMs, FreeBSD jails and Solaris containers/zones, to name a few. Sometime in 2008, Linux Containers made their appearance as LXC.

A few years later, Docker made its splash. Many still argue about the merits of what Docker did for containers: whether they got all the glory because of their cute mascot, because of the tooling they provided to simplify the packaging and deployment of containers, or because they invested a lot of work in building a solid open source community. Either way, before Docker, containers just hadn’t taken off.

Another opportunity arose here for infrastructure providers: they could now offer a platform that allows users to deploy containers, once again lowering the barrier to entry.

Deploying in containers looks like this:

  • create a container image that includes an immutable runtime environment for your application as well as the application code. In Docker, this is done via a Dockerfile. If you haven’t seen one, it’s a small amount of metadata that defines the contents of a container (see the example after this list)
  • build the container and deploy it via a container orchestrator or a Platform-as-a-Service. Any number of Platform-as-a-Service providers will allow you to deploy your application in minutes
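For the curious, here is a minimal sketch of a Dockerfile for a small Python application; the file names and port are made up for illustration:

```dockerfile
# Start from an immutable, versioned runtime environment
FROM python:3.7-slim

WORKDIR /app

# Install the application's dependencies first (hypothetical requirements.txt)
COPY requirements.txt .
RUN pip install -r requirements.txt

# Add the application code itself (hypothetical app.py)
COPY app.py .

# Document the port the application listens on (an assumption)
EXPOSE 8080

# Run the application when the container starts
CMD ["python", "app.py"]
```

Build it with `docker build -t my-app .` and the resulting image can be handed to any orchestrator or Platform-as-a-Service.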

Containers proved it was possible to reduce the scope of what developers needed to worry about when deploying applications. They provided a simpler interface to shipping code directly into production. They also made it possible for platform providers to get creative. Platforms could improve the scalability of users’ applications. But what if developers could focus on even less?

Serverless

With serverless, it’s finally possible for application developers to really focus on the application code itself. In essence, a developer only needs to write an event handler in a single file of code to take advantage of serverless.

Example using AWS Lambda
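A minimal sketch of such a handler in Python might look like the following; the greeting logic and event fields are made up for illustration, but `lambda_handler(event, context)` is the conventional entry point for Python on Lambda:

```python
import json

def lambda_handler(event, context):
    # 'event' carries the payload that triggered the function;
    # 'context' carries runtime details such as the request id.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"})
    }
```

That single function is the entire deployable unit: no web server, no process manager, no operating system to configure.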

This code is then deployed by the platform provider into a runtime environment that supports the specific language. Providers already support a variety of languages.

Deploying to serverless is as simple as writing application code:
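To make that concrete, a sketch using the AWS CLI to ship the handler above might look like this; the function name, IAM role ARN and file names are placeholders:

```bash
# Package the single-file handler into a zip archive
zip function.zip handler.py

# Create the function, pointing at an IAM role you have already set up
aws lambda create-function \
  --function-name hello \
  --runtime python3.7 \
  --role arn:aws:iam::123456789012:role/lambda-role \
  --handler handler.lambda_handler \
  --zip-file fileb://function.zip

# Invoke it with a test event
aws lambda invoke --function-name hello \
  --payload '{"name": "Squamish"}' response.json
```

No operating system to install, no fleet to manage; the provider takes care of everything below the function.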

ServerLess means…

  • Less code: it’s possible to ship the entire code in a single file. As long as you’re developing in a runtime that’s supported, you’re good to go.
  • Less servers: application developers no longer have to think about where their code is running. This reduces the amount of configuration that needs to be managed outside of the application itself.
  • Less time to ship: serverless offers the possibility to deploy code in seconds. It reduces the investment of time required to test theories and deliver value.
  • Less expensive: the cost of serverless is calculated per function invocation. You only pay for what you use. This also means that you no longer need to run all those processes when your application isn’t in use.
  • Less time to react: scaling functions is much simpler for both application developers and platform providers. Functions respond to events, the more events, the more functions can be launched.

More challenges…

  • More orchestration: the complexity of the overall system doesn’t go away because we’re using serverless. Instead of a single complex application, we now have many simple functions in a complex distributed system.
  • More latency: because applications are formed of tens or hundreds of microservices, the latency between each one has to be accounted for. Expect the network to fail.
  • More integration testing: testing the entire system becomes more complex. In many cases, it’s nearly impossible to replicate a full system in a development environment.
  • More observability: understanding the overall health of a distributed system is a complex problem. This isn’t a problem that’s specific to serverless architecture, but it’s definitely exacerbated by it. Configuring monitors, metrics, tracing and event logging becomes a necessity in order to identify problems in production.

Function-as-a-Service providers

There are many ways to get started with serverless code. The easiest is to sign up with one of the cloud providers. Whether you’re familiar with node, golang or python, most languages are already supported. Last week, Amazon announced the ability for users to bring their own runtime, which increases the flexibility of serverless yet again. Here are just a few of the providers, each with its own getting-started guide: AWS Lambda, Google Cloud Functions, Azure Functions and IBM Cloud Functions.

This will save you some time setting up an environment. Each provider has a wealth of information on getting started if you just search for it. Most cloud providers have a “f*cking around”, “no idea what I’m doing” or “free” tier to allow users to get started at no cost.

Go deeper

If you’re looking to dive a little deeper into the stack, there are a number of open source projects that will let you set up your own environment, such as OpenFaaS, Apache OpenWhisk and Knative.

Tradeoffs

In the end, there are always tradeoffs with any technology choice, and serverless is no different. Serverless is just another tool. Application developers have to ask themselves whether or not it makes sense for their use-case. I definitely think it solves many of them.
