What's the difference between traditional infrastructure designs and serverless architectures?

Kobe
5 min read · Nov 17, 2021


In this article, I want to share with you the key concepts of the AWS Serverless Platform and explain a little bit about the differences between traditional infrastructure designs and serverless architectures.

AWS Serverless

Start to build a simple Web Service

So let's go back to the beginning, when you want to start building something like a web service where clients talk to your service over HTTP/HTTPS. If you were to start building this, you might start with a really simple architecture where you just deploy a web application on top of a single server, such as an Amazon EC2 instance. You'd probably also set up a database of some sort to persist data, but overall this is a pretty basic deployment model to start with.
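The "version 1" shape of that architecture can be sketched in a few lines: one web app process backed by a simple data store. Everything here (the port, the in-memory SQLite table, the JSON response) is illustrative, not from the article.

```python
# Minimal sketch of the single-server web service: one HTTP process
# plus "a database of some sort" (an in-memory SQLite table here).
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE visits (path TEXT)")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Persist something per request, then answer over HTTP.
        conn.execute("INSERT INTO visits VALUES (?)", (self.path,))
        count = conn.execute("SELECT COUNT(*) FROM visits").fetchone()[0]
        body = json.dumps({"path": self.path, "total_visits": count}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# Port 0 asks the OS for any free port; serve_forever() would then
# block and handle requests until the process is stopped.
server = HTTPServer(("", 0), Handler)
```

Every request in this model lands on the same process and the same database, which is exactly why the scaling questions below come up.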

Simple web service version 1

So now the application's deployed, but what happens when you start to get more users and more traffic? You can scale vertically to some extent, but eventually you're going to outgrow the capacity of a single instance.

Add Horizontal Scaling

So to address this, you can horizontally scale the application by putting a load balancer in front of multiple instances.

Horizontal Scaling Version

And this is great, now you can easily add more servers to handle as much load as you need, but now there’s the issue of cost management.
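The core idea the load balancer adds can be shown with a toy round-robin dispatcher: requests are fanned out across several identical servers instead of all hitting one instance. The server names are made up for illustration.

```python
# Toy round-robin load balancer: each request goes to the next
# server in rotation, so load is spread evenly across the fleet.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)

    def route(self, request):
        server = next(self._servers)
        return server, request

lb = RoundRobinBalancer(["web-1", "web-2", "web-3"])
targets = [lb.route(f"req-{i}")[0] for i in range(6)]
# Six requests across three servers: each server handles two.
```

Real load balancers (like an Application Load Balancer) also do health checks and connection draining, but the fan-out principle is the same.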

Most applications don’t have a perfectly constant load on them at all times.

You might get more traffic midday during the week than you do in the middle of the night on a weekend for instance, and with the current architecture, you have to provision for your peak load and continue to pay for those servers even when they are idle.

But in a cloud environment, this problem is easy enough to solve. You can simply add an Auto Scaling group to automatically provision and terminate instances based on your application’s traffic patterns.
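At its heart, an Auto Scaling group repeatedly makes one decision: given current load, how many instances should be running, within configured bounds? The thresholds below are invented for illustration; a real ASG drives this from CloudWatch metrics and scaling policies.

```python
# Simplified Auto Scaling decision: pick a desired instance count
# from current load, clamped between a minimum and maximum size.
def desired_capacity(requests_per_second, per_instance_rps=100,
                     min_size=2, max_size=10):
    # Ceiling division: enough instances to absorb the load.
    needed = -(-requests_per_second // per_instance_rps)
    return max(min_size, min(max_size, needed))
```

At quiet times the group shrinks toward `min_size`, so you stop paying for idle peak capacity; under heavy load it grows toward `max_size`.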

So now you can handle all the traffic when your service is under heavy load, but minimize your costs when it’s not getting as much traffic.

Now, in addition to elastically scaling your app, you probably also want to ensure it’s highly available and fault-tolerant.

You want to make sure that you’re spreading your instances across multiple availability zones in order to ensure that, even if there’s a failure of a single data center, your users will still be able to access your service.

This is also easy to implement. You just make sure the load balancer and auto-scaling group are configured to spread your instances across availability zones.
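The placement the load balancer and Auto Scaling group achieve can be sketched as a round-robin assignment of instances to zones. The AZ names mirror AWS's naming convention but are just examples.

```python
# Spread instances evenly across availability zones so that no
# single zone (i.e. data center) is a single point of failure.
def spread_across_azs(instance_ids, azs):
    placement = {az: [] for az in azs}
    for i, instance in enumerate(instance_ids):
        placement[azs[i % len(azs)]].append(instance)
    return placement

plan = spread_across_azs(["i-1", "i-2", "i-3", "i-4"],
                         ["us-east-1a", "us-east-1b"])
```

If `us-east-1a` fails entirely, the instances in `us-east-1b` keep serving traffic.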

Horizontal Scaling With High Availability

So what is the next problem?

The problem is that you still have to monitor a lot yourself: cost, the OS, the runtime, out-of-memory errors, runtime exceptions, hard disks, availability zone mapping, scaling…

That's a lot of thinking and effort your team has to spend monitoring and handling infrastructure. Why not hand all of those problems over to AWS, so we can focus only on what the business needs and deliver to the market as soon as possible?

So let's make it work with the AWS serverless platform.

When we refer to something as serverless, there are really four characteristics we're talking about.

First, it means you never have to deploy or manage hosts.

You don't have to deal with operating system patching, runtime patching, or any of the other concerns you would normally have if you had a fleet of servers under your control.

Next is this concept of flexible scaling.

The services that are part of the AWS serverless platform either scale automatically in response to the load you’re sending them, or they allow you to define capacity in terms of the unit of work in question.

So instead of having to scale in terms of the number of CPU cores or system memory, you get to define capacity in terms of the total amount of work being done: the amount of data sent to a stream, or the reads and writes made to a database, for instance.
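A concrete example of "capacity in terms of the unit of work" is DynamoDB provisioned throughput, which is expressed in read and write capacity units rather than CPU or RAM. The functions below follow DynamoDB's published model: one RCU covers one strongly consistent read of up to 4 KB per second, and one WCU covers one write of up to 1 KB per second.

```python
# Provisioned capacity expressed as work units, not hardware:
# how many RCUs/WCUs a given item size and request rate need.
import math

def read_capacity_units(item_size_kb, reads_per_second):
    # One strongly consistent read of up to 4 KB costs 1 RCU.
    return reads_per_second * math.ceil(item_size_kb / 4)

def write_capacity_units(item_size_kb, writes_per_second):
    # One write of up to 1 KB costs 1 WCU.
    return writes_per_second * math.ceil(item_size_kb / 1)
```

Notice that nothing here mentions instances or memory: the capacity conversation is entirely about the work your application does.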

In this way, serverless provides a much more natural way to define the scaling parameters of your application.

The next characteristic is automated high availability and fault tolerance.

As we saw in the original architecture diagram for the more traditional setup, distributing your application deployment across multiple availability zones and removing single points of failure is a very important concept in cloud architecture.

One of the great things about the serverless platform is that all of that is taken care of for you automatically.

There’s not even a check box to check to say “I want this service or this deployment to be highly available”. It just is by default. There’s no way to turn it off.

And the last piece is looking at Lambda specifically.

You never pay for idle capacity.

You're only charged for the total invocations of your function, and the amount of time that your function is actually running.

So if you handle a request and it takes you three hundred milliseconds to process, you’re only going to get billed for that three hundred milliseconds.

Then, if you don’t receive any more requests after that, there’s no ongoing cost for the function if it’s not being used.
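The pay-per-use model is easy to check with back-of-the-envelope math: the bill is requests plus GB-seconds of duration. The rates below are illustrative (roughly the published us-east-1 x86 prices at the time of writing); always check current AWS pricing.

```python
# Back-of-the-envelope Lambda bill: per-request charge plus
# GB-seconds of execution time. Rates are illustrative only.
PRICE_PER_REQUEST = 0.20 / 1_000_000    # $ per invocation
PRICE_PER_GB_SECOND = 0.0000166667      # $ per GB-second

def lambda_cost(invocations, avg_duration_ms, memory_mb):
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# One million 300 ms invocations at 128 MB of memory:
cost = lambda_cost(1_000_000, 300, 128)
```

A million requests for well under a dollar, and if no requests arrive, the cost is exactly zero: there is no term in the formula for idle time.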

This means that you no longer have to think about capacity management and elastic scaling of your application. Utilization and cost management at that level is now AWS’s problem.

This makes capacity planning a much easier exercise because you never have to worry about whether you are over- or under-provisioned; you only pay for the capacity you use.

Serverless With AWS Lambda

So in a lot of cases, you can get a prototype of a simple web service that's capable of scaling up to handle thousands of requests per second or more in just a day or two.
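In the Lambda version of the simple web service, the whole "server" reduces to a handler function. Behind API Gateway, an HTTP request arrives as an event dict and the handler returns a status code and body; the field names below follow the API Gateway proxy integration format, and the route itself is invented for illustration.

```python
# A minimal Lambda-style handler: no host, no OS, no scaling
# configuration. AWS runs one instance of this per request as needed.
import json

def handler(event, context):
    # Query string parameters may be absent, hence the "or {}".
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Compare this with the version-1 sketch at the top of the article: the load balancer, Auto Scaling group, and multi-AZ configuration have all disappeared from your code and your responsibility.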

In fact, we’ve seen customers get their first exposure to serverless because someone decided to spend a weekend solving a problem they had been struggling with for months using their traditional stack.

While the direct infrastructure and operations savings we discussed are great, we often hear from customers that this enhanced agility is the biggest win they see when they adopt serverless.


Kobe

I’m working at KMS Technology. I love code (▀̿Ĺ̯▀̿ ̿) — Full Stack Software Engineer