Serverless Architecture in AWS

Business Case Example of Serverless Architecture

Here we describe a business case where we experienced performance issues with an EC2/RDS combination and resolved them by deploying an AWS serverless architecture.

We run a medical fitness app on the AWS cloud: the application code runs on EC2 and the database is hosted on Amazon RDS. Before the COVID-19 pandemic, web traffic was regular and predictable, and we faced minimal disruption and downtime. During the lockdown, however, we observed a sudden, huge spike in web traffic at a certain time of day, which resulted in service downtime during that period.

In detail, the spike occurred regularly in the morning (6-9 am); for the rest of the day traffic was minimal, less than one-tenth of the morning load. During those hours the EC2 and RDS services became overwhelmed: the EC2 instance was a t2 type, so its CPU burst credits were quickly exhausted and performance degraded, and the RDS instance likewise became unresponsive. As a result, there was regular business downtime during peak traffic. We were therefore looking for a solution that would scale instantly while remaining cost-effective, and we found that going serverless would fit the bill.

We explain in detail the steps that were taken to optimize the architecture.

We rebuilt our stack around AWS Fargate, API Gateway, AWS Lambda, and Aurora Serverless.

  • In place of the EC2 server, we used AWS Fargate with Docker images pulled from Amazon ECR. This provides a couple of advantages: we can schedule additional Fargate tasks during peak web traffic and scale the number of tasks back down during minimal traffic, while AWS automatically provisions the underlying infrastructure in the background. We pay only for the time the tasks run, instead of provisioning and maintaining multiple EC2 instances that run all the time. AWS takes care of the servers, and the app code is securely pulled from the private ECR repository using IAM authentication, so the maintenance cost is minimal. (A sketch of scheduling this scaling programmatically appears after this list.)
  • We used API Gateway and Lambda to provide API key-based authentication together with a usage plan, so only authenticated users can access the specified APIs; for example, customers access the front-end APIs and admin users access the admin APIs. If traffic spikes on a particular API, that API alone can be throttled in isolation, instead of everything going down as it would on a single EC2 instance. It is also cost-effective because pricing is per request, and a usage plan with a request quota can throttle an API once its usage limit is breached. One can additionally use AWS WAF (Web Application Firewall) to protect against SQL injection, DDoS attacks, and so on, and to restrict particular APIs by geolocation, but it is not cheap.
  • We deployed Lambda because of its ability to scale automatically to millions of invocations during peak traffic. We can also use reserved and provisioned concurrency in case of sudden, unanticipated traffic. It is cost-effective because pricing is based on the number of requests and their duration.
  • In place of regular RDS, we used Aurora Serverless. Its biggest advantage is that it automatically scales capacity up or down with the workload, something that would require time-consuming manual capacity adjustments with standard Amazon RDS. The database can be configured to increase or decrease capacity quickly and automatically as needed, so it is a good fit for applications with steep, unpredictable usage spikes: capacity is provisioned when requests arrive and shut down when idle, so we pay for usage rather than idle time. We also do not need to worry about maintenance.
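To make the first point concrete, here is a minimal sketch of how the 6-9 am scale-out could be scheduled with boto3 and Application Auto Scaling, assuming the app runs as an ECS service on Fargate. The cluster and service names, capacities, and cron expressions are illustrative assumptions, not values from our actual setup.

# Sketch: scale an ECS (Fargate) service out for the morning peak and back in afterwards.
# Cluster/service names, capacities, and schedules are placeholders.
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "service/fitness-cluster/fitness-service"  # hypothetical cluster/service

# Register the service's desired task count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=1,
    MaxCapacity=10,
)

# Scale out before the morning peak (times are UTC; adjust for your timezone).
autoscaling.put_scheduled_action(
    ServiceNamespace="ecs",
    ScheduledActionName="morning-scale-out",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    Schedule="cron(0 6 * * ? *)",
    ScalableTargetAction={"MinCapacity": 5, "MaxCapacity": 10},
)

# Scale back in after the peak.
autoscaling.put_scheduled_action(
    ServiceNamespace="ecs",
    ScheduledActionName="morning-scale-in",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    Schedule="cron(0 9 * * ? *)",
    ScalableTargetAction={"MinCapacity": 1, "MaxCapacity": 2},
)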

So we see that in our business app, deploying AWS serverless services gave us cost-effectiveness, scalability, and on-demand performance.

Building a Serverless Back-end with AWS

In this demo, we are going to use AWS services including Amazon Aurora Serverless, a Lambda function, API Gateway, and an SNS topic.

Step 1: First we are going to create Aurora serverless database.

1.1 – Open the AWS console, navigate to Amazon RDS, and click Create database.

1.2 – For the database engine, select Amazon Aurora.

1.3 – For the edition select Amazon Aurora with PostgreSQL compatibility.

1.4 – On Database features select Serverless.

1.5 – Type a name for your DB cluster, database-1.

1.6 – Next select a username and password for your database.

1.7 – After that, for the maximum Aurora capacity units, select 2.

1.8 – Next, select the VPC where you want to create the database.

1.9 – After that click on Additional connectivity configuration.

1.10 – Keep the default value for the Subnet group.

1.11 – For the VPC security group, click Create new and in the name box type aurora-tutorial.

1.12 – Next, enable the Data API.

1.13 – Next, click Additional configuration and uncheck Enable deletion protection.

1.14 – After that click on Create database.

Now we need to retrieve the Cluster ARN. For this –

1.15 – Go to the RDS console and click on your database name.

1.16 – In the configuration tab copy the ARN and keep it in a safe place.

Now we will connect to our database.

1.17 – Click on Query Editor.

1.18 – Select the database-1, enter Postgres as the database username and input the database password you created earlier, then type Postgres for the database name.

1.19 – Next click Connect to the database.

1.20 – After that, create a database named tutorial with the query: CREATE DATABASE tutorial;

1.21 – Next click on Change database.

1.22 – Change the database to the tutorial database we just created.

1.23 – After that, create a table with this query:

CREATE TABLE sample_table(received_at TIMESTAMP, message VARCHAR(255));

Next, we have to copy the Secret ARN. For that, go to AWS Secrets Manager.

1.24 – Click on the Secret name.

1.25 – Copy the Secret ARN and keep it handy.
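Because the Data API was enabled in step 1.12, the same table can also be queried programmatically using the Cluster ARN and Secret ARN collected above. Below is a minimal sketch in Python with boto3; the ARN values are placeholders that must be replaced with your own.

# Sketch: query the Aurora Serverless cluster through the RDS Data API.
# Replace the placeholder ARNs with the values copied in steps 1.16 and 1.25.
import boto3

rds_data = boto3.client("rds-data")
CLUSTER_ARN = "arn:aws:rds:us-east-1:123456789012:cluster:database-1"  # placeholder
SECRET_ARN = "arn:aws:secretsmanager:us-east-1:123456789012:secret:placeholder"  # placeholder

# Insert a test row into the table created in step 1.23.
rds_data.execute_statement(
    resourceArn=CLUSTER_ARN,
    secretArn=SECRET_ARN,
    database="tutorial",
    sql="INSERT INTO sample_table(received_at, message) VALUES (NOW(), :msg)",
    parameters=[{"name": "msg", "value": {"stringValue": "hello from the Data API"}}],
)

# Read it back.
result = rds_data.execute_statement(
    resourceArn=CLUSTER_ARN,
    secretArn=SECRET_ARN,
    database="tutorial",
    sql="SELECT received_at, message FROM sample_table",
)
print(result["records"])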


Step 2: Now we will create our Lambda Function.

2.1 – Go to the Lambda dashboard and click on Create function.

2.2 – Next, inside the Lambda dashboard, select Author from scratch, enter a name for your function, and for Runtime select Python 3.8.

2.3 – After that, in the Permissions tab, click Change default execution role, select Create a new role with basic Lambda permissions, and click Create function.

2.4 – After creating the function, replace the default code with the sample code (a sketch of what such a handler might look like is shown at the end of this step).

2.5 – Replace the cluster_arn and secret_arn values with the Cluster ARN and Secret ARN values from the previous steps.

2.6 – Next, click on File and Save.

2.7 – After that, deploy your Lambda function by clicking the Deploy button.
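The sample code itself is not reproduced in this post, but a handler for this setup might look roughly like the sketch below: it writes incoming SNS messages (or a marker message for API Gateway GET requests) into sample_table through the Data API. Treat it as an assumption about the sample code's structure, not the exact code.

# Possible shape of the Lambda handler (Python 3.8): store incoming messages in
# sample_table via the RDS Data API. The ARNs are placeholders (see step 2.5).
import json
import boto3

rds_data = boto3.client("rds-data")
cluster_arn = "arn:aws:rds:us-east-1:123456789012:cluster:database-1"  # placeholder
secret_arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:placeholder"  # placeholder

def lambda_handler(event, context):
    # If invoked by SNS, store the published message; otherwise (e.g. an API
    # Gateway GET request) store a simple marker message.
    if "Records" in event and event["Records"][0].get("EventSource") == "aws:sns":
        message = event["Records"][0]["Sns"]["Message"]
    else:
        message = "GET request received via API Gateway"

    rds_data.execute_statement(
        resourceArn=cluster_arn,
        secretArn=secret_arn,
        database="tutorial",
        sql="INSERT INTO sample_table(received_at, message) VALUES (NOW(), :msg)",
        parameters=[{"name": "msg", "value": {"stringValue": message}}],
    )

    return {"statusCode": 200, "body": json.dumps({"stored": message})}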

Step 3: Create an Amazon SNS topic

3.1 – In a new tab, go to the SNS dashboard. In Topic name, enter your SNS topic name and click Next step.

3.2 – Leave all the fields at their defaults and click Create topic.

3.3 – After creating the topic, copy the SNS topic ARN and keep it handy.
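Later, once the Lambda function is subscribed to this topic in Step 6, publishing a message to it will invoke the function. For reference, a test message can be published programmatically as sketched below; the topic ARN is a placeholder for the one copied in step 3.3.

# Sketch: publish a test message to the SNS topic (placeholder ARN).
import boto3

sns = boto3.client("sns")
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:my-topic",  # placeholder
    Message="test message for the serverless back-end",
)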

Step 4: Configure Elastic Container Service and create Fargate

4.1 – Go to ECS dashboard. Click on Clusters and then Create Cluster.

4.2 – For the cluster template, select Networking only (Powered by AWS Fargate).

4.3 – Give your cluster a name, check the Create VPC box under Networking, and click Create.

4.4 – After creating the Cluster the dashboard looks like this.

4.5 – Next, select the Tasks tab and click Run new Task.

4.6 – In the Run Task dashboard, select launch type FARGATE and leave all other sections at their defaults.

4.7 – In VPC and security groups select your default VPC and Subnets.

4.8 – Next in Auto-assign public IP select ENABLED and click on Run Task.

4.9 – After creating the task, select Repositories from the left panel.

4.10 – In Repositories, click Create repository.

4.11 – Next, in General settings, set Visibility settings to Private and give your repository a name.

4.12 – Next, leave the other sections as they are and click Create repository.
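Steps 4.5 to 4.8 can also be performed programmatically. The sketch below runs a one-off Fargate task with boto3; the cluster name, task definition, subnet, and security group IDs are placeholders for illustration.

# Sketch: run a one-off Fargate task (the programmatic equivalent of steps 4.5-4.8).
# All names and IDs below are placeholders.
import boto3

ecs = boto3.client("ecs")
ecs.run_task(
    cluster="my-fargate-cluster",            # placeholder cluster name
    launchType="FARGATE",
    taskDefinition="my-task-definition:1",   # placeholder task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],      # placeholder subnet
            "securityGroups": ["sg-0123456789abcdef0"],   # placeholder security group
            "assignPublicIp": "ENABLED",
        }
    },
)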

Step 5: Create API Gateway

5.1 – Go to your API Gateway dashboard and click on Create API

5.2 – Next select REST API, click on Build.

5.3 – Next give your API a name and then click Create API.

5.4 – After creating the API click on Actions on the top and select create method.

5.5 – Next select GET method from the drop down menu.

5.6 – After that, for Integration type select Lambda Function, select your Lambda region, and provide the name of the Lambda function you created earlier. Then click Save.

5.7 – After that it looks like this.

5.8 – Next on the left panel click on Usage Plans.

5.9 – Click on Create

5.10 – Next, give your usage plan a name, uncheck Throttling and Quota, and click Next.

5.11 – Next click on Add API Stage

5.12 – Select your API and Stage that you have created earlier. Next click on add button and click Next.

5.13 – Next click on API Keys from left panel.

5.14 – Click Actions and select Create API key. Give your API key a name, select Auto Generate for the API key, and click Save.

5.15 – You can see your API key.

5.16 – Next click on Add to Usage Plan. And select your Usage plan. Click on add.

5.17 – You can see this

5.18 – Whenever you change anything in the API Gateway console, you need to deploy the API again. So now we need to deploy the API.

5.19 – Go to Resources in left panel and click on Actions and select Deploy API.

5.20 – Next, open the Deployment stage drop-down menu, select your deployment stage, and click the Deploy button.
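Once the API is deployed, it can only be called with a valid API key sent in the x-api-key header. The sketch below shows such a GET request in Python using only the standard library; the invoke URL and key are placeholders. The PHP code in the Docker image does the same thing later in this tutorial.

# Sketch: call the deployed API with the API key in the x-api-key header.
# The invoke URL and API key are placeholders.
import urllib.request

url = "https://abc123.execute-api.us-east-1.amazonaws.com/dev"  # placeholder invoke URL
request = urllib.request.Request(url, headers={"x-api-key": "YOUR_API_KEY"})

with urllib.request.urlopen(request) as response:
    print(response.status, response.read().decode())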

Step 6: Subscribe the AWS Lambda Function to the Amazon SNS topic

6.1 – Go to the Lambda dashboard and click on your Lambda function.

6.2 – Click on Add trigger

6.3 – Type SNS, select your SNS topic from the drop-down menu, and paste your SNS ARN in the SNS topic box.

6.4 – Enable the trigger and click on Add button.
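Adding the trigger in the console does two things behind the scenes: it grants SNS permission to invoke the function and subscribes the function to the topic. A rough boto3 equivalent is sketched below, with placeholder ARNs and function name.

# Sketch: rough boto3 equivalent of adding an SNS trigger in the console.
# The topic ARN, function name, and function ARN are placeholders.
import boto3

sns = boto3.client("sns")
lambda_client = boto3.client("lambda")

topic_arn = "arn:aws:sns:us-east-1:123456789012:my-topic"  # placeholder
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-function"  # placeholder

# Allow SNS to invoke the function.
lambda_client.add_permission(
    FunctionName="my-function",              # placeholder function name
    StatementId="sns-invoke-permission",
    Action="lambda:InvokeFunction",
    Principal="sns.amazonaws.com",
    SourceArn=topic_arn,
)

# Subscribe the function to the topic.
sns.subscribe(TopicArn=topic_arn, Protocol="lambda", Endpoint=function_arn)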

Step 7: Add API Gateway to Lambda Function.

7.1 – Go to the Lambda function dashboard and click Add trigger.

7.2 – From the drop-down menu select API Gateway.

7.3 – Next, select your existing API, set Deployment stage to dev, and for Security select API key.

7.4 – Next click on Add button.

7.5 – You can see your triggered SNS topic and API Gateway from the Lambda console.

 

Now our setup is done.

We use a Fargate task to deploy a Docker container whose image is pulled from Amazon ECR. The image contains PHP code that makes a GET request to the API Gateway using the API key we created earlier in the API Gateway section.

Now let's look at the final output. Go to the ECS console > Clusters > select your cluster > Tasks > click on your task > copy the Public IP.

Now paste the Public IP into a browser on your local machine. You will see the following output.

Thank You.
