
Microservices Transition: From Monolithic Architecture to Scalable, Independent Services on AWS Cloud

This project demonstrates the initial deployment of a Node.js application as a monolithic architecture and its subsequent transition to a microservices architecture on the AWS cloud. The Node.js application is a web API built with the Koa framework that serves endpoints for handling requests related to users, threads, and posts. It hosts a simple message board with threads and messages between users.

image

Implementing Monolithic Architecture on AWS Cloud

Part-1 Building a Docker image of our Node.js application and uploading it to Amazon ECR

First, we built the Docker container image for our monolithic Node.js application, created an Amazon Elastic Container Registry (Amazon ECR) repository in AWS, and pushed the image to that repository via the AWS CLI.
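
The push workflow can be reproduced from the command line roughly as follows. This is a minimal sketch: the account ID, region, repository name (api), and the assumption that the monolith's Dockerfile sits at the project root are placeholders, not values taken from this project.

```bash
# Placeholders -- substitute your own account ID, region, and repository name.
ACCOUNT_ID=123456789012
REGION=us-east-1
REPO=api   # assumed name for the monolith's ECR repository

# Create the ECR repository for the monolithic image
aws ecr create-repository --repository-name "$REPO" --region "$REGION"

# Authenticate the local Docker client against ECR (AWS CLI v2)
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com"

# Build the monolithic Node.js image (Dockerfile assumed at the project root),
# tag it with the repository URI, and push it
docker build -t "$REPO" .
docker tag "$REPO:latest" "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
docker push "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$REPO:latest"
```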

ECR_create



Amazon_ECR_repo_image



Amazon_ECR_docker_image

Part-2 Creating an Amazon Elastic Container Service Cluster.

In this section, we used Amazon Elastic Container Service (Amazon ECS) to instantiate a managed cluster of EC2 compute instances and deployed the Docker image of our monolithic Node.js application, stored in the ECR repository, as a container running on this cluster. Creating an Amazon ECS cluster is a fundamental step in deploying and managing containerized applications within AWS. An ECS cluster is a logical grouping of EC2 instances or AWS Fargate compute resources that provides the infrastructure for running tasks and services based on task definitions.
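
The console screenshots below walk through the cluster wizard; a minimal AWS CLI sketch of the same step is shown here. The cluster name is a placeholder, and for the EC2 launch type the container instances themselves are provisioned separately (for example, by the console wizard or an Auto Scaling group registered with the cluster).

```bash
# Placeholder cluster name -- the actual name used in this project may differ.
CLUSTER=ecs-demo-cluster

# Create the ECS cluster (a logical grouping; EC2 container instances are
# provisioned separately and register themselves with this cluster).
aws ecs create-cluster --cluster-name "$CLUSTER"

# Verify the cluster and any registered container instances
aws ecs describe-clusters --clusters "$CLUSTER"
aws ecs list-container-instances --cluster "$CLUSTER"
```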

ECS_Cluster_config_1

ECS_Cluster_config_3

ECS_Cluster_config_4

ECS_Cluster_config_5

Part-3 Configuring load balancer and Target Groups

In this section, we set up target groups and configured an Application Load Balancer to efficiently manage and route HTTP traffic to the task services that will be deployed within our ECS cluster instances.

Part-3.1 Creating a Target Group for ALB

Target_group_Config_1

Target_group_Config_2

Target_group_Config_3

Target_group_config_4

Target_group_config_5
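
For reference, a hedged CLI equivalent of the target group configuration shown above; the target group name, VPC ID, and health check path are placeholder assumptions.

```bash
# Placeholders -- substitute your VPC ID and preferred names.
VPC_ID=vpc-0123456789abcdef0
TG_NAME=api   # assumed target group name for the monolithic service

# Create a target group for the ALB; the ECS service (EC2 launch type)
# registers its instances and ports into this group automatically.
aws elbv2 create-target-group \
  --name "$TG_NAME" \
  --protocol HTTP \
  --port 80 \
  --vpc-id "$VPC_ID" \
  --target-type instance \
  --health-check-path /
```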

Part-3.2 Creating an Application Load Balancer referring to the above target group

Load_Balancer_Config_1

Load_Balancer_Config_2

Load_Balancer_Config_3

Load_Balancer_Config_4
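
The ALB and its HTTP listener can be sketched with the CLI as well; the subnets, security group, load balancer name, and target group ARN below are placeholders.

```bash
# Placeholders -- substitute your own subnets, security group, and target group ARN.
SUBNETS="subnet-aaaa1111 subnet-bbbb2222"   # left unquoted below so both subnets are passed
SG=sg-0123456789abcdef0
TG_ARN=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api/abc123

# Create an internet-facing Application Load Balancer and capture its ARN
ALB_ARN=$(aws elbv2 create-load-balancer \
  --name demo-alb \
  --type application \
  --scheme internet-facing \
  --subnets $SUBNETS \
  --security-groups "$SG" \
  --query 'LoadBalancers[0].LoadBalancerArn' --output text)

# Add an HTTP:80 listener whose default action forwards to the monolith's target group
aws elbv2 create-listener \
  --load-balancer-arn "$ALB_ARN" \
  --protocol HTTP --port 80 \
  --default-actions "Type=forward,TargetGroupArn=$TG_ARN"
```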

Part-4 Task Definition

In this section, we define a task definition for our ECS cluster to facilitate the deployment of our containerized application. A task definition acts as a blueprint for the application, specifying the container image via the Docker image URI stored in ECR, resource allocations (such as CPU and memory), and networking configurations. By creating a task definition, we enable ECS to launch and manage tasks or services that use our container image. This configuration includes defining container ports, environment variables, and any IAM roles or policies required for secure operation. Once the task definition is established, it serves as the foundation for deploying and scaling our containerized application within the ECS cluster.
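
A minimal sketch of such a task definition and its registration is shown below. The family name, image URI, and CPU/memory sizes are illustrative assumptions; the container port (3000) matches the port the application listens on.

```bash
# Write a minimal task definition (placeholder family name, image URI, and sizes).
cat > task-definition.json <<'EOF'
{
  "family": "api",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest",
      "cpu": 256,
      "memory": 256,
      "essential": true,
      "portMappings": [
        { "containerPort": 3000, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  ]
}
EOF

# Register the task definition with ECS
aws ecs register-task-definition --cli-input-json file://task-definition.json
```

Setting hostPort to 0 enables dynamic host port mapping on the EC2 instances, which lets the ALB target group register each task on whichever ephemeral port it receives.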

Task_definition_Config_1

Task_definition_Config_2

Task_definition_Config_3

Task_definition_Config_5

Task_Definition_Config_6_imp

Part-5 Creating a Service referring to the above task definition

After creating the task definition, the next step is to create a service within the ECS cluster. An ECS service is responsible for managing the deployment and lifecycle of tasks based on the specified task definition. It ensures that the desired number of task instances are continuously running and healthy. The service integrates with the ECS cluster to handle task scheduling, load balancing, and scaling.

In this step, the Application Load Balancer (ALB) and target groups configured earlier are utilized to direct traffic to the service. The service configuration specifies the target group associated with the ALB, enabling the ALB to distribute incoming HTTP traffic across the running tasks. The ALB performs health checks on the tasks and ensures that traffic is only routed to healthy instances. Additionally, the service supports auto-scaling policies, leveraging the ALB and target groups to maintain application performance and availability by scaling the number of tasks in response to changing load conditions.
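
A hedged CLI equivalent of this service creation step; the cluster name, service and task definition name (api), and target group ARN are placeholders.

```bash
# Placeholders -- substitute your cluster name and target group ARN.
CLUSTER=ecs-demo-cluster
TG_ARN=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/api/abc123

# Create a long-running service that keeps one copy of the monolith task healthy
# and registers it behind the ALB target group (container listens on port 3000).
aws ecs create-service \
  --cluster "$CLUSTER" \
  --service-name api \
  --task-definition api \
  --desired-count 1 \
  --launch-type EC2 \
  --load-balancers "targetGroupArn=$TG_ARN,containerName=api,containerPort=3000"
```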

Task_Service_Config_1

Task_Service_Config_2

Task_Service_Config_3

Task_Service_Config_4

Task_Service_Config_5

Task_Service_Config_6

Task_Service_Config_7

Task_Service_Config_8

Part-6 Output

The application is successfully deployed both on AWS ECS and locally. The Node.js application consists of three services, namely Users, Posts, and Threads, and the container image used in this monolithic implementation bundles all three into a single application. When the /api/users endpoint, one of the service routes defined in the application, is accessed, it returns a JSON array of users, as demonstrated in the attached screenshots. This confirms that the API is functioning correctly and serves user data as expected. The current implementation is a monolithic architecture in which all features are bundled into a single application. In the next phase, it will be refactored into microservices, with separate services handling users, posts, and threads for better scalability and maintainability.
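
The endpoint can be exercised directly from a terminal; the ALB DNS name below is a placeholder for the one visible in the screenshots.

```bash
# Local development server (the app listens on port 3000)
curl http://localhost:3000/api/users

# Through the Application Load Balancer on port 80 (placeholder DNS name)
curl http://demo-alb-1234567890.us-east-1.elb.amazonaws.com/api/users
```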

API response when accessing the /api/users endpoint in the local development environment on port 3000

Node js_working_in_local

API response when accessing the /api/users endpoint via the AWS Load Balancer DNS URL on port 80
AWS_ECS_Working

Transitioning from Monolithic to Microservices Architecture on AWS Cloud

In a microservices architecture, each feature of the Node.js application, namely Users, Posts, and Threads, runs as a separate service in its own container. This design allows services to be independently scaled and updated, improving resource management and deployment flexibility. The isolated nature of each service ensures that failures in one do not impact the others, and different technologies can be used per service, fostering innovation and efficiency.

Microservices_snip

Part-1 Provisioning additional ECR repositories and uploading the respective service Docker images

Previously, we deployed our application as a monolith using a single service and a single container image repository. To deploy the application as three microservices (users, posts, and threads), we need to provision three additional repositories in Amazon ECR, one for each service. After provisioning the repositories, we break the Node.js application into interconnected services. The folder "3-containerized-microservices" contains a Dockerfile for each service, and we push each service's Docker image to its respective Amazon Elastic Container Registry (Amazon ECR) repository as shown below.

ECR_microservices_repository
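
A sketch of the per-service build-and-push loop; the account ID, region, and the assumed directory layout under 3-containerized-microservices are placeholders.

```bash
# Placeholders -- account, region, and the per-service directory layout are assumptions.
ACCOUNT_ID=123456789012
REGION=us-east-1

for SERVICE in users posts threads; do
  # One ECR repository per microservice
  aws ecr create-repository --repository-name "$SERVICE" --region "$REGION"

  # Build from the service's own Dockerfile (assumed layout), tag, and push
  docker build -t "$SERVICE" "./3-containerized-microservices/services/$SERVICE"
  docker tag "$SERVICE:latest" "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$SERVICE:latest"
  docker push "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$SERVICE:latest"
done
```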

Part-2 Creating Task Definitions for each of the services

In this step, we create task definitions for each of the services, namely Users, Posts, and Threads, using the respective container image URIs from the ECR repositories as shown below.

Microservices_Task_defintions
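
The registrations follow the same pattern as the monolith's task definition. The sketch below shows the users service; the account ID, region, resource sizes, and the assumption that each microservice also listens on port 3000 are placeholders. The same pattern is repeated for posts and threads.

```bash
# Placeholders -- account ID, region, and resource sizes are illustrative.
ACCOUNT_ID=123456789012
REGION=us-east-1

# Task definition for the users microservice, pointing at its own ECR image.
# The container port (3000) mirrors the monolith and is an assumption here.
cat > task-definition-users.json <<EOF
{
  "family": "users",
  "containerDefinitions": [
    {
      "name": "users",
      "image": "$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/users:latest",
      "cpu": 256,
      "memory": 256,
      "essential": true,
      "portMappings": [{ "containerPort": 3000, "hostPort": 0, "protocol": "tcp" }]
    }
  ]
}
EOF

aws ecs register-task-definition --cli-input-json file://task-definition-users.json
```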

Part-3 Configuring target groups and ALB listener rules

Part-3.1 Adding target groups

In this step, we create three additional target groups, one for each of the users, posts, and threads features of our application, with the same EC2 cluster instances as targets, as shown below.

Target_group_update_elb_yet
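
A hedged CLI sketch of creating the three target groups; the VPC ID is a placeholder.

```bash
# Placeholder VPC ID -- substitute the VPC that hosts the ECS cluster instances.
VPC_ID=vpc-0123456789abcdef0

for SERVICE in users posts threads; do
  # One target group per microservice, attached to the same EC2 cluster instances
  aws elbv2 create-target-group \
    --name "$SERVICE" \
    --protocol HTTP \
    --port 80 \
    --vpc-id "$VPC_ID" \
    --target-type instance
done
```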

Part-3.2 Updating ALB listener rules

Once the target groups are created, they must be associated with the active Application Load Balancer (ALB) to correctly route HTTP requests to the appropriate task services. This involves configuring the HTTP port 80 listener rules as follows:

Load_Balancer_5_Listener_rules

  1. Path-based Routing: Define rules to forward requests based on the URL path:
    • IF Path = /api/[service-name] THEN Forward to [service-name]
    • For example, IF Path = /api/posts* THEN Forward to posts
  2. Rule Order: We applied the rules in this sequence:
    • /api* forwards to api
    • /api/users* forwards to users
    • /api/threads* forwards to threads
    • /api/posts* forwards to posts

This configuration ensures that HTTP requests are directed to the correct service based on the specified path patterns. The /api* rule and the default rule both point to the target group associated with the service running the monolithic container image, while the remaining rules forward HTTP requests to their respective microservices.
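
A CLI sketch of these listener rules; the listener ARN, the monolith target group name (api), and the priority numbers are placeholder assumptions. ALB evaluates lower priority numbers first, so the specific /api/&lt;service&gt;* patterns are given precedence over the broader /api* rule.

```bash
# Placeholder listener ARN -- substitute the ALB's HTTP:80 listener.
LISTENER_ARN=arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/demo-alb/abc/def

# Specific /api/<service>* rules first (lower priority numbers win)
PRIORITY=1
for SERVICE in users threads posts; do
  TG_ARN=$(aws elbv2 describe-target-groups --names "$SERVICE" \
    --query 'TargetGroups[0].TargetGroupArn' --output text)
  aws elbv2 create-rule --listener-arn "$LISTENER_ARN" --priority "$PRIORITY" \
    --conditions "Field=path-pattern,Values=/api/$SERVICE*" \
    --actions "Type=forward,TargetGroupArn=$TG_ARN"
  PRIORITY=$((PRIORITY + 1))
done

# The broad /api* rule (and the default rule) continue to point at the monolith's
# target group until the cut-over described later.
MONOLITH_TG_ARN=$(aws elbv2 describe-target-groups --names api \
  --query 'TargetGroups[0].TargetGroupArn' --output text)
aws elbv2 create-rule --listener-arn "$LISTENER_ARN" --priority "$PRIORITY" \
  --conditions 'Field=path-pattern,Values=/api*' \
  --actions "Type=forward,TargetGroupArn=$MONOLITH_TG_ARN"
```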

Part-4 Creating and deploying additional services on the ECS cluster referring to the above task definitions and target groups

To deploy the three microservices (posts, threads, and users) to our cluster, we need to create three additional ECS services, each referencing the corresponding task definition (with its container image URI) and target group.
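
A sketch of the equivalent create-service calls; the cluster name is a placeholder and each target group is assumed to share its microservice's name.

```bash
# Placeholder cluster name; target group names assumed to match the service names.
CLUSTER=ecs-demo-cluster

for SERVICE in users posts threads; do
  TG_ARN=$(aws elbv2 describe-target-groups --names "$SERVICE" \
    --query 'TargetGroups[0].TargetGroupArn' --output text)

  # One ECS service per microservice, wired to its own task definition and target group
  aws ecs create-service \
    --cluster "$CLUSTER" \
    --service-name "$SERVICE" \
    --task-definition "$SERVICE" \
    --desired-count 1 \
    --launch-type EC2 \
    --load-balancers "targetGroupArn=$TG_ARN,containerName=$SERVICE,containerPort=3000"
done
```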

Below is an image showing the service configuration for the Posts feature, which refers to the task definition and the target group created for Posts in the previous steps.

Task_Service_Config_Posts

Similarly, we created the other services running on the ECS cluster with the respective container images for Threads and Users, as shown below.

Final_micro_Services_running_image

Part-5 Microservices Deployment

In the final phase of the transition from a monolithic architecture to a microservices-based setup, traffic redirection is refined by removing the ALB listener rule that routes /api requests to the legacy monolithic target group. This involves deleting the specific listener rule for /api* and updating the default listener rule to forward to the users target group (or any other service's target group, such as posts or threads, since each serves a response for the /api route by default), as shown in the image below.

ALB_all5_microservice_listener_rules_config

Concurrently, we updated the monolithic service configuration to set the desired number of tasks to zero, effectively decommissioning the monolithic service. This transition ensures that each microservice operates independently, with isolated failure domains and independent scalability. Consequently, a failure in one feature does not affect the others, enhancing overall system resilience and operational efficiency.
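
A hedged CLI sketch of this cut-over; the listener, rule, and target group ARNs, the cluster name, and the monolithic service name (api) are placeholders.

```bash
# Placeholders -- substitute the real listener/rule ARNs, target group, cluster, and service names.
LISTENER_ARN=arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/demo-alb/abc/def
API_RULE_ARN=arn:aws:elasticloadbalancing:us-east-1:123456789012:listener-rule/app/demo-alb/abc/def/ghi
USERS_TG_ARN=arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/users/abc123
CLUSTER=ecs-demo-cluster

# Remove the /api* rule that still forwards to the monolith
aws elbv2 delete-rule --rule-arn "$API_RULE_ARN"

# Point the listener's default action at one of the microservice target groups
aws elbv2 modify-listener --listener-arn "$LISTENER_ARN" \
  --default-actions "Type=forward,TargetGroupArn=$USERS_TG_ARN"

# Drain and remove the monolithic service (assumed service name: api)
aws ecs update-service --cluster "$CLUSTER" --service api --desired-count 0
aws ecs delete-service --cluster "$CLUSTER" --service api
```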

final_delete_monolithic_service_update_step1

final_delete_monolithic_service_update_step2

final_delete_monolithic_service_update_step_complete

Part-6 Output

The output images below illustrate the successful migration to a microservices architecture.

Response for the default route ("/api" or "/") under the microservices architecture
Microservices_LB_Output_default

Before deploying Users feature microservice

LB_output_before_deploying_users_task_microservice

After deploying Users feature microservice

LB_output_after_deploying_users_task_microservice

Before deploying Posts feature microservice

LB_output_before_deploying_posts_task_service

After deploying Posts feature microservice

LB_output_after_deploying_posts_task_microservice

Before deploying Threads feature microservice

LB_output_before_deploying_threads_task_microservice

After deploying Threads feature microservice

LB_output_after_deploying_threads_task_microservice