Microservices Transition: From Monolithic Architecture to Scalable, Independent Services on AWS Cloud
This project demonstrates the initial deployment of a Node.js application as a monolithic architecture and its subsequent transition to a microservices architecture on the AWS cloud. The Node.js application is a web API built with the Koa framework that serves endpoints for handling requests related to users, threads, and posts; it hosts a simple message board with threads and messages between users.
First, we built the Docker container image for our monolithic Node.js application, created an Amazon Elastic Container Registry (Amazon ECR) repository in AWS, and pushed the Docker image to that repository via the AWS CLI.
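A minimal sketch of that workflow with the AWS CLI is shown below; the repository name (api), the account ID, and the region are placeholders and should be replaced with your own values.

```sh
# Create the ECR repository for the monolith image (repository name "api" is an assumption).
aws ecr create-repository --repository-name api --region us-east-1

# Authenticate the Docker client against the ECR registry.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Build, tag, and push the monolith image.
docker build -t api .
docker tag api:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest
```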
In this section, we have used Amazon Elastic Container Service (Amazon ECS) to instantiate a managed cluster of EC2 compute instances and deploy the Docker image of our monolithic Node.js application, stored in the ECR repository, as a container running on this cluster. Creating an Amazon ECS cluster is a fundamental step in deploying and managing containerized applications within AWS. An ECS cluster is a logical grouping of EC2 instances or AWS Fargate compute resources that provides the infrastructure for running tasks and services based on task definitions.
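As a rough outline, the logical cluster can be created with a single CLI call; the EC2 container instances are then launched separately (for example through the console wizard or an Auto Scaling group) using the ECS-optimized AMI and registered to the cluster. The cluster name below is an assumption.

```sh
# Create the logical ECS cluster (cluster name is a placeholder).
aws ecs create-cluster --cluster-name BreakTheMonolith-Demo

# EC2 container instances then join the cluster via user data on the
# ECS-optimized AMI, for example:
#   #!/bin/bash
#   echo ECS_CLUSTER=BreakTheMonolith-Demo >> /etc/ecs/ecs.config
```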
In this section, we have set up target groups and configured an Application Load Balancer to efficiently manage and route HTTP traffic to the services that will be deployed on our ECS cluster instances.
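The sketch below shows the rough shape of those CLI calls; the ALB name, target group name, VPC, subnet, and security group IDs are placeholders, and the health check path should match an endpoint your deployment actually exposes.

```sh
# Create the Application Load Balancer (subnet and security group IDs are placeholders).
aws elbv2 create-load-balancer --name demo-alb \
  --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups sg-0123456789abcdef0

# Create a target group for the monolith (instance target type by default).
aws elbv2 create-target-group --name api --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 --health-check-path /api

# Attach an HTTP:80 listener whose default action forwards to the monolith target group.
aws elbv2 create-listener --load-balancer-arn <alb-arn> --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<api-target-group-arn>
```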
In this section, we will define a task definition for our ECS cluster to facilitate the deployment of our containerized application. A task definition acts as a blueprint for our application, specifying the container image (via its ECR image URI), resource allocations (such as CPU and memory), and networking configurations. By creating a task definition, we enable ECS to launch and manage tasks or services that use our container image. This configuration includes defining container ports, environment variables, and any IAM roles or policies required for secure operation. Once the task definition is established, it serves as the foundation for deploying and scaling our containerized application within the ECS cluster.
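A minimal example task definition is sketched below; the family name, container name, image URI, and CPU/memory reservations are assumptions, and the container port of 3000 is taken from this project's local setup. Host port 0 requests dynamic port mapping so the ALB can route to several tasks on the same instance.

```sh
# Register the monolith task definition from a JSON file (values are placeholders).
cat > task-definition.json <<'EOF'
{
  "family": "api",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest",
      "cpu": 256,
      "memory": 256,
      "essential": true,
      "portMappings": [{ "containerPort": 3000, "hostPort": 0, "protocol": "tcp" }]
    }
  ]
}
EOF
aws ecs register-task-definition --cli-input-json file://task-definition.json
```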
After creating the task definition, the next step is to create a service within the ECS cluster. An ECS service is responsible for managing the deployment and lifecycle of tasks based on the specified task definition. It ensures that the desired number of task instances are continuously running and healthy. The service integrates with the ECS cluster to handle task scheduling, load balancing, and scaling.
In this step, the Application Load Balancer (ALB) and target groups configured earlier are utilized to direct traffic to the service. The service configuration specifies the target group associated with the ALB, enabling the ALB to distribute incoming HTTP traffic across the running tasks. The ALB performs health checks on the tasks and ensures that traffic is only routed to healthy instances. Additionally, the service supports auto-scaling policies, leveraging the ALB and target groups to maintain application performance and availability by scaling the number of tasks in response to changing load conditions.
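Sketched with the CLI, creating the monolith service might look like the following; the cluster, service, and task definition names, the desired count, and the target group ARN are placeholders.

```sh
# Create the ECS service for the monolith and attach it to the ALB target group.
aws ecs create-service --cluster BreakTheMonolith-Demo --service-name api \
  --task-definition api --desired-count 2 \
  --load-balancers "targetGroupArn=<api-target-group-arn>,containerName=api,containerPort=3000"
```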
The application is successfully deployed both on AWS ECS and locally. The Node.js application consists of three services, namely Users, Posts, and Threads. The container image used in this monolithic implementation bundles all three services into a single application. When the /api/users endpoint, one of the service routes defined in the application, is accessed, it returns a JSON array of users, as demonstrated in the attached screenshots. This confirms that the API is functioning correctly and serves user data as expected. The current implementation is a monolithic architecture in which all features are bundled into a single application. In the next phase, this will be refactored into microservices, with separate services handling users, posts, and threads for better scalability and maintainability.
Working of the API when accessing the /api/users endpoint in the local development environment on port 3000
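A quick way to verify this locally is a plain curl call, assuming the application is listening on port 3000:

```sh
# Fetch the users endpoint from the local development server.
curl http://localhost:3000/api/users
# Expected output: a JSON array of user objects.
```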
In a microservices architecture, each feature of the Node.js application, namely Users, Posts, and Threads, runs as a separate service in its own container. This design allows services to be independently scaled and updated, improving resource management and deployment flexibility. The isolated nature of each service ensures that a failure in one does not impact the others, and different technologies can be used per service, fostering innovation and efficiency.
Previously, we deployed our application as a monolith using a single service and a single container image repository. To deploy the application as three microservices for users, posts, and threads, we need to provision three additional repositories (one for each service) in Amazon ECR. After provisioning the repositories, we break the Node.js application into interconnected services, namely users, posts, and threads. The "3-containerized-microservices" folder contains a Dockerfile for each service; we push each service's Docker image to its respective Amazon Elastic Container Registry (Amazon ECR) repository as shown below.
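A sketch of that provisioning loop follows; the repository names, the per-service subdirectory layout under 3-containerized-microservices, the account ID, and the region are assumptions.

```sh
# Create one ECR repository per microservice, then build, tag, and push each image.
for svc in users posts threads; do
  aws ecr create-repository --repository-name "$svc" --region us-east-1
  docker build -t "$svc" "./3-containerized-microservices/$svc"
  docker tag "$svc:latest" "123456789012.dkr.ecr.us-east-1.amazonaws.com/$svc:latest"
  docker push "123456789012.dkr.ecr.us-east-1.amazonaws.com/$svc:latest"
done
```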
In this step, we will create task definitions for each of the services, namely Users, Posts, and Threads, using the respective container image URIs from the ECR repositories as shown below.
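Assuming one task definition JSON file per service, mirroring the monolith's definition but with the service-specific family, container name, and image URI, registration can be scripted roughly as follows:

```sh
# Register one task definition per microservice (JSON file names are assumptions).
for svc in users posts threads; do
  aws ecs register-task-definition --cli-input-json "file://${svc}-task-definition.json"
done
```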
In this step, we will create three additional target groups, one for each of the users, posts, and threads features of our application, with the same EC2 cluster instances as targets, as shown below.
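Roughly, with the CLI (the VPC ID and health check paths are assumptions):

```sh
# Create one target group per microservice in the same VPC as the cluster instances.
for svc in users posts threads; do
  aws elbv2 create-target-group --name "$svc" --protocol HTTP --port 80 \
    --vpc-id vpc-0123456789abcdef0 --health-check-path "/api/$svc"
done
```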
Once the target groups are created, they must be associated with the active Application Load Balancer (ALB) to correctly route HTTP requests to the appropriate task services. This involves configuring the HTTP port 80 listener rules as follows:
- IF Path = /api/[service-name]* THEN Forward to [service-name]

  For example, IF Path = /api/posts* THEN Forward to posts

The configured rules are:

- /api* forwards to api
- /api/users* forwards to users
- /api/threads* forwards to threads
- /api/posts* forwards to posts
This configuration ensures that HTTP requests are directed to the correct service based on the specified path patterns. The /api* rule (evaluated after the more specific paths) and the default rule point to the target group associated with the service running the monolithic container image, while the remaining rules route HTTP requests to their respective microservices.
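A sketch of creating these listener rules with the CLI is shown below; the listener and target group ARNs are placeholders, and the /api* rule is given the highest priority number so that the more specific service paths are evaluated first.

```sh
# Path-based rules on the HTTP:80 listener, one per microservice plus the /api* catch-all.
aws elbv2 create-rule --listener-arn <listener-arn> --priority 1 \
  --conditions Field=path-pattern,Values='/api/users*' \
  --actions Type=forward,TargetGroupArn=<users-tg-arn>
aws elbv2 create-rule --listener-arn <listener-arn> --priority 2 \
  --conditions Field=path-pattern,Values='/api/threads*' \
  --actions Type=forward,TargetGroupArn=<threads-tg-arn>
aws elbv2 create-rule --listener-arn <listener-arn> --priority 3 \
  --conditions Field=path-pattern,Values='/api/posts*' \
  --actions Type=forward,TargetGroupArn=<posts-tg-arn>
aws elbv2 create-rule --listener-arn <listener-arn> --priority 4 \
  --conditions Field=path-pattern,Values='/api*' \
  --actions Type=forward,TargetGroupArn=<api-tg-arn>
```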
Part-5: Creating and deploying additional services on the ECS cluster using the above task definitions and target groups.
To deploy the three microservices (posts, threads, and users) to our cluster, we need to create three additional ECS services, each associated with the corresponding task definition (and its container image URI) and target group.
Below is an image showing the configuration of the service for the Posts feature, which refers to the task definition and the target group created for the Posts feature in the previous steps.
Similarly, we created the other services running on the ECS cluster with the respective container images for threads and users, as shown below.
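For reference, the same services could be scripted with the CLI roughly as below; the cluster name, desired counts, container port, and target group ARNs are assumptions.

```sh
# Create one ECS service per microservice, wiring each to its task definition and target group.
for svc in users posts threads; do
  aws ecs create-service --cluster BreakTheMonolith-Demo --service-name "$svc" \
    --task-definition "$svc" --desired-count 1 \
    --load-balancers "targetGroupArn=<${svc}-tg-arn>,containerName=$svc,containerPort=3000"
done
```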
In the final phase of transitioning from a monolithic architecture to a microservices-based setup, traffic redirection is refined by deleting the ALB listener rule that routes /api requests to the legacy monolithic target group. This step involves deleting the specific listener rule for /api* and updating the default listener rule to point to the users target group (or any other service's target group, such as posts or threads, since each service responds on the /api endpoint by default), as shown in the image below.
Concurrently, in the monolithic service configuration we set the desired number of tasks to zero, effectively decommissioning the monolithic service. This transition ensures that each microservice operates independently, with isolated failure domains and independent scalability. Consequently, a failure in one feature does not affect the others, enhancing overall system resilience and operational efficiency.
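A sketch of that cutover with the CLI follows; the rule, listener, and target group ARNs, as well as the cluster and service names, are placeholders.

```sh
# Remove the /api* rule, point the default action at a microservice target group,
# and scale the monolith service down to zero tasks.
aws elbv2 delete-rule --rule-arn <api-wildcard-rule-arn>
aws elbv2 modify-listener --listener-arn <listener-arn> \
  --default-actions Type=forward,TargetGroupArn=<users-tg-arn>
aws ecs update-service --cluster BreakTheMonolith-Demo --service api --desired-count 0
```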
The output images below illustrate the successful migration to a microservices architecture.
Default route - response for "/api" or "/" served by the microservices architecture
Before deploying Users feature microservice
After deploying Users feature microservice
Before deploying Posts feature microservice
After deploying Posts feature microservice
Before deploying Threads feature microservice
After deploying Threads feature microservice