
This project sets up a CI/CD pipeline for a Flask application using PostgreSQL, GitHub Actions, Jenkins, SonarQube, unit tests, Bandit, Docker, Aqua Trivy, DockerHub, Docker Compose, ArgoCD, Terraform, Ansible, Grafana, and Prometheus to ensure secure, automated deployment and continuous monitoring


FlaskPipeline

Automated Application deployment system using tools such as Flask, PostgreSQL, Docker, Jenkins, Terraform,
ArgoCD, Kubernetes, Grafana, Prometheus, Ansible and more

Flask · AWS · Docker · Jenkins · Terraform · ArgoCD · Ansible · Kubernetes · Grafana · Prometheus · GitHub · Git


LinkedIn · Report Bug · My Website

πŸ” About the Project

This project establishes a complete CI/CD pipeline for a Flask application using PostgreSQL as the database. It is designed to ensure high-quality code, secure deployments, and efficient monitoring. The pipeline incorporates secure password management using hashing and salting. The application code is hosted on GitHub, with Jenkins orchestrating the CI/CD process. Here’s a breakdown of how it works:

Source Code Management:

The application code is stored on GitHub. A webhook triggers a Jenkins job whenever a new commit is pushed.

Continuous Integration:

Jenkins runs the job and performs several checks. SonarQube conducts code quality analysis, unit tests validate the functionality of the code, and Bandit executes security checks.

Docker and Security Scanning:

After successful testing, Jenkins builds a Docker image, and Aqua Trivy scans it for vulnerabilities.

Docker Image Deployment:

The Docker image is uploaded to DockerHub, and Docker Compose verifies the container configuration.

Kubernetes and Infrastructure Management:

Terraform provisions the application's resources on AWS, including the Kubernetes cluster, and ArgoCD deploys the application to it.

Configuration Management:

Ansible manages updates and configurations.

Monitoring and Notifications:

Grafana and Prometheus monitor the application and infrastructure, and email notifications are sent if any issues arise.

This setup ensures that the application is rigorously tested, secure, and automatically deployed to a scalable environment, with continuous monitoring and alerts for any issues.

πŸ“Ί Preview

Demonstration Video

Continuous Integration

The Continuous Integration pipeline begins with the developer's contribution. Developers write and commit code for a Flask application to a GitHub repository, which includes security measures such as SQL injection protection and hashing functions.

GitHub and Jenkins

Upon committing the code, a GitHub webhook triggers the Jenkins pipeline. Jenkins is the cornerstone of our CI process, orchestrating several crucial steps to ensure code quality and security:

  1. Unit Testing: The pipeline kicks off with unit tests to verify the functionality of the code.

  2. Security Analysis: Bandit, a tool for security code analysis, scans the code for vulnerabilities.

  3. Code Quality Analysis: SonarQube assesses the code against coding standards and metrics, providing a comprehensive analysis report.

  4. Docker Image Build: The application is containerized using Docker, creating a Docker image.

  5. Vulnerability Scanning: The Docker image is scanned using Trivy to identify potential security vulnerabilities.

  6. Docker Compose Testing: The image undergoes further testing using Docker Compose.

  7. Infrastructure as Code (IaC): Terraform scripts provision the necessary infrastructure on AWS: VPC, VPC Peering, Instances, EKS Cluster, Nodes, Policies, Security Groups, and Auto-Scaling Groups.

  8. Configuration Management: Ansible updates the Worker Nodes once every 3 days, late at night, using a crontab schedule.

Continuous Deployment

The Continuous Deployment process is initiated by the DevOps engineer, who oversees GitHub Actions workflows. These workflows automate the deployment process. The DevOps engineer can always choose which version of the application to deploy.

GitHub Actions and Argo GitHub Actions interacts with the ArgoCD-FlaskPipeline repository. Any change in the Helm chart version triggers the deployment pipeline, managed by Argo. The deployment process includes several critical components:

  1. Service Management: Kubernetes services manage traffic routing to the appropriate pods.
  2. Deployment Management: Kubernetes deployments manage the lifecycle of pods, ensuring the correct number of replicas are running.
  3. Auto-Scaling: The system automatically scales pods based on demand, ensuring high availability and performance.
  4. AWS Load Balancer Controller: Manages incoming requests, directing them to the appropriate services.
  5. Application Load Balancer: Ensures secure communication by redirecting HTTP traffic to HTTPS.

All deployments ensure zero downtime for our application. ArgoCD manages the Helm chart resources, allowing them to be recreated or reconfigured as needed.
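The manual trigger described above can be pictured as a dispatched workflow. This is only a sketch: the file name, workflow name, and input are assumptions, not the repository's actual workflow.

```shell
# Write an illustrative workflow_dispatch workflow (all names are hypothetical)
mkdir -p .github/workflows
cat > .github/workflows/update-version.yaml <<'EOF'
name: Update Version for Helm Chart
on:
  workflow_dispatch:
    inputs:
      version:
        description: "Application version to deploy"
        required: true
EOF
# Confirm the manual-trigger key is present
grep -c 'workflow_dispatch' .github/workflows/update-version.yaml
# prints: 1
```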

GitOps in Project

GitOps, in simple terms, is a way to manage and automate your infrastructure and application deployments using Git as the single source of truth.

ArgoCD is a GitOps operator that monitors the Git repository for changes.

When changes are detected, ArgoCD ensures that they are applied to the Kubernetes cluster, deploying the updated version of the FlaskPipeline application.

πŸ‘£ Steps of Project and Detail Demonstration

  • Clone Repository
  • Change Terraform files
  • Create S3 Buckets for Terraform Remote State
  • Create Resources with Terraform in Frankfurt
  • Connect to Master Instance with SSH
  • Sign Up to Jenkins
  • Install Plugins for Jenkins Pipeline
  • Create Credentials for Jenkins Pipeline
  • Create env file
  • Create Credentials for Jenkins in SonarQube
  • Create SonarQube Local Repository with Token
  • Modify Jenkins System settings
  • Create Jenkins Pipeline with Git
  • Add Webhook to GitHub
  • Choose Input for Deployment in Jenkins
  • Configure Ansible Master
  • Open Port 22 and 9100 to Security group of EKS Nodes
  • Prometheus Configurations
  • Grafana Configurations inside Instance
  • Import Dashboard for Grafana
  • Edit Query of Grafana Metric
  • Create Alert Rule
  • Attach email, Notification policy
  • Connect to the Cluster and Create Policy with IAM role
  • Create AWS Load Balancer Controller
  • Create ArgoCD
  • Create Private Repository for your Project
  • Configure ArgoCD with your GitHub Private Repository
  • Modify Values and Ingress files: VPC, Subnets, SSL Certificate
  • Run GitHub Actions with Version 1.0
  • Create Project in ArgoCD
  • Update Version in GitHub Repository
  • Run GitHub Actions with new Version

Clone Repository

  • Clone Repository: Install Git on your PC, then run:
git clone https://github.com/MatveyGuralskiy/FlaskPipeline.git

Change Terraform files

  • Change Terraform files: Go to the repository you cloned, Terraform --> Build

Change file variable.tf:

Edit default to your values

  • variable "Remote_State_S3_Dev" - Enter your Unique S3 Bucket to Save Terraform Remote State Files

  • variable "Remote_State_S3_Prod" - Enter another Unique S3 Bucket to Save Terraform Remote State Files

Now go to Development Directory

Change file main.tf:

  • line 19: Your S3 Bucket name of Dev

Change file variable.tf:

  • variable "Key_Name" - Enter the name of the Key Pair you created in the Frankfurt region in AWS

The last step, go to Infrastructure directory

Change file variable.tf:

  • variable "Key_SSH" - Enter the name of the Key Pair you created in the Virginia region in AWS

Change file main.tf:

  • line 32: Your S3 Bucket name for Production

  • line 132: Your S3 Bucket name of Dev

  • line 381: Change Your Key Pair name in Virginia

  • lines 502-539: Change all Route53 data

Create S3 Buckets for Terraform Remote State

  • Create S3 Buckets for Terraform Remote State

Install Terraform on your PC

Go to Repository --> Terraform --> Build

Now, inside the Build directory, open a terminal and run:

Don't forget to edit S3 Bucket names to Unique!

# To initialize
terraform init
# To Plan Terraform
terraform plan
# To Apply and Create Resources in AWS
terraform apply -auto-approve
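The buckets created here are consumed by Terraform's S3 backend. A rough sketch of what that stanza looks like (the key path is an assumption; eu-central-1 is the Frankfurt region):

```shell
# Write an illustrative S3 backend stanza; the bucket name is a placeholder
cat > backend-demo.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "YOUR_UNIQUE_BUCKET"
    key    = "dev/terraform.tfstate"
    region = "eu-central-1"
  }
}
EOF
# Sanity check that the backend block is present
grep -c 'backend "s3"' backend-demo.tf
# prints: 1
```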


Create Resources with Terraform in Frankfurt

  • Create Resources with Terraform in Frankfurt

Go to Directory Terraform --> Development

# To initialize
terraform init
# To Plan Terraform
terraform plan -out=tfplan
# To Apply and Create Resources in AWS
terraform apply tfplan

Resources created in AWS:


Connect to Master Instance with SSH

  • Connect to Master Instance with SSH

Use MobaXterm for SSH Connection to Master Instance

Create a session with the public IP of the Master instance; the username is ubuntu, and select your private key


Sign Up to Jenkins

  • Sign Up to Jenkins

Open the public IP of the Master instance in your browser on port 8080: Public_IP:8080

Go to your Master instance and copy the secret password from the file:

sudo su
nano /var/jenkins_home/secrets/initialAdminPassword
# Copy Secret Password and Insert it in Browser

Install Plugins and Register


Install Plugins for Jenkins Pipeline

  • Install Plugins for Jenkins Pipeline

Go to Manage Jenkins --> Plugins --> Available Plugins

Install Plugins:

  • Pipeline Utility Steps

  • SonarQube Scanner

After Installation Restart Jenkins Server


Create Credentials for Jenkins Pipeline

  • Create Credentials for Jenkins Pipeline

Go to Manage Jenkins --> Credentials --> Add Credentials

You need to create Credentials:

  • dockerhub - Your DockerHub Account (Type: Username and password)

  • github - Your GitHub Account (Type: Username and password)

  • gmail - Your Google Account Credentials App Password, Not a regular password of your Account (Type: Username and password)

  • aws-access - Your IAM User Credentials of Access key (Type: Secret text)

  • aws-secret - Your IAM User Credentials of Secret key (Type: Secret text)

Create env file

  • Create env file

Create an env file on your PC; inside the file, enter:

SECRET_KEY=YOUR_PASSWORD
SQLALCHEMY_DATABASE_URI=postgresql://postgres:YOUR_PASSWORD@PRIVATE_IP_DATABASE/flask_db

Check the private IP of the database and insert it into the env file
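One way to produce the env file from a shell; both values are placeholders that you must replace with your own secret key, database password, and the database's private IP:

```shell
# Write the env file; SECRET_KEY and the DB password/IP are placeholders
cat > env <<'EOF'
SECRET_KEY=YOUR_PASSWORD
SQLALCHEMY_DATABASE_URI=postgresql://postgres:YOUR_PASSWORD@PRIVATE_IP_DATABASE/flask_db
EOF
# Both variables should be present
grep -c '=' env
# prints: 2
```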

Now create Credential:

  • secret-env - Upload your env file to Jenkins (Type: Secret file)


Create Credentials for Jenkins in SonarQube

  • Create Credentials for Jenkins in SonarQube

Go to SonarQube Server on Port 9000 --> Public_IP:9000

Login:

Login: admin
Password: admin

Now we need to create a token for Jenkins to log in to your SonarQube

Administration --> Security --> Users --> Update Token --> Jenkins --> generate

Copy the token and create a Jenkins credential for it:

  • sonarqube - Your SonarQube Token (Type: Secret text)

Create SonarQube Local Repository with Token

  • Create SonarQube Local Repository with Token

Now we need to create Local Repository for SonarQube Testing

Projects --> Manually --> Set Up --> Locally --> Generate

Copy the token you got and add it to Jenkins credentials:

  • sonar-project - Your SonarQube Token Repository (Type: Secret text)

Modify Jenkins System settings

  • Modify Jenkins System settings

Now we need to attach our email for Notifications from Jenkins and add SonarQube Server to Jenkins

Go to Manage Jenkins --> System

Let's Start with SonarQube:

Scroll to SonarQube servers title --> Add SonarQube

Edit Name and attach Credentials of SonarQube Token: sonarqube

After that, we should edit our Email settings:

Scroll to Extended E-mail Notification

Here we need to configure SMTP with Gmail

SMTP server:
smtp.gmail.com
SMTP Port:
465
Advanced:
Credentials of gmail
# Choose: Use SSL

You also need to edit Email Notification

SMTP server:
smtp.gmail.com
Advanced:
# Choose: Use SMTP Authentication
Username: Your Email
Password: Your App password (not your regular Gmail account password)
# Choose also: Use SSL
SMTP Port: 465
# You can test your configuration if you want by sending an email

After everything, click Save and Apply


Create Jenkins Pipeline with Git

  • Create Jenkins Pipeline with Git

Go to Main Dashboard --> New Item --> Pipeline

Enable "Discard old builds" if you want

In "GitHub project", insert the URL of your GitHub repository

Under "Build Triggers", choose "GitHub hook trigger for GITScm polling"

Finally, in "Pipeline Definition", choose "Pipeline script from SCM"

SCM: Git

In "Repositories", enter your GitHub repository URL

In "Credentials", choose github

The branch should be */main, or whatever you want

In "Script Path", enter: Jenkins/Jenkinsfile.groovy


Add Webhook to GitHub

  • Add Webhook to GitHub

Go to your GitHub Repository --> Settings --> Webhooks

In "Payload URL" enter: http://PUBLIC_IP_JENKINS:8080/github-webhook/

Content type: application/json

Select "Just the push event" and mark the webhook Active

After that, change something in the repository and push a commit to check that the Jenkins webhook works
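An empty commit is enough to exercise the webhook. The sketch below only demonstrates the commit in a scratch repository; pushing to your real repository is what actually fires the hook:

```shell
# Create a scratch repo and make an empty test commit
git init -q webhook-demo
git -C webhook-demo -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "Test Jenkins webhook" -q
# One commit exists
git -C webhook-demo rev-list --count HEAD
# prints: 1
```

Against your real clone, the same `git commit --allow-empty -m "Test Jenkins webhook"` followed by `git push origin main` should make GitHub call the Jenkins webhook.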

Choose Input for Deployment in Jenkins

  • Choose Input for Deployment in Jenkins

When you push a commit from Git to GitHub, Jenkins will see it and start a job

A Jenkins input will ask whether you want to run the deployment; choose "Yes", and a second input will ask you to confirm, so choose "Yes" again

Now you have all the infrastructure created by Terraform in Virginia


Configure Ansible Master

  • Configure Ansible Master

Now use an SSH connection with MobaXterm to the Ansible Master: copy the public IP of the Ansible Master, or use the Bastion Host instance (upload the secret key to the Bastion Host)

Attach your AWS credentials with the command:

aws configure

ACCESS KEY
SECRET KEY
REGION: us-east-1
FORMAT: json

Create a crontab task that runs at 2 AM every 3 days

crontab -e
# Choose editor number 1
# Enter this line:
0 2 */3 * * ansible-playbook -i /home/ubuntu/FlaskPipeline/Ansible/ansible-aws_ec2.yml /home/ubuntu/FlaskPipeline/Ansible/Playbook.yaml >> /path/to/ansible_cron.log 2>&1
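For reference, the five cron fields are minute, hour, day-of-month, month, and day-of-week, so `0 2 */3 * *` fires at 02:00 on days 1, 4, 7, and so on of each month, which is approximately every 3 days. A small shell sketch that splits the fields:

```shell
# Split the cron spec into its five fields; set -f stops the shell from
# expanding the * characters against filenames
CRON='0 2 */3 * *'
set -f
set -- $CRON
echo "minute=$1 hour=$2 day=$3 month=$4 weekday=$5"
set +f
# prints: minute=0 hour=2 day=*/3 month=* weekday=*
```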

Move the private key to the Ansible Master and change its permissions

cd .ssh/
# Upload it with the MobaXterm button at the left of the console
chmod 600 YOUR_KEY.pem
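You can verify the permission bits afterwards; the demo below uses a stand-in file name, since YOUR_KEY.pem is whatever key you uploaded:

```shell
# Create a stand-in key file, restrict it, and read back the mode
touch demo_key.pem
chmod 600 demo_key.pem
stat -c '%a' demo_key.pem
# prints: 600
```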


Open Port 22 and 9100 to Security group of EKS Nodes

  • Open Port 22 and 9100 to Security group of EKS Nodes

Go to EC2 Console in Virginia

Choose every EKS Node (the instances without a Name tag)

Click on Security --> Security Group

Now, in the Security group, add inbound rules for ports 22 and 9100 open to 0.0.0.0/0 (everyone)


Prometheus Configurations

  • Prometheus Configurations

Copy the public IP of the Prometheus instance and connect to it with MobaXterm

Now we need to change the Prometheus configuration file to attach our Nodes' private IPs

sudo su
nano /etc/prometheus/prometheus.yaml
# Go to EC2 Console and Copy every Private IP of Nodes
# Your Prometheus file should look like this:
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "nodes"
    static_configs:
      - targets:
          - 10.0.4.224:9100 # Change these to your Nodes' private IPs
          - 10.0.4.116:9100
          - 10.0.4.128:9100
          - 10.0.3.18:9100
          - 10.0.3.168:9100
          - 10.0.3.42:9100

# Save everything and Restart Prometheus
systemctl restart prometheus

Now open the Prometheus server to check that the instance targets are OK: use the public IP of the Prometheus instance and port 9090

Prometheus_IP:9090

Go to targets and check Instances


Grafana Configurations inside Instance

  • Grafana Configurations inside Instance

Now connect also with SSH to Grafana Instance

First of all, we need to edit the Grafana configuration file to add SMTP

sudo su
nano /etc/grafana/grafana.ini
# Ctrl + /
# Enter line: 900
# Edit lines:
enabled = true
user = "YOUR EMAIL"
password = "YOUR APP PASSWORD ONLY"
# Delete ";" from these lines and from:
from_address
from_name
# After That restart the Grafana server
systemctl restart grafana-server

Our Grafana server is ready for use

Import Dashboard for Grafana

  • Import Dashboard for Grafana

Go to Public IP of Grafana Instance and Port 3000

Public_IP:3000

Login: admin
Password: admin

Now go to Dashboard --> New --> Import

Enter the dashboard ID and click Load


Edit Query of Grafana Metric

  • Edit Query of Grafana Metric

Go to your new dashboard, find CPU Basic, and open the panel options to edit it

Change the metrics: leave only one metric, then click Save and Apply


Create Alert Rule

  • Create Alert Rule

Now we need to create Alert Rule for our Grafana

Click on the dashboard metric we edited before --> More --> New alert rule

Now edit all the metrics as shown; if CPU usage goes above 80%, we will get an email notification
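For reference, a threshold like this is usually expressed in PromQL roughly as follows. The metric name assumes node_exporter defaults; treat it as a sketch, not the exact query from the imported dashboard:

```shell
# Hypothetical alert expression: average non-idle CPU per instance above 80%
QUERY='100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80'
echo "$QUERY"
```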

Attach email, Notification policy

  • Attach email, Notification policy

Go to Alerting --> Contact points --> Edit default email

You can also test it

Create a Notification policy for the CPU alert at the 80% threshold


Connect to the Cluster and Create Policy with IAM role

  • Connect to the Cluster and Create Policy with IAM role

You can connect to your cluster from any Linux OS; for example, I run it on my VM

Before you start, you need to enter your AWS credentials on the VM

Install kubectl, argocd, helm and eksctl on your VM

# Install kubectl
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client

# Install ArgoCD
curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x ./argocd
sudo mv ./argocd /usr/local/bin/argocd
argocd version

# Install Helm
curl -LO https://get.helm.sh/helm-v3.12.0-linux-amd64.tar.gz
tar -zxvf helm-v3.12.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version

# Install EKSctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version

Now, to connect to your cluster and create policies, use these commands:

# Connect to Cluster
aws eks update-kubeconfig --name EKS-FlaskPipeline --region us-east-1
# Create Policy
eksctl utils associate-iam-oidc-provider --region=us-east-1 --cluster=EKS-FlaskPipeline --approve
# Create the IAM policy from the JSON file in the Policy directory
aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://alb_controller_iam_policy.json
# Install IAM Service Account Policy, Change to Your Account ID
eksctl create iamserviceaccount \
    --cluster=EKS-FlaskPipeline \
    --namespace=kube-system \
    --name=aws-load-balancer-controller \
    --attach-policy-arn=arn:aws:iam::YOUR_ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy \
    --approve
# If it already exists, add to the command: --override-existing-serviceaccounts

Create AWS Load Balancer Controller

  • Create AWS Load Balancer Controller

Let's create the AWS Load Balancer Controller on our EKS cluster

helm repo add eks https://aws.github.io/eks-charts
helm repo update
# Copy your VPC ID
aws eks describe-cluster --name EKS-FlaskPipeline --query "cluster.resourcesVpcConfig.vpcId" --output text

# Insert VPC to your command
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system   \
  --set clusterName=EKS-FlaskPipeline \
  --set serviceAccount.create=false \
  --set region=us-east-1 \
  --set vpcId=YOUR_VPC_ID \
  --set serviceAccount.name=aws-load-balancer-controller

Check that the AWS Load Balancer Controller is running:

# To see Deployment
kubectl get deployment aws-load-balancer-controller -n kube-system
# To see Pods
kubectl get pods -n kube-system

Create ArgoCD

  • Create ArgoCD

We are finally at the stage of creating ArgoCD in our cluster

# To install ArgoCD to the Cluster
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
kubectl get all -n argocd
# List the secrets
kubectl get secret -n argocd
# Save your secret; it will be your password for logging in to ArgoCD
kubectl get secret -n argocd argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
# Change ArgoCD to LoadBalancer to get Domain
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get svc -n argocd
# For the project resources
kubectl create namespace flaskpipeline-project
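The password command above extracts the secret with a jsonpath and decodes it with `base64 -d`. The decode step in isolation, on a known value:

```shell
# 'cGFzc3dvcmQ=' is base64 for the literal string "password"
echo 'cGFzc3dvcmQ=' | base64 -d
# prints: password
```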


Create Private Repository for your Project

  • Create Private Repository for your Project

Copy all files from the ArgoCD_Repository directory into your private repository

After that we need to create Secret for GitHub Actions

Go to GitHub Main Settings --> Your Profile --> Developer Settings --> Personal Access Tokens --> Token Classic --> Generate

Now copy the token and create a repository secret named GITHUBACTIONS inside ArgoCD_Repository

Configure ArgoCD with your GitHub Private Repository

  • Configure ArgoCD with your GitHub Private Repository

Open the Load Balancer domain for ArgoCD

Now connect to ArgoCD

Login: admin
Password: SECRET_YOU_GET_BY_COMMAND

Go to User Info and change the password to whatever you want, then refresh ArgoCD

Now we need to connect our private repository over SSH

Create an SSH key pair and attach the public key to your private repository: go to Deploy Keys and add the public key there

In ArgoCD, insert the SSH private key and use the repository's SSH link


Modified Values and Ingress files: VPC, Subnets, SSL Certificate

  • Modified Values and Ingress files: VPC, Subnets, SSL Certificate

Go to the ArgoCD private repository you created and change the Certificate ARN in Kubernetes/Ingress.yaml

Go to AWS Console --> Certificate Manager --> Requests --> Copy ARN of SSL Certificate

After this, you need to look up your public subnet IDs and VPC ID and enter them in Kubernetes/values.yaml


Run GitHub Actions with Version 1.0

  • Run GitHub Actions with Version 1.0

After everything go to ArgoCD Private Repository --> Actions --> Update Version for Helm Chart --> Run Workflow --> Choose your Version of Application to Deploy

Now the files values.yaml and Chart.yaml in the Kubernetes directory have been changed to the new version

Create Project in ArgoCD

  • Create Project in ArgoCD

To deploy the application, we're going to use ArgoCD

Open ArgoCD in your browser --> Applications --> New App

Application Name: flask-pipeline (with lowercase letters)
Project Name: default
Sync Policy:
- Prune Resources
- Self Heal

Source:
Repository Url: (Private Repository with SSH)
Revision: HEAD
Path: Kubernetes

Destination:
Cluster Url: https://kubernetes.default.svc

Helm:
Values file: values.yaml

And Click to Create App

After Deployment you can check Load Balancers in Virginia in your AWS Console

Create Route53 Record

  • Go to Route53 --> Hosted Zone --> Your Hosted Zone --> Create Record

Choose a subdomain for your application, or use www

Record Type: A

Alias: Yes
Endpoint: Alias to Application Load Balancer and Classic Load Balancer
Region: Virginia
Choose the Application Load Balancer of the application

Create Record

For example I created Domain name for my Application: web.matveyguralskiy.com

Update Version in GitHub Repository

  • Update Version in GitHub Repository

To update your application, edit the version in index.html, the version in dockercompose.yaml, and the Docker application version in the Jenkinsfile, all in the main repository
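The edit itself is a simple string replacement. A sketch on a stand-in file (the real files are index.html, dockercompose.yaml, and the Jenkinsfile; the variable name here is hypothetical):

```shell
# Demonstrate the version bump with sed on a stand-in file
printf 'APP_VERSION=V1.0\n' > version-demo.txt
sed -i 's/V1\.0/V2.0/' version-demo.txt
cat version-demo.txt
# prints: APP_VERSION=V2.0
```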

Push the update to GitHub with Git and merge it into the main branch

After that Jenkins will start a new Job Automatically

In the Jenkins input, choose "No" for Terraform so that nothing changes in the Terraform remote state

This is how my DockerHub looks after deployments

Run GitHub Actions with new Version

  • Run GitHub Actions with new Version

Go to the private repository, run the GitHub Action, and in "Run workflow" choose the new deployment version, for example V2.0

ArgoCD checks your repository every 3 minutes, so it will see the new push and deploy it

This is my Grafana dashboard of the Worker Nodes

Redirect from HTTP to HTTPS of Application Website


πŸ“± Application for Docker Image Built With

  • Flask
  • HTML
  • CSS
  • JavaScript
  • PostgreSQL
  • SQL injection protection
  • Hashing function + salt

πŸ“‚ Repository

|-- /Ansible

|-- /Application

|-- /ArgoCD_Repository (Private Repository files)

|-- /Bash

|-- /Database

|-- /Docker

|-- /FifOps

|-- /Jenkins

|-- /Monitoring

|-- /Policy

|-- /Screens

|-- /Terraform

|-- /.gitignore

|-- LICENSE

|-- README.md

πŸ“š Acknowledgments

Documentation to help you build the project

πŸ“’ Additional Information

I hope you liked my project. Don't forget to star it, and if you notice a code malfunction or any other errors, don't hesitate to correct them and help improve the project for others

πŸ“© Contact

Email - Contact

GitHub - Profile

LinkedIn - Profile

Instagram - Profile

© License

Distributed under the MIT license. See LICENSE.txt for more information.
