- GCP dynamic inventory
  - Uses the gcp_compute inventory plugin
  - Returns the list of GCE instances that have the `ansible` label, grouped by label value
- Build role
  - Installs Docker
  - Builds a Docker image from the copied Dockerfile
  - Pushes the built image to the Docker registry
- Deploy role
  - Installs gcloud, kubectl and helm
  - Creates Helm values based on the given environment
  - Gets cluster credentials using the provided service account
  - Deploys a Helm release from the copied chart to the cluster
- build.yml and deploy.yml playbooks are used to run the roles
- ansible.dockerfile is used to build an image for Jenkins jobs
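The dynamic inventory described above can be sketched as a `gcp_compute` plugin config. This is only an illustration: the file name, project ID, key path and filter expression are assumptions, not the project's actual file.

```yaml
# inventory.gcp.yml - illustrative sketch of the gcp_compute dynamic inventory
plugin: google.cloud.gcp_compute
projects:
  - devops-project-12345                   # assumed project ID
auth_kind: serviceaccount
service_account_file: /path/to/key.json    # assumed key of the ansible-control-node SA
filters:
  - labels.ansible:*                       # assumed filter: only instances with the `ansible` label
keyed_groups:
  # Group hosts by the value of their `ansible` label,
  # e.g. instances labeled ansible=build land in group "build"
  - key: labels.ansible
    separator: ""
```

With such a config, `ansible-inventory -i inventory.gcp.yml --graph` would show one group per distinct `ansible` label value.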
Templates:
- Secret containing the given `htpasswd` file for authentication
- Secret containing the given TLS certificate and key
- Secret containing the given service account key with permissions to the GCS bucket
- Deployment
  - 2 replicas
  - `registry:2` image
  - Generated HTTP secret for coordinating uploads
  - Environment variables from the created secrets
- Service exposes pods on HTTPS port `443` and creates a NEG for the Load Balancer
- Ingress creates an Internal HTTPS Load Balancer with the given static IP and the TLS certificate from the secret
- Helpers file contains templates for defining labels and selectorLabels for resources.
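The NEG creation happens through a GKE Service annotation. A minimal sketch of the Service the chart might render (name, selector and target port are assumptions):

```yaml
# Sketch of the registry Service; cloud.google.com/neg is the GKE annotation
# that creates a standalone Network Endpoint Group for the Load Balancer
apiVersion: v1
kind: Service
metadata:
  name: docker-registry                  # assumed name
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"443": {}}}'
spec:
  selector:
    app: docker-registry                 # assumed selectorLabels
  ports:
    - name: https
      port: 443
      targetPort: 5000                   # registry:2 listens on port 5000 by default
```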
Installed by Helm chart with the following Values:
- config.yaml configures using the JCasC plugin:
  - Jenkins image built from jenkins.dockerfile. This image installs required plugins from plugins.txt and trusts Docker registry certificates
  - Internal Ingress for accessing Jenkins and External Ingress for the GitHub webhook
  - Users and their permissions
  - Ansible agent
- credentials.yaml creates Jenkins credentials from the corresponding Kubernetes secrets
- pipelines.yaml creates the following Jobs and Pipelines using the JobDSL plugin:
  - `build-job` runs the Ansible Build role on the Ansible agent
  - `deploy-job` runs the Ansible Deploy role on the Ansible agent
  - `build-pipeline` - Multibranch Pipeline that runs build.jenkinsfile in `docker` and `openjdk` containers. Triggered on Pull Requests and on pushes to the `main` or `develop` branches. If the build is successful, `deploy-pipeline` will be built
  - `deploy-pipeline` runs deploy.jenkinsfile in a `gcloud-helm` container built from gcloud-helm.dockerfile. Uses the Active Choices plugin with a Groovy script that returns the list of Docker image tags, instead of the Image Tag Parameter plugin, which doesn't support self-signed certificates
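In JCasC terms, an agent like the Ansible one is typically declared as a Kubernetes pod template. A sketch of such a config.yaml fragment (namespace, image URL and labels are assumptions):

```yaml
# Sketch of a JCasC fragment defining the Ansible agent (values assumed)
jenkins:
  clouds:
    - kubernetes:
        name: "kubernetes"
        namespace: "jenkins"                                 # assumed namespace
        templates:
          - name: "ansible"
            label: "ansible"                                 # label targeted by build-job/deploy-job
            containers:
              - name: "ansible"
                image: "registry.example.com/ansible:latest" # assumed image from ansible.dockerfile
                command: "sleep"
                args: "infinity"
```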
Templates:
- Deployment
  - Deploys the given `spring-petclinic` image
  - If `mysql.cloudsqlProxy` is true, a `cloud-sql-proxy` sidecar is deployed for connecting to the Cloud SQL database on a private IP via the Cloud SQL Auth Proxy. Otherwise, a `mysql` container is deployed.
  - Uses the created `sql-proxy-sa` Kubernetes Service Account
- Service exposes pods on HTTP port `8080`
- Ingress creates an Internal or External HTTPS Load Balancer depending on the `loadBalancer.type` value.
- Managed Certificate creates a Google-managed SSL certificate for the External Load Balancer
- Service Account is allowed to impersonate the IAM service account from annotation.
- Horizontal Pod Autoscaler automatically scales Deployment to match demand.
- Helpers file contains templates for defining labels and selectorLabels for resources.
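The toggles above map to chart values. A hedged sketch of a values.yaml: only `mysql.cloudsqlProxy` and `loadBalancer.type` come from the chart description; the other keys and values are assumed names for illustration.

```yaml
# Illustrative values.yaml for the spring-petclinic chart
image: registry.example.com/spring-petclinic:latest  # assumed key name and format
mysql:
  cloudsqlProxy: true     # true: cloud-sql-proxy sidecar; false: in-pod mysql container
loadBalancer:
  type: External          # Internal or External HTTPS Load Balancer (assumed enum values)
```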
I mostly used Google Cloud Foundation Toolkit Terraform modules for provisioning Google Cloud Platform resources:
- VPC
- Subnet with secondary ranges
- Firewall rule for internal subnet traffic
- Firewall rule for SSH connections from GKE cluster pods to `ansible` nodes
- Cloud Router and NAT
- Private GKE cluster with 2 node pools
- Service account with the Cloud SQL Client role and a Workload Identity User binding with the Kubernetes Service Account `sql-proxy-sa` in the `petclinic-ci` namespace
- Service accounts:
  - `ansible-control-node` - used by Ansible to create inventory
  - `docker-registry-storage` - used by the Docker registry to access the GCS bucket
  - `gke-deploy` - used to deploy Helm charts to GKE clusters
  - `os-login` - used to connect to instances by SSH
- `ansible-managed-node` service account used by Ansible managed node GCE instances
- Grant the `os-login` service account the `iam.serviceAccountUser` role on the `ansible-managed-node` service account
- `wireguard` service account used by the Wireguard server GCE instance
- Grant the `os-login` service account the `iam.serviceAccountUser` role on the `wireguard` service account
- Grant the `ansible-control-node` service account the `compute.viewer` role
- Grant the `os-login` service account the `compute.osAdminLogin` role
- Grant the `gke-deploy` service account the `container.developer` role
- GCS Bucket with a random name suffix
- Grant the `docker-registry-storage` service account the `storage.objectAdmin` role on the GCS bucket
- Private Service Access for connecting to Cloud SQL by Private IP
- MySQL Cloud SQL instance, user and database
- Instance Template for running the Ansible Build role
- Compute Instance from the Build Instance Template
- Instance Template for running the Ansible Deploy role
- Compute Instance from the Deploy Instance Template
- DNS private zone
- Private addresses and corresponding DNS records for the Docker registry, Jenkins and QA endpoints
- Regional public address for the Wireguard server
- Global public address for the spring-petclinic and Jenkins webhook endpoints
- DNS public zone and DNS record for spring-petclinic
- DNS forwarding policy
- Wireguard server Instance Template
- Wireguard server Compute Instance with a startup script
- Firewall rule for connecting to the Wireguard server
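On the Kubernetes side, the Workload Identity binding listed above surfaces as a single annotation on the `sql-proxy-sa` Service Account (the IAM service account email below is an assumed example):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sql-proxy-sa
  namespace: petclinic-ci
  annotations:
    # Workload Identity: pods using this KSA act as the IAM service account below
    iam.gke.io/gcp-service-account: sql-proxy@devops-project-12345.iam.gserviceaccount.com  # assumed email
```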
Install and trust TLS certificates using:
- Ansible playbook on all Ansible managed nodes
- Kubernetes DaemonSet on Kubernetes cluster
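One common shape for such a DaemonSet is an init container that copies the certificate from a Secret into each node's Docker `certs.d` directory via a hostPath mount. A sketch of this pattern, with assumed secret and registry host names (not necessarily how this project implements it):

```yaml
# Sketch: distribute a registry CA cert to every node's Docker trust store
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: registry-ca-trust
spec:
  selector:
    matchLabels:
      app: registry-ca-trust
  template:
    metadata:
      labels:
        app: registry-ca-trust
    spec:
      initContainers:
        - name: install-cert
          image: busybox
          # Copy the cert onto the node's filesystem
          command: ["sh", "-c", "cp /certs/ca.crt /host-certs/ca.crt"]
          volumeMounts:
            - name: cert
              mountPath: /certs
            - name: host-certs
              mountPath: /host-certs
      containers:
        - name: pause
          image: registry.k8s.io/pause:3.9   # keeps the pod alive after init
      volumes:
        - name: cert
          secret:
            secretName: registry-tls         # assumed secret name
            items:
              - key: tls.crt
                path: ca.crt
        - name: host-certs
          hostPath:
            # Docker trusts certs placed under certs.d/<registry-host>
            path: /etc/docker/certs.d/registry.example.com  # assumed registry host
            type: DirectoryOrCreate
```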
I used a Wireguard VPN server to access my VPC. I chose Wireguard because I need a client-to-site connection and it's relatively easy to set up compared to OpenVPN and other VPN protocols. I got a free domain for the spring-petclinic CI Load Balancer IP from the Freenom domain provider.
Spring-petclinic image is built from spring-petclinic.dockerfile.
- Clone the GitHub repo:
  ```
  git clone https://github.com/Pienskoi/DevOpsProject.git
  ```
- `gcloud`, `docker`, `kubectl`, `helm`, `ansible` and `wireguard` should be installed. You must be logged in to `gcloud` with the required permissions.
- Run the setup-pipeline script from the repo folder:
  ```
  cd DevOpsProject
  chmod +x ./setup-pipeline
  ./setup-pipeline
  ```
  You can specify variables by entering them one after the other after calling the script, by passing them as arguments in `KEY=VALUE` style, or both:
  ```
  ./setup-pipeline PROJECT_ID=devops-project-12345 REGION=europe-west1
  ```
- Add the nameservers from the script output to your domain config
- Create a GitHub webhook with Push and Pull Request events pointing to the URL from the script output
- Jenkins UI is accessible at `jenkins.project.com` over VPN
- QA environment will be accessible at `qa.project.com` over VPN
- CI environment will be accessible on the provided global domain. You may need to wait some time before the Google-managed SSL certificates are created and provisioned.
Security config:
Credentials:
Kubernetes Cloud Agents:
Jobs and Pipelines:
Deploy parameters:
Build pipeline:
Deploy pipeline:
Kubernetes Engine cluster:
Kubernetes Engine Workloads after deploying all environments:
Kubernetes Engine Ingresses after deploying all environments:
Load Balancers after deploying all environments:
Compute engine VM instances:
Cloud Storage Bucket with Docker registry images:
Cloud SQL instance:
Google-managed SSL certificate for External Load Balancer:
Freenom domain Nameservers config:
CI environment:
QA environment:
An alternative variant of the CI/CD pipeline, which replaces Jenkins and the Docker registry with the Cloud Build and Artifact Registry services, is located in the cloud-build directory.
It uses the same Spring petclinic Helm Chart and Dockerfile. The Terraform module and Setup script include the changes described below.
- Package with Maven.
- Build Docker image with `latest` and shortened commit SHA tags.
- Package Helm chart.
- Retrieve Helm chart version from Chart.yaml file.
- Push Helm chart to Artifact Registry OCI repository.
- Trigger `deploy-ci` build if triggered on push to `main` branch.
- Push Docker images from `images` list to Artifact Registry Docker repository.
- Artifact Registry repository URL should be specified in the `_ARTIFACT_REGISTRY_REPO` substitution.
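The build steps above might be expressed in a cloudbuild.yaml roughly like this. Builder images, file paths and the chart location are assumptions; `SHORT_SHA` and `$PROJECT_ID` are default Cloud Build substitutions.

```yaml
# Sketch of the build config; not the project's actual cloudbuild.yaml
steps:
  # Package the app with Maven
  - name: maven:3-openjdk-17              # assumed builder image
    entrypoint: mvn
    args: ["package", "-DskipTests"]
  # Build the image with latest and short commit SHA tags
  - name: gcr.io/cloud-builders/docker
    args: ["build",
           "-t", "${_ARTIFACT_REGISTRY_REPO}/spring-petclinic:latest",
           "-t", "${_ARTIFACT_REGISTRY_REPO}/spring-petclinic:${SHORT_SHA}",
           "-f", "spring-petclinic.dockerfile", "."]
  # Package the chart with the community Helm builder pushed to Container Registry
  - name: gcr.io/$PROJECT_ID/helm
    args: ["package", "chart/spring-petclinic"]   # assumed chart path
images:   # pushed to the Artifact Registry Docker repository
  - ${_ARTIFACT_REGISTRY_REPO}/spring-petclinic:latest
  - ${_ARTIFACT_REGISTRY_REPO}/spring-petclinic:${SHORT_SHA}
```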
- Create `values.yaml` file.
- Deploy Helm release to GKE cluster in `petclinic-ci` namespace.
- MySQL credentials, SQL proxy connection details and domain are secrets stored in Secret Manager.
- Required substitutions:
  - `_IMAGE` - Docker image URL with tag stored in Artifact Registry
  - `_CHART` - Helm chart URL stored in Artifact Registry
  - `_CHART_VERSION` - Helm chart version
  - `_CLUSTER` - GKE cluster name where Helm chart will be deployed
  - `_CLUSTER_REGION` - GKE cluster region where Helm chart will be deployed
  - `_PRIVATEPOOL` - Cloud Build private worker pool ID
- Create `values.yaml` file.
- Deploy Helm release to GKE cluster in `petclinic-qa` namespace.
- Required substitutions:
  - `_IMAGE` - Docker image URL with tag stored in Artifact Registry
  - `_CHART` - Helm chart URL stored in Artifact Registry
  - `_CHART_VERSION` - Helm chart version
  - `_CLUSTER` - GKE cluster name where Helm chart will be deployed
  - `_CLUSTER_REGION` - GKE cluster region where Helm chart will be deployed
  - `_MYSQL_DATABASE` - MySQL database name
  - `_MYSQL_USERNAME` - MySQL user name
  - `_MYSQL_PASSWORD` - MySQL user password
  - `_PRIVATEPOOL` - Cloud Build private worker pool ID
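A deploy build using these substitutions could be sketched as follows. This assumes the community Helm builder, which reads cluster credentials from `CLOUDSDK_*` environment variables; the worker pool is selected via `options.pool`. The values wiring is illustrative only.

```yaml
# Sketch of a deploy-qa-style config; not the project's actual file
steps:
  - name: gcr.io/$PROJECT_ID/helm
    args: ["upgrade", "--install", "petclinic",
           "${_CHART}", "--version", "${_CHART_VERSION}",
           "--namespace", "petclinic-qa",
           "--set", "mysql.database=${_MYSQL_DATABASE}"]   # assumed values wiring
    env:
      - "CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER}"
      - "CLOUDSDK_COMPUTE_REGION=${_CLUSTER_REGION}"
options:
  pool:
    name: ${_PRIVATEPOOL}   # run the build on the private worker pool
```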
Removed: modules, resources and outputs related to Ansible, Jenkins and Docker registry.
Added:
- `cloudbuild-build` and `cloudbuild-deploy` service accounts with needed permissions for corresponding Cloud Build builds.
- Artifact Registry repository for Docker images and Helm charts.
- `cloudbuild-vpc` VPC network.
- Peering of Cloud Build worker pool Google-managed network with created `cloudbuild-vpc`.
- Export of custom routes in GKE and Cloud Build peerings.
- Cloud Build Worker Pool.
- `project-vpc` to `cloudbuild-vpc` Cloud HA VPN with advertised routes.
- Cloud Build triggers:
  - `build-push` - GitHub push trigger
  - `build-pr` - GitHub Pull Request trigger
  - `deploy-ci` - manual trigger
  - `deploy-qa` - manual trigger
- Secret Manager secrets and versions.
Removed: commands and arguments related to Ansible, Jenkins and Docker registry.
Added:
- Enabling the Cloud Build, Container Registry, Artifact Registry and Secret Manager services.
- Pushing the Helm Cloud Builder Community image to Container Registry by submitting a Cloud Build build.
- Confirmation prompt to verify that the GitHub repository is connected to Cloud Build.
Triggers:
Build history:
Build details:
Images:
Charts: