This repository serves as an example project where you can experiment with different "stacks" using Terramate while following generally good design practices.
CI Pipeline Related:
- GitHub Actions
- Aqua Security tfsec
- Infracost
Infra-as-Code and Orchestration Related:
- Terraform
- Terraform Cloud
- Terramate
- Pluralith
Kubernetes Related:
- containerd
- Helm
- Karpenter
- IRSA using OIDC
- AWS Load Balancer Controller
- VPC CNI
- Amazon EBS CSI Driver
- External Snapshotter Controller
- Prometheus Operator Stack
- metrics-server
- Kubecost
AWS Services:
- VPC
- NAT Instances (acting as NAT Gateways)
- Identity and Access Management
- EKS - Managed Kubernetes Service
- EC2 Instances
- Launch Templates
- Autoscaling Groups
- Elastic Load Balancers
- KMS - Key Management Service
Some of the components/services documented in the diagram have yet to be added to the project (see the Available Technologies/Tools list above).
- They will be missing until they are added to the project.
When provisioning the "dev" stack (stacks/dev), backend state storage is set to "remote" (Terraform Cloud) by default.
- You will need to modify the tfe_organization global variable in the stacks/config.tm.hcl file with your Organization ID.
- You can also opt for "local" backend state storage by setting the isLocal global variable to true in the stacks/dev/config.tm.hcl file.
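If you want to confirm where these globals live before editing, a quick search works. This is only a sketch: the globals block shown in the comments is illustrative, not the files' exact contents.

# Locate the tfe_organization and isLocal globals before editing them, e.g.:
#   globals {
#     tfe_organization = "your-org-id"   # stacks/config.tm.hcl
#     isLocal          = true            # stacks/dev/config.tm.hcl
#   }
grep -Rn --include="*.tm.hcl" "tfe_organization\|isLocal" stacks/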
We recommend using a sandbox or trial account (e.g., an A Cloud Guru Playground) when first using the project.
- This protects you from accidentally causing issues in your existing environments/configurations.
- A sandbox account also prevents naming collisions with your existing resources during provisioning.
There are many opportunities for optimizing the configuration in this project. (This is intentional!)
- The project is intended for testing sample infrastructure code and illustrating how you might structure your own project.
Those running an ARM CPU architecture (e.g., Apple's M1) might find it challenging to use the project.
- This is due to the current lack of ARM-compiled binaries for some of the tooling and the lack of native emulation (Rosetta 2 is expected as part of macOS 13 Ventura).
- Method 1: Running from your local system (tested on macOS 10.15 Catalina)
- Method 2: Running within a custom Docker image
Required for Method 1
- git (v2.x)
- jq (any version)
- make (any version)
- aws-cli (v2.7)
- terramate (v0.1.35+)
- terraform (v1.2.9+)
- kubectl (v1.19+)
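Before starting, it may help to verify the Method 1 tooling is on your PATH. This is a minimal sanity check; version output formats vary by tool.

# Confirm the Method 1 prerequisites are installed
git --version
jq --version
make --version
aws --version
terramate version
terraform version
kubectl version --client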
Required for Method 2
- docker (v20.10+)
export AWS_DEFAULT_REGION='us-west-2'
export AWS_ACCESS_KEY_ID='<PASTE_YOUR_ACCESS_KEY_ID_HERE>'
export AWS_SECRET_ACCESS_KEY='<PASTE_YOUR_SECRET_ACCESS_KEY_HERE>'
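Before provisioning, you can optionally confirm the exported credentials resolve to the account you expect:

# Optional: verify the credentials and target account
aws sts get-caller-identity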
# Terramate Generate
terramate generate
git add -A
# Terraform Provisioning
cd stacks/local
terramate run -- terraform init
terramate run -- terraform apply
# Add the EKS cluster credentials to your kubeconfig (change the cluster name if necessary!)
aws eks update-kubeconfig --name ex-eks
# Edit your kubeconfig to connect to the cluster: append the AWS credentials to the
# exec block of the user entry (the last section written by update-kubeconfig).
# Note: the indentation below assumes the default layout produced by aws eks update-kubeconfig.
cat <<EOT >> ~/.kube/config
      env:
      - name: AWS_ACCESS_KEY_ID
        value: ${AWS_ACCESS_KEY_ID}
      - name: AWS_SECRET_ACCESS_KEY
        value: ${AWS_SECRET_ACCESS_KEY}
EOT
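Once the kubeconfig is updated, a quick connectivity check confirms the credentials work against the cluster:

# Optional: verify connectivity to the EKS cluster
kubectl get nodes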
# Build and start the custom Docker image (Method 2), then exec into the container
make build && make start
make exec
# Source Script Functions
source functions.sh
# Example: Changing Directory into the "Local" Stack
cd /project/stacks/local
# Terramate Commands (Generate/Validate/Apply)
tm-apply
# Fetch the EKS cluster credentials (helper from functions.sh)
eks-creds
# Check Helm releases and API resources for APIs deprecated/removed in Kubernetes v1.25
pluto detect-helm -o wide -t k8s=v1.25.0
pluto detect-api-resources -o wide -t k8s=v1.25.0
# Scale the inflate deployment up to trigger Karpenter node provisioning, then back down
kubectl scale deployment inflate --replicas 2
kubectl scale deployment inflate --replicas 0
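To observe Karpenter reacting to the scale-up (and later removing capacity after the scale-down), you can watch the node list from a second terminal:

# Optional: watch nodes being provisioned/removed by Karpenter
kubectl get nodes -w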
# Returns the name of the Persistent Volume backing the Kubecost PVC
# (note: .items[1] assumes the ordering of PVs in the cluster; PVs are cluster-scoped)
PVC_ID=$(kubectl get pv -o json | jq -r '.items[1].metadata.name')
# Note: If the following command doesn't return a value for VOLUME_ID, the volume is likely
# already managed by the EBS CSI driver (the new default gp3 StorageClass). If so, use the
# "alternate" command below to continue the exercise.
# Use this for gp2 volume types
VOLUME_ID=$(kubectl get pv $PVC_ID -o jsonpath='{.spec.awsElasticBlockStore.volumeID}' | rev | cut -d'/' -f 1 | rev)
# Alternate command for use with gp3 volume types
VOLUME_ID=$(kubectl get pv $PVC_ID -o jsonpath='{.spec.csi.volumeHandle}' | rev | cut -d'/' -f 1 | rev)
# Creates the Snapshot from the Volume / Persistent Volume
SNAPSHOT_RESPONSE=$(aws ec2 create-snapshot --volume-id $VOLUME_ID --tag-specifications 'ResourceType=snapshot,Tags=[{Key="ec2:ResourceTag/ebs.csi.aws.com/cluster",Value="true"}]')
aws ec2 describe-snapshots --snapshot-ids $(echo "${SNAPSHOT_RESPONSE}" | jq -r '.SnapshotId')
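Snapshot creation is asynchronous; before importing it into the cluster you can optionally wait for it to finish (the waiter polls describe-snapshots until the snapshot state is "completed"):

# Optional: block until the snapshot has finished
aws ec2 wait snapshot-completed --snapshot-ids $(echo "${SNAPSHOT_RESPONSE}" | jq -r '.SnapshotId')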
cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotContent
metadata:
  name: imported-aws-snapshot-content # <-- Make sure to use a unique name here
spec:
  volumeSnapshotRef:
    kind: VolumeSnapshot
    name: imported-aws-snapshot
    namespace: kubecost
  source:
    snapshotHandle: $(echo "${SNAPSHOT_RESPONSE}" | jq -r '.SnapshotId')
  driver: ebs.csi.aws.com
  deletionPolicy: Delete
  volumeSnapshotClassName: ebs-csi-aws
EOF
cat <<EOF | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: imported-aws-snapshot
  namespace: kubecost
spec:
  volumeSnapshotClassName: ebs-csi-aws
  source:
    volumeSnapshotContentName: imported-aws-snapshot-content # <-- Here is the reference to the Snapshot by name
EOF
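Before creating the PVC, it's worth confirming the imported snapshot is bound and ready; readyToUse should report true once the snapshot handles are matched:

# Optional: check that the imported VolumeSnapshot is ready to use
kubectl -n kubecost get volumesnapshot imported-aws-snapshot -o jsonpath='{.status.readyToUse}'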
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: imported-aws-snapshot-pvc
  namespace: kubecost
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp3
  resources:
    requests:
      storage: 32Gi
  dataSource:
    name: imported-aws-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF
# Point the kubecost-cost-analyzer Deployment at the restored PVC
kubectl -n kubecost patch deployment kubecost-cost-analyzer --patch '{"spec": {"template": {"spec": {"volumes": [{"name": "persistent-configs", "persistentVolumeClaim": { "claimName": "imported-aws-snapshot-pvc"}}]}}}}'
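After patching, you can watch the rollout to confirm the Deployment restarts cleanly with the restored volume:

# Optional: confirm the cost-analyzer rolls out with the restored PVC
kubectl -n kubecost rollout status deployment/kubecost-cost-analyzer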
# Set Infracost Credentials
export INFRACOST_API_KEY="<INFRACOST_API_KEY_HERE>"
export INFRACOST_ENABLE_DASHBOARD=true
# Generate Cost Usage Report
terramate run -- infracost breakdown --path . --usage-file ./infracost-usage.yml --sync-usage-file
# Add the Infracost binary to the PATH (required by Pluralith for cost data)
export PATH=$PATH:/root/.linuxbrew/Cellar/infracost/0.10.13/bin
# Set Pluralith Credentials
export PLURALITH_API_KEY="<PLURALITH_API_KEY_HERE>"
export PLURALITH_PROJECT_ID="<PLURALITH_PROJECT_ID_HERE>"
# Run Pluralith Init & Plan
terramate run -- pluralith init --api-key $PLURALITH_API_KEY --project-id $PLURALITH_PROJECT_ID
terramate run -- pluralith run plan --title "Stack" --show-changes=false --show-costs=true --cost-usage-file=infracost-usage.yml
# Destroy all stacks in reverse order
terramate run --reverse -- terraform destroy