This repository contains the reference code for our in-depth course Try Knative and the related article on our blog.
There are a few requirements before you get started.
- Watch Terraforming Kubernetes on Linode or have experience with Kubernetes clusters
- Git installed
- Terraform installed
- Kubectl installed
If you have all of these complete, let's do this.
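As an optional sanity check (this helper is not part of the repo, just a convenience), you can verify the required tools are on your `PATH` before continuing:

```bash
# Hypothetical preflight check -- not part of this repo.
require() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1" >&2
    return 1
  fi
}

MISSING=""
for tool in git terraform kubectl; do
  require "$tool" || MISSING=1
done

if [ -z "$MISSING" ]; then
  echo "All required tools found."
fi
```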
I have modified the Terraforming Kubernetes Rapid-Fire repo for this project by removing the git history and the `k8s.yaml` file.
```bash
git clone https://github.com/codingforentrepreneurs/try-knative
cd try-knative
```
Once you clone this repo, you'll have an `installers/install-knative-istio.sh` file that installs Knative Serving and Istio on your Kubernetes cluster based on their standard installation instructions. More on this in a bit.
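For reference, the script follows the standard Knative Serving + net-istio installation flow. Here is a hedged sketch of what it does — the release pin and the dry-run fallback are my assumptions, so check the script in the repo for the exact commands:

```bash
# Sketch of the install flow, based on Knative's standard installation docs.
# KNATIVE_VERSION is an assumed pin; the repo's script may use a different release.
KNATIVE_VERSION="knative-v1.9.0"
BASE="https://github.com/knative"

MANIFESTS="
$BASE/serving/releases/download/$KNATIVE_VERSION/serving-crds.yaml
$BASE/serving/releases/download/$KNATIVE_VERSION/serving-core.yaml
$BASE/net-istio/releases/download/$KNATIVE_VERSION/istio.yaml
$BASE/net-istio/releases/download/$KNATIVE_VERSION/net-istio.yaml
"

for m in $MANIFESTS; do
  if command -v kubectl >/dev/null 2>&1; then
    # Apply each manifest; don't abort the loop if one apply fails.
    kubectl apply -f "$m" || echo "apply failed for $m" >&2
  else
    echo "dry run (kubectl not found): would apply $m"
  fi
done
```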
If you want to start fresh, check out the `fresh` branch and remove the git history:

```bash
git checkout fresh
rm -rf .git
git init
```
Now you have a fresh repo that includes only the code for the rest of this README.
Create an account on Linode and get an API key in your Linode account here. Once you have a key, do the following:
```bash
echo "linode_api_token=\"YOUR_API_KEY\"" >> terraform.tfvars
echo "k8s_node_type=\"g6-standard-2\"" >> terraform.tfvars
```
In my tests, the minimum node instance type we need for Knative is `g6-standard-2`, with a minimum of 3 nodes in the cluster.
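For context, the two `terraform.tfvars` entries above feed into variable declarations in the repo's Terraform configuration. If you're recreating the project from scratch, those declarations look roughly like this (the variable names come from the commands above; the descriptions and defaults are my assumptions):

```terraform
# Sketch of the input variable declarations these tfvars entries populate.
variable "linode_api_token" {
  description = "Linode API token used by the Linode provider"
  type        = string
  sensitive   = true
}

variable "k8s_node_type" {
  description = "Linode instance type for LKE nodes"
  type        = string
  default     = "g6-standard-2"
}
```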
Now run:

```bash
terraform init
```
If you're using git, be sure your repo includes at least this `.gitignore` file.
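If you don't have the repo's `.gitignore` handy, a common Terraform baseline looks like this (an assumption — prefer the one shipped in the repo if present):

```gitignore
# Common Terraform ignores (a typical baseline; adjust as needed)
.terraform/
*.tfstate
*.tfstate.backup
terraform.tfvars
crash.log
```

The key entries are `terraform.tfvars` (it holds your API token) and `*.tfstate` (state files can contain secrets).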
In `infra.tf`, we'll update the `linode_lke_cluster.terraform_k8s` resource to add autoscaling to our cluster and update the `k8s_version` to `1.25`.
```terraform
resource "linode_lke_cluster" "terraform_k8s" {
  k8s_version = "1.25"
  label       = "try-knative"
  region      = "us-east"
  tags        = ["try-knative"]

  pool {
    type  = var.k8s_node_type
    count = 3

    autoscaler {
      min = 3
      max = 8
    }
  }
}
```
The `autoscaler` declaration ensures that Kubernetes has as many nodes as it needs to run your containerized applications; LKE manages this for you without further intervention. The autoscaler has strange behavior in Terraform, so we'll update it in a bit.
Optional: I also updated the `label` and `tags` to be `try-knative` instead of `terraform-k8s` for this project. You should update them as you see fit. I left the name of the Terraform resource `linode_lke_cluster` as `terraform_k8s` to limit how much of `infra.tf` we need to change.
```bash
terraform apply
```
Use `terraform apply -auto-approve` if you're really in a hurry.
Adding the autoscaling declaration to our LKE resource is a bit wonky because Terraform will constantly try to reconcile your cluster back to the declared state. For example, if LKE autoscaled your cluster to 5 nodes, Terraform will try to scale it back down to 3 nodes. To fix this, we can ignore changes to the `pool` declaration until we want to change it explicitly.
```terraform
resource "linode_lke_cluster" "terraform_k8s" {
  k8s_version = "1.25"
  label       = "try-knative"
  region      = "us-east"
  tags        = ["try-knative"]

  pool {
    type  = var.k8s_node_type
    count = 3

    autoscaler {
      min = 3
      max = 8
    }
  }

  lifecycle {
    ignore_changes = [
      pool,
    ]
    create_before_destroy = true
  }
}
```
Knative Serving is the core Kubernetes component that allows you to run serverless containerized applications. Istio is a service mesh that allows you to manage traffic between your applications and, in our case, with the outside world.
You can install them using the links below or with our bash script:
- Install Knative Serving Docs
- Install Knative + Istio Installation Docs
The macOS/Linux bash script is available in this repo at `installers/install-knative-istio.sh`. Let's run it now:

```bash
chmod +x ./installers/install-knative-istio.sh
./installers/install-knative-istio.sh
```
At this time, it's recommended that Windows users use the links for Knative Serving and Knative + Istio installation from above.
When you run `./installers/install-knative-istio.sh`, you should see output related to your Knative/Istio installation. Here's the command we can run again to get our Knative ingress IP address. If you're using Linode and LKE, this IP address comes from a Linode load balancing service called NodeBalancers, which is how Kubernetes can provide us an additional IP address.
```bash
kubectl --namespace istio-system get service istio-ingressgateway
export KNATIVE_INGRESS_IP=$(kubectl --namespace istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "Your IP Address is: $KNATIVE_INGRESS_IP"
echo "Add an A record for your domain using the above IP address."
```
At this point, I recommend you review our Try Knative article on deploying containers with Knative services, mapping domains, and more!
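To give you a taste of what the article covers, here is the standard `helloworld-go` sample Service from the Knative docs (the image and names are Knative's published sample, not something from this repo):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative"
```

Saved as `service.yaml`, you would deploy it with `kubectl apply -f service.yaml`, and Knative scales the container to zero when it receives no traffic.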