This example demonstrates how to leverage Istio's identity and access control policies to help secure microservices running on GKE.
We'll use the Hipstershop sample application to cover:
- Incrementally adopting Istio strict mutual TLS authentication across the service mesh
- Enabling JWT authentication for the frontend service
- Using an Istio authorization policy to secure access to the frontend service
Google Cloud Shell is a browser-based terminal that Google provides to interact with your GCP resources. It is backed by a free Compute Engine instance that comes with many useful tools already installed, including everything required to run this demo.
Click the button below to open the demo instructions in your Cloud Shell:
- Change into the demo directory.
cd security-intro
- From Cloud Shell, enable the Kubernetes Engine API.
gcloud services enable container.googleapis.com
- Create a GKE cluster.
gcloud beta container clusters create istio-security-demo \
--zone=us-central1-f \
--machine-type=n1-standard-2 \
--num-nodes=4
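🔎 Optional check (not part of the original steps): confirm that kubectl is connected to the new cluster and that all four nodes are ready.
kubectl get nodes
# expect four nodes (the cluster was created with --num-nodes=4), each with STATUS "Ready"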
- Install Istio on the cluster.
cd common/
./install_istio.sh
- Wait for all Istio pods to be `Running` or `Completed`.
kubectl get pods -n istio-system
We will use the Hipstershop sample application for this demo.
- Apply the sample app manifests.
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/kubernetes-manifests.yaml
kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/istio-manifests.yaml
- Run `kubectl get pods -n default` to ensure that all pods are `Running` and `Ready`.
NAME                                     READY   STATUS    RESTARTS   AGE
adservice-76b5c7bd6b-zsqb8               2/2     Running   0          1m
checkoutservice-86f5c7679c-8ghs8         2/2     Running   0          1m
currencyservice-5749fd7c6d-lv6hj         2/2     Running   0          1m
emailservice-6674bf75c5-qtnd8            2/2     Running   0          1m
frontend-56fdfb866c-tvdm6                2/2     Running   0          1m
loadgenerator-b64fcb8bc-m6nd2            2/2     Running   0          1m
paymentservice-67c6696c54-tgnc5          2/2     Running   0          1m
productcatalogservice-76c6454c57-9zj2v   2/2     Running   0          1m
recommendationservice-78c7676bfb-xqtp6   2/2     Running   0          1m
shippingservice-7bc4bc75bb-kzfrb         2/2     Running   0          1m
🔎 Each pod has 2 containers, because each pod now has the injected Istio sidecar proxy.
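🔎 Optional check: you can verify this from the command line by listing the container names in one of the pods. The frontend pod is used here as an example; the app container's own name may differ.
kubectl get pod -l app=frontend -o jsonpath='{.items[0].spec.containers[*].name}'
# expect two names, one of which is istio-proxy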
Now we're ready to enforce security policies for this application.
Authentication refers to identity: Who is this service? Who is this end user? And can I trust that they are who they say they are?
One benefit of using Istio is that it provides uniformity for both service-to-service and end user-to-service authentication. Istio abstracts authentication away from your application code by tunneling all service-to-service communication through the Envoy sidecar proxies. And by using a centralized public key infrastructure, Istio provides consistency to make sure authentication is set up properly across your mesh. Further, Istio allows you to adopt mTLS on a per-service basis, or easily toggle end-to-end encryption for your entire mesh. Let's see how.
Starting in Istio 1.5, the default Istio mTLS behavior is "auto." This means that pod-to-pod traffic will use mutual TLS by default, but pods will still accept plain-text traffic - for instance, from pods in a different namespace that are not injected with the Istio proxy.
Because we deployed the entire sample app into one namespace (`default`) and all pods have the Istio sidecar proxy, traffic will be mTLS for all the sample app workloads. Let's look at this behavior.
- Open the Kiali service graph in a web browser.
istioctl dashboard kiali &
- In the left sidebar, click Graph > Namespace: `default`. Under "Display," click the `Security` view. You should see a lock icon on the edges in the graph, indicating that traffic is encrypted with mTLS.
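🔎 Optional check: you can also confirm from the command line that no PeerAuthentication policies exist yet in the default namespace, which is why the default "auto" behavior applies.
kubectl get peerauthentication -n default
# expect no resources to be listed - auto mTLS does not require an explicit policy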
From this default "permissive" mTLS behavior, we can enforce "strict" mTLS for a workload, namespace, or for the entire mesh. This means that only mTLS traffic will be accepted by the target workload(s).
Let's enforce strict mTLS for the frontend workload. We'll use an Istio `PeerAuthentication` resource to do this.
- To start, see what happens by default when you try to curl the frontend service with plain HTTP, from another pod in the same namespace. Your request should succeed with status `200`, because by default, both TLS and plain-text traffic are accepted.
$ kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl http://frontend:80/ -o /dev/null -s -w '%{http_code}\n'
200
- Open the mTLS policy in `./manifests/mtls-frontend.yaml`. Notice how the authentication policy uses labels and selectors to target the specific `frontend` deployment in the `default` namespace.
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "frontend"
namespace: "default"
spec:
selector:
matchLabels:
app: frontend
mtls:
mode: STRICT
- Apply the policy.
kubectl apply -f ./manifests/mtls-frontend.yaml
- Try to reach the frontend again, with a plain HTTP request from the istio-proxy container in the productcatalogservice pod.
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl http://frontend:80/ -o /dev/null -s -w '%{http_code}\n'
You should see:
000
command terminated with exit code 56
Exit code `56` means "failure to receive network data." This is expected, because the frontend now expects TLS certificates on every request.
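🔎 Optional check: confirm that the workload-specific policy is in place.
kubectl get peerauthentication -n default
# expect a single policy named "frontend" (the columns shown may vary by Istio version)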
Now that we've adopted mTLS for one service, let's enforce mTLS for the entire `default` namespace.
- Open `manifests/mtls-default-ns.yaml`. Notice that we're using the same resource type (`PeerAuthentication`) as we used for the workload-specific policy. The difference is that we omit the `selector` for a specific service, and only specify the `namespace` in which we want to enforce mTLS.
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "default"
namespace: "default"
spec:
mtls:
mode: STRICT
- Apply the resource:
kubectl apply -f ./manifests/mtls-default-ns.yaml
- Clean up by deleting the policies created in this section.
kubectl delete -f ./manifests/mtls-frontend.yaml
kubectl delete -f ./manifests/mtls-default-ns.yaml
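🔎 Optional check: after the policies are deleted, the mesh reverts to permissive mode. Re-run the plain-text request from earlier; once the change propagates (this can take a few seconds), it should return 200 again.
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl http://frontend:80/ -o /dev/null -s -w '%{http_code}\n'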
Now that we've enabled service-to-service authentication in the `default` namespace, let's enforce end-user ("origin") authentication for the `frontend` service, using JSON Web Tokens (JWT).

First, we'll create an Istio policy to enforce JWT authentication for inbound requests to the `frontend` service.
- Open the policy in `./manifests/jwt-frontend-request.yaml`. The Istio policy we'll use is called a `RequestAuthentication` resource.
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: frontend
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  jwtRules:
  - issuer: "testing@secure.istio.io"
    jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.5/security/tools/jwt/samples/jwks.json"
🔎 This policy uses Istio's test JSON Web Key Set (`jwksUri`), the public key used to validate incoming JWTs.
- Apply the `RequestAuthentication` resource.
kubectl apply -f ./manifests/jwt-frontend-request.yaml
- Set a local `TOKEN` variable. We'll use this token on the client side to make requests to the frontend.
TOKEN=$(curl -k https://raw.githubusercontent.com/istio/istio/release-1.4/security/tools/jwt/samples/demo.jwt -s); echo $TOKEN
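🔎 Optional: if you're curious what's inside the demo token, decode its payload - the second dot-separated field of a JWT is base64-encoded JSON. Note the iss and sub claims, which the issuer above and the requestPrincipals field in the next section match against.
echo $TOKEN | cut -d '.' -f2 - | base64 --decode -
# expect JSON containing "iss": "testing@secure.istio.io" and "sub": "testing@secure.istio.io"
# (base64 may print a harmless "invalid input" warning because the JWT payload is unpadded)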
- Curl the frontend with a valid JWT. The request should succeed with a `200` response code.
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl http://frontend:80/ -o /dev/null --header "Authorization: Bearer $TOKEN" -s -w '%{http_code}\n'
- Now, try to reach the frontend without a JWT.
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl http://frontend:80/ -o /dev/null -s -w '%{http_code}\n'
200
You should see a `200` code. Why is this? Because starting in Istio 1.5, the Istio `RequestAuthentication` (JWT) policy is only responsible for validating tokens. If we pass an invalid token, we should see a `401: Unauthorized` response:
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl http://frontend:80/ -o /dev/null --header "Authorization: Bearer helloworld" -s -w '%{http_code}\n'
401
But if we pass no token at all, the `RequestAuthentication` policy is not invoked. Therefore, in addition to this authentication policy, we need an authorization policy that requires a JWT on all requests.
- View the `AuthorizationPolicy` resource: open `manifests/jwt-frontend-authz.yaml`. This policy declares that all requests to the `frontend` workload must have a JWT.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: require-jwt
  namespace: default
spec:
  selector:
    matchLabels:
      app: frontend
  action: ALLOW
  rules:
  - from:
    - source:
        requestPrincipals: ["testing@secure.istio.io/testing@secure.istio.io"]
- Apply the `AuthorizationPolicy`.
kubectl apply -f manifests/jwt-frontend-authz.yaml
- Curl the frontend again, without a JWT. You should now see `403 - Forbidden`. This is the `AuthorizationPolicy` taking effect: all frontend requests must now have a JWT.
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl http://frontend:80/ -o /dev/null -s -w '%{http_code}\n'
403
Finally, curl the frontend once more, this time passing the valid JWT in the `Authorization` header.
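This repeats the earlier command, and assumes the `TOKEN` variable is still set in your shell:
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
    -- curl http://frontend:80/ -o /dev/null --header "Authorization: Bearer $TOKEN" -s -w '%{http_code}\n'
✅ You should see a `200` response code.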
🎉 Well done! You just secured the `frontend` service with a JWT policy and an authorization policy.
- Clean up:
kubectl delete -f manifests/jwt-frontend-authz.yaml
kubectl delete -f manifests/jwt-frontend-request.yaml
We just saw a preview of how to enforce access control using Istio `AuthorizationPolicies`. Let's go deeper into how these policies work.
Unlike authentication, which refers to the "who," authorization refers to the "what": what is this service or user allowed to do?
Requests between Istio services (and between end users and services) are allowed by default. You can then enforce authorization for one or many services using an `AuthorizationPolicy` custom resource.
Let's put this into action by only allowing requests to the `frontend` that have a specific HTTP header (`hello: world`):
apiVersion: "security.istio.io/v1beta1"
kind: "AuthorizationPolicy"
metadata:
name: "frontend"
namespace: default
spec:
selector:
matchLabels:
app: frontend
rules:
- when:
- key: request.headers[hello]
values: ["world"]
- Apply the AuthorizationPolicy for the frontend service:
kubectl apply -f ./manifests/authz-frontend.yaml
- Curl the frontend without the `hello` header. You should see a `403: Forbidden` response.
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl http://frontend:80/ -o /dev/null -s -w '%{http_code}\n'
403
- Curl the frontend with the `hello: world` header. You should now see a `200` response code.
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
-- curl --header "hello:world" http://frontend:80 -o /dev/null -s -w '%{http_code}\n'
200
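🔎 Optional check: the policy matches the header value, not just its presence. A request with a different value for the hello header (for example, "mars") should also be rejected.
kubectl exec $(kubectl get pod -l app=productcatalogservice -o jsonpath={.items..metadata.name}) -c istio-proxy \
    -- curl --header "hello:mars" http://frontend:80 -o /dev/null -s -w '%{http_code}\n'
# expect 403, because the policy only allows requests where hello equals "world"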
✅ You just configured a fine-grained Istio access control policy for one service. We hope this section demonstrated how Istio can support specific, service-level authorization policies using a set of familiar, Kubernetes-based resources.
To avoid incurring additional costs, delete the GKE cluster created in this demo:
gcloud container clusters delete istio-security-demo --zone=us-central1-f
Or, to keep your GKE cluster with Istio and Hipstershop still installed, delete the Istio security resources only:
kubectl delete -f ./manifests
If you're interested in learning more about Istio's security features, read more in the Istio security documentation.