Sockshop demo with Istio service mesh
- Install Istio 1.0.4 using Helm charts.
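A minimal sketch of a Helm-based install for that release (download path and chart options are assumptions; assumes Helm 2 with Tiller already set up):
# Download the 1.0.4 release and install the bundled chart (assumed commands)
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.0.4 sh -
cd istio-1.0.4
helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set grafana.enabled=true --set tracing.enabled=true --set kiali.enabled=true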
- Deploy sock-shop application.
kubectl apply -f 1-sock-shop-install/1-sock-shop-complete-demo-istio.yaml -nsock-shop
istioctl create -f 1-sock-shop-install/2-sockshop-gateway.yaml -nsock-shop
istioctl create -f 1-sock-shop-install/3-virtual-services-all.yaml -nsock-shop
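The gateway and virtual-service files are not reproduced here; a minimal sketch of what they likely contain (gateway name and hosts are assumptions):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: sock-shop-gateway        # assumed name
spec:
  selector:
    istio: ingressgateway        # bind to Istio's default ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: front-end
spec:
  hosts:
  - "*"
  gateways:
  - sock-shop-gateway
  http:
  - route:
    - destination:
        host: front-end
        port:
          number: 80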
The following changes were made to the sock-shop K8S deployment spec to make it work with Istio (a fragment illustrating them is sketched below):
- All service ports are named http-<service-name>, as per the Istio requirements: https://istio.io/docs/setup/kubernetes/spec-requirements/
- Added the epmd port to the rabbitmq service; RabbitMQ needs it to function properly.
- Run the command below so that the catalogue service can connect to catalogue-db. More info: istio/istio#10062
kubectl delete meshpolicies.authentication.istio.io default
- Added version: v1 labels to all deployments. (Required for Istio destination rules to work properly.)
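For illustration, a minimal fragment showing the port-naming and version-label conventions (service name, port and image are assumed values):
apiVersion: v1
kind: Service
metadata:
  name: catalogue
spec:
  selector:
    name: catalogue
  ports:
  - name: http-catalogue        # protocol-prefixed port name, per Istio's spec requirements
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalogue
spec:
  selector:
    matchLabels:
      name: catalogue
  template:
    metadata:
      labels:
        name: catalogue
        version: v1             # version label that Istio destination-rule subsets match on
    spec:
      containers:
      - name: catalogue
        image: weaveworksdemos/catalogue   # assumed image
        ports:
        - containerPort: 80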
- Apply version 2 of the front-end.
kubectl apply -f 2-inteligent-routing/2-front-end-deployment-v2-istio.yaml -nsock-shop
- Update the front-end Istio VirtualService to route traffic to front-end-v2.
istioctl replace -f 2-inteligent-routing/2-front-end-deployment-v2-route.yaml -nsock-shop
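A sketch of the replaced VirtualService, assuming a destination rule that defines v1/v2 subsets keyed on the version label:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: front-end
spec:
  hosts:
  - "*"
  gateways:
  - sock-shop-gateway          # assumed gateway name
  http:
  - route:
    - destination:
        host: front-end
        subset: v2             # send all traffic to the v2 subset
        port:
          number: 80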
- Apply a weighted routing policy (90% of traffic to the old v1 front-end and 10% to the new v2 front-end).
istioctl replace -f 2-inteligent-routing/2-canary.yaml
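The canary file likely splits traffic with route weights; a sketch of the http: section of the front-end VirtualService (subset names assumed):
  http:
  - route:
    - destination:
        host: front-end
        subset: v1
        port:
          number: 80
      weight: 90               # 90% of traffic stays on v1
    - destination:
        host: front-end
        subset: v2
        port:
          number: 80
      weight: 10               # 10% of traffic goes to the new v2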
- Run the Fortio app with 3 connections and 20 requests. See that all requests go through.
kubectl apply -f 3-circuit-breaker/3-fortio.yaml
FORTIO_POD=$(kubectl get pod -nsock-shop | grep fortio | awk '{ print $1 }')
kubectl exec -it $FORTIO_POD -nsock-shop -c fortio /usr/local/bin/fortio -- load -curl http://front-end:80/index.html
kubectl exec -it $FORTIO_POD -nsock-shop -c fortio /usr/local/bin/fortio -- load -c 3 -qps 0 -n 20 -loglevel Warning http://front-end:80/index.html
- Apply circuit breaker destination rule for max 1 connection.
kubectl apply -f 3-circuit-breaker/3-circuit-breaker.yaml
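A sketch of a circuit-breaking DestinationRule that limits the pool to a single connection (values assumed, modelled on Istio's connection-pool settings):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: front-end
spec:
  host: front-end
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1            # at most one TCP connection to front-end
      http:
        http1MaxPendingRequests: 1   # at most one queued (pending) request
        maxRequestsPerConnection: 1  # each connection serves a single request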
- Run the Fortio app with 3 connections and 20 requests. 30% should pass and 70% should fail.
kubectl exec -it $FORTIO_POD -nsock-shop -c fortio /usr/local/bin/fortio -- load -c 3 -qps 0 -n 20 -loglevel Warning http://front-end:80/index.html
- Update destination rule for max 2 concurrent connections.
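Only the connection-pool limit changes for this step; a sketch of the updated values:
    connectionPool:
      tcp:
        maxConnections: 2            # now allow two concurrent TCP connections
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1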
- Run the Fortio app with 3 connections and 20 requests. 70% should pass and 30% should fail.
kubectl exec -it $FORTIO_POD -nsock-shop -c fortio /usr/local/bin/fortio -- load -c 3 -qps 0 -n 20 -loglevel Warning http://front-end:80/index.html
- Apply the mesh-wide authentication policy named default. This enables the receiving (server) side of all services to use TLS.
istioctl create -f 5-global-mtls-mesh-policy.yaml
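The mesh policy file likely contains the standard mesh-wide mTLS MeshPolicy; a sketch:
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default          # the mesh-wide MeshPolicy must be named "default"
spec:
  peers:
  - mtls: {}             # require mutual TLS on the server side of every service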
- Load the front-end in a browser. See that the catalogue is not loading, since the catalogue service rejects plain-text connections from front-end.
- Update all the destination rules to use TLS. This enables the sending (client) side of all services to use TLS.
istioctl replace -f 4-security/4-global-mtls-mesh-policy.yaml
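Each destination rule likely gains a TLS traffic policy; a sketch for one service (service name assumed):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: catalogue
spec:
  host: catalogue
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # client side uses the sidecar's certs for mutual TLS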
- Load the front-end again. See that it is functioning properly now.
- Verify that certs are automatically injected into sidecar proxies
kubectl exec -nsock-shop -c istio-proxy carts-66469c84c6-jj2zt -- ls /etc/certs
cert-chain.pem <-- cert presented to the other side
key.pem <-- sidecar's private key
root-cert.pem <-- root cert used to verify the peer's cert
- Verify using istioctl
istioctl authn tls-check carts.sock-shop.svc.cluster.local -nsock-shop
HOST:PORT                                  STATUS     SERVER     CLIENT     AUTHN POLICY     DESTINATION RULE
carts.sock-shop.svc.cluster.local:80       OK         mTLS       mTLS       default/         carts/sock-shop
(AUTHN POLICY "default/" is the default mesh policy; its namespace part is blank. DESTINATION RULE is the carts rule in the sock-shop namespace.)
- Inject a 30s delay into all responses from the catalogue service.
istioctl create -f 5-timeouts/5-fault-injection-delay-catalogue.yaml -nsock-shop
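A sketch of the delay fault-injection rule (structure assumed, based on the v1alpha3 fault API):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalogue
spec:
  hosts:
  - catalogue
  http:
  - fault:
      delay:
        percent: 100         # delay every request
        fixedDelay: 30s      # by a fixed 30 seconds
    route:
    - destination:
        host: catalogue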
- Refresh the front-end in the browser. See the catalogue load after 30 seconds.
- Inject connection aborts into all responses from the catalogue service.
istioctl replace -f 5-timeouts/5-fault-injection-abort-catalogue.yaml -nsock-shop
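The abort variant likely swaps the delay for an HTTP abort; a sketch of the changed http: section:
  http:
  - fault:
      abort:
        percent: 100         # abort every request
        httpStatus: 500      # return HTTP 500 instead of forwarding
    route:
    - destination:
        host: catalogue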
- Refresh the front-end in the browser. No catalogue items load.
- Connect to Prometheus
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
- Access Prometheus dashboard
http://localhost:9090/graph
- Query for total requests to the catalogue service
istio_requests_total{destination_service="catalogue.sock-shop.svc.cluster.local"}
rate(istio_requests_total{destination_service=~"catalogue.*", response_code="200"}[5m]) <-- HTTP success rate for the catalogue service over the last 5 minutes
- Connect to Grafana
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
- Access Grafana dashboard
http://localhost:3000/dashboard/db/istio-mesh-dashboard
- Connect to Jaeger
kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 16686:16686 &
- Access the dashboard
http://localhost:16686
- Connect to Kiali.
kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=kiali -o jsonpath='{.items[0].metadata.name}') 20001:20001 &
- Access the dashboard (Default username/password: admin/admin)
http://localhost:20001/