
Mutual TLS Deep-Dive

This task gives you a closer look at mutual TLS and how to configure its settings. This task assumes:

  • You have completed the authentication policy task.
  • You are familiar with using authentication policy to enable mutual TLS.
  • Istio runs on Kubernetes with global mutual TLS enabled. You can follow our instructions to install Istio. If you already have Istio installed, you can add or modify authentication policies and destination rules to enable mutual TLS as described in this task.
  • You have deployed the httpbin and sleep services with an Envoy sidecar in the default namespace. For example, the following commands deploy those services with manual sidecar injection:

    $ kubectl apply -f <(istioctl kube-inject -f @samples/httpbin/httpbin.yaml@)
    $ kubectl apply -f <(istioctl kube-inject -f @samples/sleep/sleep.yaml@)
    
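If your cluster does not yet have global mutual TLS enabled, configuration similar to the following sketch enables it mesh-wide. This is an illustration of the approach, not part of this task's steps: a mesh-wide `MeshPolicy` named `default` tells all servers to require mutual TLS, and a matching `DestinationRule` in `istio-system` tells all clients to send it. Apply the equivalent for your Istio release before continuing.

```yaml
# Sketch: mesh-wide mutual TLS (assumes the v1alpha1 authentication API)
apiVersion: "authentication.istio.io/v1alpha1"
kind: "MeshPolicy"
metadata:
  name: "default"
spec:
  peers:
  - mtls: {}
---
# Clients use Istio-provisioned certificates for all in-mesh hosts
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  host: "*.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```

Both halves are needed: the policy alone makes servers reject plain-text traffic, and the destination rule alone makes clients present certificates to servers that may not expect them.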

Verify Citadel runs properly

Citadel is Istio’s key management service. Citadel must run properly for mutual TLS to work correctly. Verify the cluster-level Citadel runs properly with the following command:

$ kubectl get deploy -l istio=citadel -n istio-system
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
istio-citadel   1         1         1            1           1m

Citadel is up if the “AVAILABLE” column is 1.

Verify keys and certificates installation

Istio automatically installs the necessary keys and certificates for mutual TLS authentication in all sidecar containers. Run the command below to confirm that the key and certificate files exist under /etc/certs:

$ kubectl exec $(kubectl get pod -l app=httpbin -o jsonpath={.items..metadata.name}) -c istio-proxy -- ls /etc/certs
cert-chain.pem
key.pem
root-cert.pem

Use the openssl tool to check that the certificate is valid (the current time should fall between Not Before and Not After):

$ kubectl exec $(kubectl get pod -l app=httpbin -o jsonpath={.items..metadata.name}) -c istio-proxy -- cat /etc/certs/cert-chain.pem | openssl x509 -text -noout  | grep Validity -A 2
Validity
        Not Before: May 17 23:02:11 2018 GMT
        Not After : Aug 15 23:02:11 2018 GMT

You can also check the identity of the client certificate:

$ kubectl exec $(kubectl get pod -l app=httpbin -o jsonpath={.items..metadata.name}) -c istio-proxy -- cat /etc/certs/cert-chain.pem | openssl x509 -text -noout  | grep 'Subject Alternative Name' -A 1
        X509v3 Subject Alternative Name:
            URI:spiffe://cluster.local/ns/default/sa/default

See Istio identity for more information about service identity in Istio.
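The SPIFFE URI in the Subject Alternative Name field encodes the workload's identity as `spiffe://<trust-domain>/ns/<namespace>/sa/<service-account>`. As a quick illustration of that structure, the snippet below pulls the components out of the URI shown above using only shell string operations:

```shell
# The SPIFFE URI from the certificate above
uri="spiffe://cluster.local/ns/default/sa/default"

# Strip the scheme, leaving: cluster.local/ns/default/sa/default
path="${uri#spiffe://}"

# Split the remaining path on '/'
trust_domain="${path%%/*}"                       # cluster.local
namespace="$(echo "$path" | cut -d/ -f3)"        # default
service_account="$(echo "$path" | cut -d/ -f5)"  # default

echo "trust domain:    $trust_domain"
echo "namespace:       $namespace"
echo "service account: $service_account"
```

Because httpbin was deployed without a dedicated service account, it runs as the `default` service account of the `default` namespace, which is exactly what the certificate attests.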

Verify mutual TLS configuration

Use istioctl authn tls-check to check if the mutual TLS settings are in effect. The istioctl command needs the client’s pod because the destination rule depends on the client’s namespace. You can also provide the destination service to filter the status to that service only.

The following commands identify the authentication policy for the httpbin.default.svc.cluster.local service and the destination rules for the service, as seen from a pod of the sleep app:

$ SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
$ istioctl authn tls-check ${SLEEP_POD} httpbin.default.svc.cluster.local

In the following example output you can see that:

  • Mutual TLS is consistently set up for httpbin.default.svc.cluster.local on port 8000.
  • Istio uses the mesh-wide default authentication policy.
  • Istio has the default destination rule in the istio-system namespace.
HOST:PORT                                  STATUS     SERVER     CLIENT           AUTHN POLICY     DESTINATION RULE
httpbin.default.svc.cluster.local:8000     OK         STRICT     ISTIO_MUTUAL     /default         istio-system/default

The output shows:

  • STATUS: whether the TLS settings are consistent between the server, the httpbin service in this case, and the client or clients making calls to httpbin.

  • SERVER: the mode used on the server.

  • CLIENT: the mode used on the client or clients.

  • AUTHN POLICY: the namespace and name of the authentication policy. If the policy is the mesh-wide policy, namespace is blank, as in this case: /default

  • DESTINATION RULE: the namespace and name of the destination rule used.
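To make the AUTHN POLICY column concrete: a namespace-scoped policy would show up with its namespace, for example `default/default` instead of `/default`. A sketch of such a policy, assuming the v1alpha1 authentication API used in this Istio release, might look like:

```yaml
# Hypothetical namespace-wide policy; the special name "default" applies
# it to all services in the namespace
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "default"
  namespace: "default"
spec:
  peers:
  - mtls: {}
```

A namespace-scoped policy like this takes precedence over the mesh-wide policy for services in that namespace.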

To illustrate the case when there are conflicts, add a service-specific destination rule for httpbin with incorrect TLS mode:

$ cat <<EOF | kubectl apply -f -
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "bad-rule"
  namespace: "default"
spec:
  host: "httpbin.default.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
EOF

Run the same istioctl command as above. The status is now CONFLICT, since the client is in HTTP mode while the server requires mutual TLS:

$ istioctl authn tls-check ${SLEEP_POD} httpbin.default.svc.cluster.local
HOST:PORT                                  STATUS       SERVER     CLIENT     AUTHN POLICY        DESTINATION RULE
httpbin.default.svc.cluster.local:8000     CONFLICT     mTLS       HTTP       /default            default/bad-rule

You can also confirm that requests from sleep to httpbin are now failing:

$ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c sleep -- curl httpbin:8000/headers -o /dev/null -s -w '%{http_code}\n'
503

Before you continue, remove the bad destination rule to make mutual TLS work again with the following command:

$ kubectl delete destinationrule --ignore-not-found=true bad-rule
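If you do want a service-specific destination rule for httpbin, it must match the server's STRICT mode by using ISTIO_MUTUAL rather than DISABLE. A sketch of a conflict-free rule (the name `httpbin-mtls` is illustrative):

```yaml
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "httpbin-mtls"
  namespace: "default"
spec:
  host: "httpbin.default.svc.cluster.local"
  trafficPolicy:
    tls:
      # ISTIO_MUTUAL matches the server-side STRICT mutual TLS mode
      mode: ISTIO_MUTUAL
```

With this rule applied, istioctl authn tls-check would report OK for the host, since the client and server modes agree.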

Verify requests

This task shows how a server with mutual TLS enabled responds to requests that are:

  • In plain text
  • With TLS but without a client certificate
  • With TLS and a client certificate

To perform this task, you need to bypass the client proxy. The simplest way to do so is to issue requests from the istio-proxy container.

  1. Confirm that plain-text requests fail, since TLS is required to talk to httpbin:

    $ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl http://httpbin:8000/headers -o /dev/null -s -w '%{http_code}\n'
    000
    command terminated with exit code 56
    
  2. Confirm that TLS requests without a client certificate also fail:

    $ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://httpbin:8000/headers -o /dev/null -s -w '%{http_code}\n' -k
    000
    command terminated with exit code 35
    
  3. Confirm that TLS requests with a client certificate succeed:

    $ kubectl exec $(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://httpbin:8000/headers -o /dev/null -s -w '%{http_code}\n' --key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem -k
    200
    

Cleanup

$ kubectl delete --ignore-not-found=true -f @samples/httpbin/httpbin.yaml@
$ kubectl delete --ignore-not-found=true -f @samples/sleep/sleep.yaml@