Install Istio with an External Control Plane

This guide walks you through the process of installing an external control plane and then connecting one or more remote clusters to it. The external control plane deployment model allows a mesh operator to install and manage a control plane on an external cluster, separate from the data plane cluster (or multiple clusters) comprising the mesh. This deployment model allows a clear separation between mesh operators and mesh administrators. Mesh operators install and manage Istio control planes while mesh admins only need to configure the mesh.

Figure: External control plane cluster and remote cluster

Envoy proxies (sidecars and gateways) running in the remote cluster access the external istiod via an ingress gateway which exposes the endpoints needed for discovery, CA, injection, and validation.

While configuration and management of the external control plane is done by the mesh operator in the external cluster, the first remote cluster connected to an external control plane serves as the config cluster for the mesh itself. The mesh administrator will use the config cluster to configure the mesh resources (gateways, virtual services, etc.) in addition to the mesh services themselves. The external control plane will remotely access this configuration from the Kubernetes API server, as shown in the above diagram.

Before you begin

Clusters

This guide requires that you have two Kubernetes clusters with any of the supported Kubernetes versions: 1.19, 1.20, 1.21, 1.22.

The first cluster will host the external control plane installed in the external-istiod namespace. An ingress gateway is also installed in the istio-system namespace to provide cross-cluster access to the external control plane.

The second cluster is a remote cluster that will run the mesh application workloads. Its Kubernetes API server also provides the mesh configuration used by the external control plane (istiod) to configure the workload proxies.

API server access

The Kubernetes API server in the remote cluster must be accessible to the external control plane cluster. Many cloud providers make API servers publicly accessible via network load balancers (NLBs). If the API server is not directly accessible, you will need to modify the installation procedure to enable access. For example, the east-west gateway used in a multicluster configuration could also be used to enable access to the API server.
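
For example, you can confirm the address the remote cluster's API server advertises and check that it responds (a quick sanity check; the address shown must be routable from the external control plane cluster):

$ kubectl cluster-info --context=<your remote cluster context>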

Environment Variables

The following environment variables will be used throughout to simplify the instructions:

Variable               Description
CTX_EXTERNAL_CLUSTER   The context name in the default Kubernetes configuration file used for accessing the external control plane cluster.
CTX_REMOTE_CLUSTER     The context name in the default Kubernetes configuration file used for accessing the remote cluster.
REMOTE_CLUSTER_NAME    The name of the remote cluster.
EXTERNAL_ISTIOD_ADDR   The hostname for the ingress gateway on the external control plane cluster. This is used by the remote cluster to access the external control plane.
SSL_SECRET_NAME        The name of the secret that holds the TLS certs for the ingress gateway on the external control plane cluster.

Set the CTX_EXTERNAL_CLUSTER, CTX_REMOTE_CLUSTER, and REMOTE_CLUSTER_NAME now. You will set the others later.

$ export CTX_EXTERNAL_CLUSTER=<your external cluster context>
$ export CTX_REMOTE_CLUSTER=<your remote cluster context>
$ export REMOTE_CLUSTER_NAME=<your remote cluster name>
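
If you are unsure of the context names, you can list them from your kubeconfig and confirm that both clusters are reachable (a minimal sanity check):

$ kubectl config get-contexts
$ kubectl get nodes --context="${CTX_EXTERNAL_CLUSTER}"
$ kubectl get nodes --context="${CTX_REMOTE_CLUSTER}"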

Cluster configuration

Mesh operator steps

A mesh operator is responsible for installing and managing the external Istio control plane on the external cluster. This includes configuring an ingress gateway on the external cluster, which allows the remote cluster to access the control plane, and installing the sidecar injector webhook configuration on the remote cluster so that it will use the external control plane.

Set up a gateway in the external cluster

  1. Create the Istio install configuration for the ingress gateway that will expose the external control plane ports to other clusters:

    $ cat <<EOF > controlplane-gateway.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: istio-system
    spec:
      components:
        ingressGateways:
          - name: istio-ingressgateway
            enabled: true
            k8s:
              service:
                ports:
                  - port: 15021
                    targetPort: 15021
                    name: status-port
                  - port: 15012
                    targetPort: 15012
                    name: tls-xds
                  - port: 15017
                    targetPort: 15017
                    name: tls-webhook
    EOF
    

    Then, install the gateway in the istio-system namespace of the external cluster:

    $ istioctl install -f controlplane-gateway.yaml --context="${CTX_EXTERNAL_CLUSTER}"
    
  2. Run the following command to confirm that the ingress gateway is up and running:

    $ kubectl get po -n istio-system --context="${CTX_EXTERNAL_CLUSTER}"
    NAME                                   READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-9d4c7f5c7-7qpzz   1/1     Running   0          29s
    istiod-68488cd797-mq8dn                1/1     Running   0          38s
    

    You will notice an istiod deployment is also created in the istio-system namespace. This is used to configure the ingress gateway and is NOT the control plane used by remote clusters.

  3. Configure your environment to expose the Istio ingress gateway service using a public hostname with TLS. Set the EXTERNAL_ISTIOD_ADDR environment variable to the hostname and SSL_SECRET_NAME environment variable to the secret that holds the TLS certs:

    $ export EXTERNAL_ISTIOD_ADDR=<your external istiod host>
    $ export SSL_SECRET_NAME=<your external istiod secret>
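
    If you do not already have a TLS secret for this hostname, one way to create it (a sketch, assuming you have a certificate and private key for ${EXTERNAL_ISTIOD_ADDR} in local files; the file names below are placeholders) is:

    $ kubectl create secret tls "${SSL_SECRET_NAME}" -n istio-system \
      --cert=<your certificate file> --key=<your private key file> \
      --context="${CTX_EXTERNAL_CLUSTER}"

    The secret must be created in the same namespace as the ingress gateway (istio-system here) so the gateway can load it via credentialName.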
    

Set up the remote config cluster

  1. Use the external profile to configure the remote cluster’s Istio installation. This installs an injection webhook that uses the external control plane’s injector, instead of a locally deployed one. Because this cluster will also serve as the config cluster, the Istio CRDs and other resources that will be needed on the remote cluster are also installed by setting global.configCluster and pilot.configMap to true:

    $ cat <<EOF > remote-config-cluster.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: external-istiod
    spec:
      profile: external
      values:
        global:
          istioNamespace: external-istiod
          configCluster: true
        pilot:
          configMap: true
        istiodRemote:
          injectionURL: https://${EXTERNAL_ISTIOD_ADDR}:15017/inject/:ENV:cluster=${REMOTE_CLUSTER_NAME}:ENV:net=network1
        base:
          validationURL: https://${EXTERNAL_ISTIOD_ADDR}:15017/validate
    EOF
    

    Then, install the configuration on the remote cluster:

    $ kubectl create namespace external-istiod --context="${CTX_REMOTE_CLUSTER}"
    $ istioctl manifest generate -f remote-config-cluster.yaml | kubectl apply --context="${CTX_REMOTE_CLUSTER}" -f -
    
  2. Confirm that the remote cluster’s webhook configuration has been installed:

    $ kubectl get mutatingwebhookconfiguration --context="${CTX_REMOTE_CLUSTER}"
    NAME                                     WEBHOOKS   AGE
    istio-sidecar-injector-external-istiod   4          6m24s
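
    You can also verify that the webhook calls the external control plane, rather than a locally deployed istiod, by inspecting its client configuration (a quick check; the exact webhook layout can vary between Istio versions):

    $ kubectl get mutatingwebhookconfiguration istio-sidecar-injector-external-istiod \
      --context="${CTX_REMOTE_CLUSTER}" -o jsonpath='{.webhooks[*].clientConfig.url}'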
    

Set up the control plane in the external cluster

  1. Create the external-istiod namespace, which will be used to host the external control plane:

    $ kubectl create namespace external-istiod --context="${CTX_EXTERNAL_CLUSTER}"
    
  2. The control plane in the external cluster needs access to the remote cluster to discover services, endpoints, and pod attributes. Create a secret with credentials to access the remote cluster’s kube-apiserver and install it in the external cluster:

    $ kubectl create sa istiod-service-account -n external-istiod --context="${CTX_EXTERNAL_CLUSTER}"
    $ istioctl x create-remote-secret \
      --context="${CTX_REMOTE_CLUSTER}" \
      --type=config \
      --namespace=external-istiod \
      --service-account=istiod \
      --create-service-account=false | \
      kubectl apply -f - --context="${CTX_EXTERNAL_CLUSTER}"
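
    You can verify that the generated kubeconfig secret is now present in the external cluster (for --type=config the secret is typically named istio-kubeconfig):

    $ kubectl get secrets -n external-istiod --context="${CTX_EXTERNAL_CLUSTER}"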
    
  3. Create the Istio configuration to install the control plane in the external-istiod namespace of the external cluster. Notice that istiod is configured to use the locally mounted istio configmap and the SHARED_MESH_CONFIG environment variable is set to istio. This instructs istiod to merge the values set by the mesh admin in the config cluster’s configmap with the values in the local configmap set by the mesh operator, which will take precedence if there are any conflicts:

    $ cat <<EOF > external-istiod.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: external-istiod
    spec:
      profile: empty
      meshConfig:
        rootNamespace: external-istiod
        defaultConfig:
          discoveryAddress: $EXTERNAL_ISTIOD_ADDR:15012
          proxyMetadata:
            XDS_ROOT_CA: /etc/ssl/certs/ca-certificates.crt
            CA_ROOT_CA: /etc/ssl/certs/ca-certificates.crt
      components:
        pilot:
          enabled: true
          k8s:
            overlays:
            - kind: Deployment
              name: istiod
              patches:
              - path: spec.template.spec.volumes[100]
                value: |-
                  name: config-volume
                  configMap:
                    name: istio
              - path: spec.template.spec.volumes[100]
                value: |-
                  name: inject-volume
                  configMap:
                    name: istio-sidecar-injector
              - path: spec.template.spec.containers[0].volumeMounts[100]
                value: |-
                  name: config-volume
                  mountPath: /etc/istio/config
              - path: spec.template.spec.containers[0].volumeMounts[100]
                value: |-
                  name: inject-volume
                  mountPath: /var/lib/istio/inject
            env:
            - name: INJECTION_WEBHOOK_CONFIG_NAME
              value: ""
            - name: VALIDATION_WEBHOOK_CONFIG_NAME
              value: ""
            - name: EXTERNAL_ISTIOD
              value: "true"
            - name: CLUSTER_ID
              value: ${REMOTE_CLUSTER_NAME}
            - name: SHARED_MESH_CONFIG
              value: istio
      values:
        global:
          caAddress: $EXTERNAL_ISTIOD_ADDR:15012
          istioNamespace: external-istiod
          operatorManageWebhooks: true
          meshID: mesh1
    EOF
    

    Then, apply the Istio configuration on the external cluster:

    $ istioctl install -f external-istiod.yaml --context="${CTX_EXTERNAL_CLUSTER}"
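
    As an illustration of the SHARED_MESH_CONFIG merging described above (not something you need to do for this guide), a mesh admin could later set mesh-wide options, such as accessLogFile: /dev/stdout, under the mesh key of the istio configmap in the config cluster, and istiod will merge them with the operator's local configmap:

    $ kubectl edit configmap istio -n external-istiod --context="${CTX_REMOTE_CLUSTER}"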
    
  4. Confirm that the external istiod has been successfully deployed:

    $ kubectl get po -n external-istiod --context="${CTX_EXTERNAL_CLUSTER}"
    NAME                      READY   STATUS    RESTARTS   AGE
    istiod-779bd6fdcf-bd6rg   1/1     Running   0          70s
    
  5. Create the Istio Gateway, VirtualService, and DestinationRule configuration to route traffic from the ingress gateway to the external control plane:

    $ cat <<EOF > external-istiod-gw.yaml
    apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: external-istiod-gw
      namespace: external-istiod
    spec:
      selector:
        istio: ingressgateway
      servers:
        - port:
            number: 15012
            protocol: https
            name: https-XDS
          tls:
            mode: SIMPLE
            credentialName: $SSL_SECRET_NAME
          hosts:
          - $EXTERNAL_ISTIOD_ADDR
        - port:
            number: 15017
            protocol: https
            name: https-WEBHOOK
          tls:
            mode: SIMPLE
            credentialName: $SSL_SECRET_NAME
          hosts:
          - $EXTERNAL_ISTIOD_ADDR
    ---
    apiVersion: networking.istio.io/v1beta1
    kind: VirtualService
    metadata:
      name: external-istiod-vs
      namespace: external-istiod
    spec:
      hosts:
      - $EXTERNAL_ISTIOD_ADDR
      gateways:
      - external-istiod-gw
      http:
      - match:
        - port: 15012
        route:
        - destination:
            host: istiod.external-istiod.svc.cluster.local
            port:
              number: 15012
      - match:
        - port: 15017
        route:
        - destination:
            host: istiod.external-istiod.svc.cluster.local
            port:
              number: 443
    ---
    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: external-istiod-dr
      namespace: external-istiod
    spec:
      host: istiod.external-istiod.svc.cluster.local
      trafficPolicy:
        portLevelSettings:
        - port:
            number: 15012
          tls:
            mode: SIMPLE
          connectionPool:
            http:
              h2UpgradePolicy: UPGRADE
        - port:
            number: 443
          tls:
            mode: SIMPLE
    EOF
    

    Then, apply the configuration on the external cluster:

    $ kubectl apply -f external-istiod-gw.yaml --context="${CTX_EXTERNAL_CLUSTER}"
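
    As a basic connectivity check (a sketch; -k skips certificate verification, so this only confirms that DNS resolves and the gateway routes the request), you can probe the exposed webhook port. An HTTP status code in the response, even an error such as 400, suggests the request made it through the gateway; a timeout or connection error points to a DNS, TLS, or gateway routing problem:

    $ curl -sk -o /dev/null -w '%{http_code}\n' "https://${EXTERNAL_ISTIOD_ADDR}:15017/validate"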
    

Mesh admin steps

Now that Istio is up and running, a mesh administrator only needs to deploy and configure services in the mesh, including gateways if needed.

Deploy a sample application

  1. Create, and label for injection, the sample namespace on the remote cluster:

    $ kubectl create --context="${CTX_REMOTE_CLUSTER}" namespace sample
    $ kubectl label --context="${CTX_REMOTE_CLUSTER}" namespace sample istio-injection=enabled
    
  2. Deploy the helloworld (v1) and sleep samples:

    $ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l service=helloworld -n sample --context="${CTX_REMOTE_CLUSTER}"
    $ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l version=v1 -n sample --context="${CTX_REMOTE_CLUSTER}"
    $ kubectl apply -f @samples/sleep/sleep.yaml@ -n sample --context="${CTX_REMOTE_CLUSTER}"
    
  3. Wait a few seconds for the helloworld and sleep pods to be running with sidecars injected:

    $ kubectl get pod -n sample --context="${CTX_REMOTE_CLUSTER}"
    NAME                             READY   STATUS    RESTARTS   AGE
    helloworld-v1-776f57d5f6-s7zfc   2/2     Running   0          10s
    sleep-64d7d56698-wqjnm           2/2     Running   0          9s
    
  4. Send a request from the sleep pod to the helloworld service:

    $ kubectl exec --context="${CTX_REMOTE_CLUSTER}" -n sample -c sleep \
        "$(kubectl get pod --context="${CTX_REMOTE_CLUSTER}" -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" \
        -- curl -sS helloworld.sample:5000/hello
    Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc
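
    Optionally, you can check that the workload proxies are synced with the external control plane (this assumes istioctl can reach the external cluster; the -i flag points it at the external-istiod namespace):

    $ istioctl proxy-status --context="${CTX_EXTERNAL_CLUSTER}" -i external-istiod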
    

Enable gateways

  1. Enable an ingress gateway on the remote cluster:

    $ cat <<EOF > istio-ingressgateway.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      profile: empty
      components:
        ingressGateways:
        - namespace: external-istiod
          name: istio-ingressgateway
          enabled: true
      values:
        gateways:
          istio-ingressgateway:
            injectionTemplate: gateway
    EOF
    $ istioctl install -f istio-ingressgateway.yaml --context="${CTX_REMOTE_CLUSTER}"
    
  2. Enable an egress gateway, or other gateways, on the remote cluster (optional):

    $ cat <<EOF > istio-egressgateway.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    spec:
      profile: empty
      components:
        egressGateways:
        - namespace: external-istiod
          name: istio-egressgateway
          enabled: true
      values:
        gateways:
          istio-egressgateway:
            injectionTemplate: gateway
    EOF
    $ istioctl install -f istio-egressgateway.yaml --context="${CTX_REMOTE_CLUSTER}"
    
  3. Confirm that the Istio ingress gateway is running:

    $ kubectl get pod -l app=istio-ingressgateway -n external-istiod --context="${CTX_REMOTE_CLUSTER}"
    NAME                                    READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-7bcd5c6bbd-kmtl4   1/1     Running   0          8m4s
    
  4. Expose the helloworld application on the ingress gateway:

    $ kubectl apply -f @samples/helloworld/helloworld-gateway.yaml@ -n sample --context="${CTX_REMOTE_CLUSTER}"
    
  5. Set the GATEWAY_URL environment variable (see determining the ingress IP and ports for details):

    $ export INGRESS_HOST=$(kubectl -n external-istiod --context="${CTX_REMOTE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ export INGRESS_PORT=$(kubectl -n external-istiod --context="${CTX_REMOTE_CLUSTER}" get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
    $ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
    
  6. Confirm you can access the helloworld application through the ingress gateway:

    $ curl -s "http://${GATEWAY_URL}/hello"
    Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc
    

Adding clusters to the mesh (optional)

This section shows you how to expand an existing external control plane mesh to multicluster by adding another remote cluster. This allows you to easily distribute services and use location-aware routing and fail over to support high availability of your application.

Figure: External control plane with multiple remote clusters

Unlike the first remote cluster, the second and subsequent clusters added to the same external control plane do not provide mesh config, but instead are only sources of endpoint configuration, just like remote clusters in a primary-remote Istio multicluster configuration.

To proceed, you’ll need another Kubernetes cluster for the second remote cluster of the mesh. Set the following environment variables to the context name and cluster name of the cluster:

$ export CTX_SECOND_CLUSTER=<your second remote cluster context>
$ export SECOND_CLUSTER_NAME=<your second remote cluster name>

Register the new cluster

  1. Create the remote Istio install configuration, which installs the injection webhook that uses the external control plane’s injector, instead of a locally deployed one:

    $ cat <<EOF > second-config-cluster.yaml
    apiVersion: install.istio.io/v1alpha1
    kind: IstioOperator
    metadata:
      namespace: external-istiod
    spec:
      profile: external
      values:
        global:
          istioNamespace: external-istiod
        istiodRemote:
          injectionURL: https://${EXTERNAL_ISTIOD_ADDR}:15017/inject/:ENV:cluster=${SECOND_CLUSTER_NAME}:ENV:net=network2
    EOF
    

    Then, install the configuration on the remote cluster:

    $ kubectl create namespace external-istiod --context="${CTX_SECOND_CLUSTER}"
    $ istioctl manifest generate -f second-config-cluster.yaml | kubectl apply --context="${CTX_SECOND_CLUSTER}" -f -
    
  2. Confirm that the remote cluster’s webhook configuration has been installed:

    $ kubectl get mutatingwebhookconfiguration --context="${CTX_SECOND_CLUSTER}"
    NAME                                     WEBHOOKS   AGE
    istio-sidecar-injector-external-istiod   4          4m13s
    
  3. Create a secret with credentials to allow the control plane to access the endpoints on the second remote cluster and install it:

    $ istioctl x create-remote-secret \
      --context="${CTX_SECOND_CLUSTER}" \
      --name="${SECOND_CLUSTER_NAME}" \
      --type=remote \
      --namespace=external-istiod \
      --create-service-account=false | \
      kubectl apply -f - --context="${CTX_REMOTE_CLUSTER}" #TODO use --context="${CTX_EXTERNAL_CLUSTER}" when #31946 is fixed.
    

    Note that unlike the first remote cluster of the mesh, which also serves as the config cluster, the --type argument is set to remote this time, instead of config.
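
    You can verify that the new secret is in place (secrets created with --type=remote are typically named istio-remote-secret-<cluster name>):

    $ kubectl get secrets -n external-istiod --context="${CTX_REMOTE_CLUSTER}" | grep "${SECOND_CLUSTER_NAME}"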

Set up east-west gateways

  1. Deploy east-west gateways on both remote clusters:

    $ @samples/multicluster/gen-eastwest-gateway.sh@ \
        --mesh mesh1 --cluster "${REMOTE_CLUSTER_NAME}" --network network1 > eastwest-gateway-1.yaml
    $ istioctl manifest generate -f eastwest-gateway-1.yaml \
        --set values.global.istioNamespace=external-istiod | \
        kubectl apply --context="${CTX_REMOTE_CLUSTER}" -f -
    
    $ @samples/multicluster/gen-eastwest-gateway.sh@ \
        --mesh mesh1 --cluster "${SECOND_CLUSTER_NAME}" --network network2 > eastwest-gateway-2.yaml
    $ istioctl manifest generate -f eastwest-gateway-2.yaml \
        --set values.global.istioNamespace=external-istiod | \
        kubectl apply --context="${CTX_SECOND_CLUSTER}" -f -
    
  2. Wait for the east-west gateways to be assigned external IP addresses:

    $ kubectl --context="${CTX_REMOTE_CLUSTER}" get svc istio-eastwestgateway -n external-istiod
    NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)   AGE
    istio-eastwestgateway   LoadBalancer   10.0.12.121   34.122.91.98   ...       51s
    
    $ kubectl --context="${CTX_SECOND_CLUSTER}" get svc istio-eastwestgateway -n external-istiod
    NAME                    TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)   AGE
    istio-eastwestgateway   LoadBalancer   10.0.12.121   34.122.91.99   ...       51s
    
  3. Expose services via the east-west gateways:

    $ kubectl --context="${CTX_REMOTE_CLUSTER}" apply -n external-istiod -f \
        @samples/multicluster/expose-services.yaml@
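
    You can confirm that the gateway resource was created (the Gateway defined in expose-services.yaml is typically named cross-network-gateway):

    $ kubectl --context="${CTX_REMOTE_CLUSTER}" get gateways.networking.istio.io -n external-istiod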
    

Validate the installation

  1. Create, and label for injection, the sample namespace on the remote cluster:

    $ kubectl create --context="${CTX_SECOND_CLUSTER}" namespace sample
    $ kubectl label --context="${CTX_SECOND_CLUSTER}" namespace sample istio-injection=enabled
    
  2. Deploy the helloworld (v2) and sleep samples:

    $ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l service=helloworld -n sample --context="${CTX_SECOND_CLUSTER}"
    $ kubectl apply -f @samples/helloworld/helloworld.yaml@ -l version=v2 -n sample --context="${CTX_SECOND_CLUSTER}"
    $ kubectl apply -f @samples/sleep/sleep.yaml@ -n sample --context="${CTX_SECOND_CLUSTER}"
    
  3. Wait a few seconds for the helloworld and sleep pods to be running with sidecars injected:

    $ kubectl get pod -n sample --context="${CTX_SECOND_CLUSTER}"
    NAME                            READY   STATUS    RESTARTS   AGE
    helloworld-v2-54df5f84b-9hxgw   2/2     Running   0          10s
    sleep-557747455f-wtdbr          2/2     Running   0          9s
    
  4. Send a request from the sleep pod to the helloworld service:

    $ kubectl exec --context="${CTX_SECOND_CLUSTER}" -n sample -c sleep \
        "$(kubectl get pod --context="${CTX_SECOND_CLUSTER}" -n sample -l app=sleep -o jsonpath='{.items[0].metadata.name}')" \
        -- curl -sS helloworld.sample:5000/hello
    Hello version: v2, instance: helloworld-v2-54df5f84b-9hxgw
    
  5. Confirm that when accessing the helloworld application several times through the ingress gateway, both version v1 and v2 are now being called:

    $ for i in {1..10}; do curl -s "http://${GATEWAY_URL}/hello"; done
    Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc
    Hello version: v2, instance: helloworld-v2-54df5f84b-9hxgw
    Hello version: v1, instance: helloworld-v1-776f57d5f6-s7zfc
    Hello version: v2, instance: helloworld-v2-54df5f84b-9hxgw
    ...
    