Shared control plane (single and multiple networks)
Set up a multicluster Istio service mesh across multiple clusters with a shared control plane. In this configuration, multiple Kubernetes clusters running a remote configuration connect to a shared Istio control plane running in a main cluster. Clusters may be on the same network as, or on different networks from, the other clusters in the mesh. Once one or more remote Kubernetes clusters are connected to the Istio control plane, Envoy can then form a mesh network across those clusters.
Prerequisites
Two or more clusters running a supported Kubernetes version (1.15, 1.16, 1.17, 1.18).
All Kubernetes control plane API servers must be routable to each other.
Clusters on the same network must be connected by an RFC1918 network, VPN, or an alternative more advanced network technique meeting the following requirements:
- Individual cluster Pod CIDR ranges and service CIDR ranges must be unique across the network and may not overlap.
- All pod CIDRs in the same network must be routable to each other.
Clusters on different networks must have istio-ingressgateway services which are accessible from every other cluster, ideally using L4 network load balancers (NLB). Not all cloud providers support NLBs and some require special annotations to use them, so please consult your cloud provider's documentation for enabling NLBs for service object type load balancers. When deploying on platforms without NLB support, it may be necessary to modify the health checks for the load balancer to register the ingress gateway.
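For example, on AWS an NLB can typically be requested through a service annotation. The following is a minimal sketch, assuming the AWS load balancer annotation is supported by your Kubernetes version; other providers use different annotations, so treat this as illustrative only:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  components:
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      k8s:
        serviceAnnotations:
          # AWS example: request a Network Load Balancer instead of a classic ELB.
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"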
Preparation
Certificate Authority
Generate intermediate CA certificates for each cluster’s CA from your organization’s root CA. The shared root CA enables mutual TLS communication across different clusters. For illustration purposes, the following instructions use the certificates from the Istio samples directory for both clusters.
Run the following commands on each cluster in the mesh to install the certificates. See Certificate Authority (CA) certificates for more details on configuring an external CA.
$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
      --from-file=samples/certs/ca-cert.pem \
      --from-file=samples/certs/ca-key.pem \
      --from-file=samples/certs/root-cert.pem \
      --from-file=samples/certs/cert-chain.pem
Cross-cluster control plane access
Decide how to expose the main cluster’s Istiod discovery service to the remote clusters. Pick one of the two options:
Option (1) - Use the istio-ingressgateway gateway shared with data traffic.

Option (2) - Use a cloud provider's internal load balancer on the Istiod service. For additional requirements and restrictions that may apply when using an internal load balancer between clusters, see the Kubernetes internal load balancer documentation and your cloud provider's documentation.
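As an illustrative sketch of Option (2), the istiod service can be exposed through an internal load balancer by adding your provider's internal LB annotation and switching the service type. The GKE annotation below is an assumption for illustration; substitute your provider's equivalent, and prefer configuring this through your installation tooling so the change isn't overwritten on upgrade:

$ kubectl annotate service istiod -n istio-system --context=${MAIN_CLUSTER_CTX} \
    networking.gke.io/load-balancer-type=Internal
$ kubectl patch service istiod -n istio-system --context=${MAIN_CLUSTER_CTX} \
    -p '{"spec": {"type": "LoadBalancer"}}'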
Cluster and network naming
Determine the names of the clusters and networks in the mesh. These names will be used in the mesh network configuration and when configuring the mesh's service registries. Assign a unique name to each cluster. The name must be a DNS label name. In the example below, the main cluster is called main0 and the remote cluster is remote0.
$ export MAIN_CLUSTER_CTX=<...>
$ export REMOTE_CLUSTER_CTX=<...>
$ export MAIN_CLUSTER_NAME=main0
$ export REMOTE_CLUSTER_NAME=remote0
If the clusters are on different networks, assign a unique network name for each network.
$ export MAIN_CLUSTER_NETWORK=network1
$ export REMOTE_CLUSTER_NETWORK=network2
If the clusters are on the same network, use the same network name for those clusters.
$ export MAIN_CLUSTER_NETWORK=network1
$ export REMOTE_CLUSTER_NETWORK=network1
Deployment
Main cluster
Create the main cluster’s configuration. Pick one of the two options for cross-cluster control plane access.
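A minimal sketch of such a configuration, assuming Option (1) (control plane traffic shared with the istio-ingressgateway) and the cluster and network names set earlier:

$ cat <<EOF> istio-main-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      multiCluster:
        clusterName: ${MAIN_CLUSTER_NAME}
      network: ${MAIN_CLUSTER_NETWORK}

      # Mesh network configuration. This is optional and may be omitted
      # if all clusters are on the same network.
      meshNetworks:
        ${MAIN_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${MAIN_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443
        ${REMOTE_CLUSTER_NETWORK}:
          endpoints:
          - fromRegistry: ${REMOTE_CLUSTER_NAME}
          gateways:
          - registry_service_name: istio-ingressgateway.istio-system.svc.cluster.local
            port: 443

      # Use the existing istio-ingressgateway to expose istiod (Option 1).
      meshExpansion:
        enabled: true
EOF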
Apply the main cluster’s configuration.
$ istioctl install -f istio-main-cluster.yaml --context=${MAIN_CLUSTER_CTX}
Wait for the control plane to be ready before proceeding.
$ kubectl get pod -n istio-system --context=${MAIN_CLUSTER_CTX}
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-7c8dd65766-lv9ck 1/1 Running 0 136m
istiod-f756bbfc4-thkmk 1/1 Running 0 136m
prometheus-b54c6f66b-q8hbt 2/2 Running 0 136m
Set the ISTIOD_REMOTE_EP environment variable based on which remote control plane configuration option was selected earlier.
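For example, the discovery address can be taken from the external IP of the corresponding service. This sketch assumes the load balancer exposes an IP rather than a hostname:

$ # Option (1): address of the shared istio-ingressgateway.
$ export ISTIOD_REMOTE_EP=$(kubectl get svc istio-ingressgateway -n istio-system --context=${MAIN_CLUSTER_CTX} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

$ # Option (2): address of the internal load balancer on the istiod service.
$ export ISTIOD_REMOTE_EP=$(kubectl get svc istiod -n istio-system --context=${MAIN_CLUSTER_CTX} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')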
Remote cluster
Create the remote cluster’s configuration.
$ cat <<EOF> istio-remote0-cluster.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      # The remote cluster's name and network name must match the values specified in the
      # mesh network configuration of the main cluster.
      multiCluster:
        clusterName: ${REMOTE_CLUSTER_NAME}
      network: ${REMOTE_CLUSTER_NETWORK}

      # Replace ISTIOD_REMOTE_EP with the value of ISTIOD_REMOTE_EP set earlier.
      remotePilotAddress: ${ISTIOD_REMOTE_EP}

  ## The istio-ingressgateway is not required in the remote cluster if both clusters are on
  ## the same network. To disable the istio-ingressgateway component, uncomment the lines below.
  #
  # components:
  #   ingressGateways:
  #   - name: istio-ingressgateway
  #     enabled: false
EOF
Apply the remote cluster configuration.
$ istioctl install -f istio-remote0-cluster.yaml --context=${REMOTE_CLUSTER_CTX}
Wait for the remote cluster to be ready.
$ kubectl get pod -n istio-system --context=${REMOTE_CLUSTER_CTX}
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-55f784779d-s5hwl 1/1 Running 0 91m
istiod-7b4bfd7b4f-fwmks 1/1 Running 0 91m
prometheus-c6df65594-pdxc4 2/2 Running 0 91m
Cross-cluster load balancing
Configure ingress gateways
Cross-network traffic is securely routed through each destination cluster’s ingress gateway. When clusters in a mesh are on different networks you need to configure port 443 on the ingress gateway to pass incoming traffic through to the target service specified in a request’s SNI header, for SNI values of the local top-level domain (i.e., the Kubernetes DNS domain). Mutual TLS connections will be used all the way from the source to the destination sidecar.
Apply the following configuration to each cluster.
$ cat <<EOF> cluster-aware-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: cluster-aware-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: tls
      protocol: TLS
    tls:
      mode: AUTO_PASSTHROUGH
    hosts:
    - "*.local"
EOF
$ kubectl apply -f cluster-aware-gateway.yaml --context=${MAIN_CLUSTER_CTX}
$ kubectl apply -f cluster-aware-gateway.yaml --context=${REMOTE_CLUSTER_CTX}
Configure cross-cluster service registries
To enable cross-cluster load balancing, the Istio control plane requires access to all clusters in the mesh to discover services, endpoints, and pod attributes. To configure access, create a secret for each remote cluster with credentials to access the remote cluster's kube-apiserver and install it in the main cluster. This secret uses the credentials of the istio-reader-service-account in the remote cluster. --name specifies the remote cluster's name. It must match the cluster name in the main cluster's IstioOperator configuration.
$ istioctl x create-remote-secret --name ${REMOTE_CLUSTER_NAME} --context=${REMOTE_CLUSTER_CTX} | \
kubectl apply -f - --context=${MAIN_CLUSTER_CTX}
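To confirm the secret was installed, you can list secrets in the main cluster. This is a quick check; the istio/multiCluster=true label is assumed to be what istioctl x create-remote-secret attaches and what the control plane's secret controller watches for:

$ kubectl get secret -n istio-system --context=${MAIN_CLUSTER_CTX} -l istio/multiCluster=true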
Deploy an example service
Deploy two instances of the helloworld service, one in each cluster. The difference between the two instances is the version of their helloworld image.
Deploy helloworld v2 in the remote cluster
Create a sample namespace with a sidecar auto-injection label:
$ kubectl create namespace sample --context=${REMOTE_CLUSTER_CTX}
$ kubectl label namespace sample istio-injection=enabled --context=${REMOTE_CLUSTER_CTX}
Deploy helloworld v2:
$ kubectl create -f samples/helloworld/helloworld.yaml -l app=helloworld -n sample --context=${REMOTE_CLUSTER_CTX}
$ kubectl create -f samples/helloworld/helloworld.yaml -l version=v2 -n sample --context=${REMOTE_CLUSTER_CTX}
Confirm helloworld v2 is running:
$ kubectl get pod -n sample --context=${REMOTE_CLUSTER_CTX}
NAME                             READY   STATUS    RESTARTS   AGE
helloworld-v2-7dd57c44c4-f56gq   2/2     Running   0          35s
Deploy helloworld v1 in the main cluster
Create a sample namespace with a sidecar auto-injection label:
$ kubectl create namespace sample --context=${MAIN_CLUSTER_CTX}
$ kubectl label namespace sample istio-injection=enabled --context=${MAIN_CLUSTER_CTX}
Deploy helloworld v1:
$ kubectl create -f samples/helloworld/helloworld.yaml -l app=helloworld -n sample --context=${MAIN_CLUSTER_CTX}
$ kubectl create -f samples/helloworld/helloworld.yaml -l version=v1 -n sample --context=${MAIN_CLUSTER_CTX}
Confirm helloworld v1 is running:
$ kubectl get pod -n sample --context=${MAIN_CLUSTER_CTX}
NAME                            READY   STATUS    RESTARTS   AGE
helloworld-v1-d4557d97b-pv2hr   2/2     Running   0          40s
Cross-cluster routing in action
To demonstrate how traffic to the helloworld service is distributed across the two clusters, call the helloworld service from another in-mesh sleep service.
Deploy the sleep service in both clusters:
$ kubectl apply -f samples/sleep/sleep.yaml -n sample --context=${MAIN_CLUSTER_CTX}
$ kubectl apply -f samples/sleep/sleep.yaml -n sample --context=${REMOTE_CLUSTER_CTX}
Wait for the sleep service to start in each cluster:
$ kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX}
sleep-754684654f-n6bzf           2/2     Running   0          5s
$ kubectl get pod -n sample -l app=sleep --context=${REMOTE_CLUSTER_CTX}
sleep-754684654f-dzl9j           2/2     Running   0          5s
Call the helloworld.sample service several times from the main cluster:
$ kubectl exec -it -n sample -c sleep --context=${MAIN_CLUSTER_CTX} $(kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX} -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
Call the helloworld.sample service several times from the remote cluster:
$ kubectl exec -it -n sample -c sleep --context=${REMOTE_CLUSTER_CTX} $(kubectl get pod -n sample -l app=sleep --context=${REMOTE_CLUSTER_CTX} -o jsonpath='{.items[0].metadata.name}') -- curl helloworld.sample:5000/hello
If set up correctly, the traffic to the helloworld.sample service will be distributed between instances on the main and remote clusters, resulting in responses with either v1 or v2 in the body:
Hello version: v2, instance: helloworld-v2-758dd55874-6x4t8
Hello version: v1, instance: helloworld-v1-86f77cd7bd-cpxhv
You can also verify the IP addresses used to access the endpoints with istioctl proxy-config.
$ kubectl get pod -n sample -l app=sleep --context=${MAIN_CLUSTER_CTX} -o name | cut -f2 -d'/' | \
xargs -I{} istioctl -n sample --context=${MAIN_CLUSTER_CTX} proxy-config endpoints {} --cluster "outbound|5000||helloworld.sample.svc.cluster.local"
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.10.0.90:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
192.23.120.32:443 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
In the main cluster, the endpoints are the gateway IP of the remote cluster (192.23.120.32:443) and the helloworld pod IP in the main cluster (10.10.0.90:5000).
$ kubectl get pod -n sample -l app=sleep --context=${REMOTE_CLUSTER_CTX} -o name | cut -f2 -d'/' | \
xargs -I{} istioctl -n sample --context=${REMOTE_CLUSTER_CTX} proxy-config endpoints {} --cluster "outbound|5000||helloworld.sample.svc.cluster.local"
ENDPOINT STATUS OUTLIER CHECK CLUSTER
10.32.0.9:5000 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
192.168.1.246:443 HEALTHY OK outbound|5000||helloworld.sample.svc.cluster.local
In the remote cluster, the endpoints are the gateway IP of the main cluster (192.168.1.246:443) and the pod IP in the remote cluster (10.32.0.9:5000).
Congratulations!
You have configured a multi-cluster Istio mesh, installed samples, and verified cross-cluster traffic routing.
Additional considerations
Automatic injection
The Istiod service in each cluster provides automatic sidecar injection for proxies in its own cluster. Namespaces must be labeled in each cluster following the automatic sidecar injection guide.
Access services from different clusters
Kubernetes resolves DNS on a per-cluster basis. Because the DNS resolution is tied to the cluster, you must define the service object in every cluster where a client runs, regardless of the location of the service's endpoints. To ensure this is the case, duplicate the service object to every cluster using kubectl. Duplication ensures Kubernetes can resolve the service name in any cluster. Since the service objects are defined in a namespace, you must create the namespace if it doesn't exist and include it in the service definitions in all clusters.
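For example, to make a service whose endpoints live only in the main cluster resolvable from the remote cluster, apply the same manifest in both clusters after creating the namespace where needed. This is a sketch; service.yaml is a hypothetical file containing only the Service object, not its workloads:

$ kubectl apply -f service.yaml -n sample --context=${MAIN_CLUSTER_CTX}
$ kubectl apply -f service.yaml -n sample --context=${REMOTE_CLUSTER_CTX}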
Security
The Istiod service in each cluster provides CA functionality to proxies in its own cluster. The CA setup earlier ensures proxies across clusters in the mesh have the same root of trust.
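To spot-check that both clusters share the same root of trust, you can compare the root certificate stored in each cluster's cacerts secret. This is a quick sanity check, assuming the secret created during preparation; the two hashes should match:

$ kubectl get secret cacerts -n istio-system --context=${MAIN_CLUSTER_CTX} -o jsonpath='{.data.root-cert\.pem}' | sha256sum
$ kubectl get secret cacerts -n istio-system --context=${REMOTE_CLUSTER_CTX} -o jsonpath='{.data.root-cert\.pem}' | sha256sum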
Uninstalling the remote cluster
To uninstall the remote cluster, run the following commands:
$ istioctl x create-remote-secret --name ${REMOTE_CLUSTER_NAME} --context=${REMOTE_CLUSTER_CTX} | \
kubectl delete -f - --context=${MAIN_CLUSTER_CTX}
$ istioctl manifest generate -f istio-remote0-cluster.yaml --context=${REMOTE_CLUSTER_CTX} | \
kubectl delete -f - --context=${REMOTE_CLUSTER_CTX}
$ kubectl delete namespace sample --context=${REMOTE_CLUSTER_CTX}
$ unset REMOTE_CLUSTER_CTX REMOTE_CLUSTER_NAME REMOTE_CLUSTER_NETWORK
$ rm istio-remote0-cluster.yaml
To uninstall the main cluster, run the following commands:
$ istioctl manifest generate -f istio-main-cluster.yaml --context=${MAIN_CLUSTER_CTX} | \
kubectl delete -f - --context=${MAIN_CLUSTER_CTX}
$ kubectl delete namespace sample --context=${MAIN_CLUSTER_CTX}
$ unset MAIN_CLUSTER_CTX MAIN_CLUSTER_NAME MAIN_CLUSTER_NETWORK ISTIOD_REMOTE_EP
$ rm istio-main-cluster.yaml cluster-aware-gateway.yaml