Replicated control planes
Follow this guide to install an Istio multicluster deployment with a replicated control plane instance in every cluster, using gateways to connect services across clusters.
Instead of using a shared Istio control plane to manage the mesh, in this configuration each cluster has its own Istio control plane installation, each managing its own endpoints. All of the clusters are under shared administrative control for the purposes of policy enforcement and security.
A single Istio service mesh across the clusters is achieved by replicating shared services and namespaces and using a common root CA in all of the clusters. Cross-cluster communication occurs over the Istio gateways of the respective clusters.
Prerequisites
Two or more Kubernetes clusters running version 1.15, 1.16, 1.17, or 1.18.
Authority to deploy the Istio control plane on each Kubernetes cluster.
The IP address of the istio-ingressgateway service in each cluster must be accessible from every other cluster, ideally using L4 network load balancers (NLB). Not all cloud providers support NLBs and some require special annotations to use them, so please consult your cloud provider's documentation for enabling NLBs for service object type load balancers. When deploying on platforms without NLB support, it may be necessary to modify the health checks for the load balancer to register the ingress gateway.

A root CA. Cross-cluster communication requires a mutual TLS connection between services. To enable mutual TLS communication across clusters, each cluster's Istio CA will be configured with intermediate CA credentials generated by a shared root CA. For illustration purposes, you use a sample root CA certificate available in the Istio installation under the samples/certs directory.
Deploy the Istio control plane in each cluster
Generate intermediate CA certificates for each cluster’s Istio CA from your organization’s root CA. The shared root CA enables mutual TLS communication across different clusters.
For illustration purposes, the following instructions use the certificates from the Istio samples directory for both clusters. In real world deployments, you would likely use a different CA certificate for each cluster, all signed by a common root CA.
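For a rough idea of what generating such an intermediate CA might look like outside of any Istio tooling, the following openssl sketch signs a per-cluster intermediate certificate with your organization's root CA. The file names (root-key.pem, root-cert.pem, and the cluster1-* outputs), the subject, and the lifetime are placeholders for your own CA material, not values required by Istio:

$ openssl genrsa -out cluster1-ca-key.pem 4096
$ openssl req -new -key cluster1-ca-key.pem -out cluster1-ca.csr \
    -subj "/O=Example Org/CN=cluster1 intermediate CA"
$ openssl x509 -req -in cluster1-ca.csr -CA root-cert.pem -CAkey root-key.pem -CAcreateserial \
    -days 730 -extfile <(printf "basicConstraints=critical,CA:TRUE\nkeyUsage=critical,keyCertSign") \
    -out cluster1-ca-cert.pem
$ cat cluster1-ca-cert.pem root-cert.pem > cluster1-cert-chain.pem

The resulting key, certificate, chain, and root certificate would then take the place of the sample files in the cacerts secret created below.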
Run the following commands in every cluster to deploy an identical Istio control plane configuration in all of them.
Create a Kubernetes secret for your generated CA certificates using a command similar to the following. See Certificate Authority (CA) certificates for more details.
$ kubectl create namespace istio-system
$ kubectl create secret generic cacerts -n istio-system \
    --from-file=samples/certs/ca-cert.pem \
    --from-file=samples/certs/ca-key.pem \
    --from-file=samples/certs/root-cert.pem \
    --from-file=samples/certs/cert-chain.pem
Install Istio:
$ istioctl install \
    -f manifests/examples/multicluster/values-istio-multicluster-gateways.yaml
For further details and customization options, refer to the installation instructions.
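As an optional sanity check, you can confirm that the control plane pods are running and that the istiocoredns service (used for .global DNS resolution below) and the ingress gateway were created:

$ kubectl get pods -n istio-system
$ kubectl get svc -n istio-system istiocoredns istio-ingressgateway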
Set up DNS
Providing DNS resolution for services in remote clusters will allow existing applications to function unmodified, as applications typically expect to resolve services by their DNS names and access the resulting IP. Istio itself does not use the DNS for routing requests between services. Services local to a cluster share a common DNS suffix (e.g., svc.cluster.local). Kubernetes DNS provides DNS resolution for these services.

To provide a similar setup for services from remote clusters, you name services from remote clusters in the format <name>.<namespace>.global. Istio also ships with a CoreDNS server that will provide DNS resolution for these services. In order to utilize this DNS, Kubernetes' DNS must be configured to stub a domain for .global.
Create one of the following ConfigMaps, or update an existing one, in each cluster that will be calling services in remote clusters (every cluster in the general case). Pick the variant that matches your cluster's DNS implementation: kube-dns, CoreDNS using the proxy plugin, or CoreDNS using the forward plugin (with or without the ready plugin):
For clusters using kube-dns:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
data:
stubDomains: |
{"global": ["$(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})"]}
EOF
For clusters using CoreDNS with the proxy plugin:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
proxy . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
global:53 {
errors
cache 30
proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
}
EOF
For clusters using CoreDNS with the forward plugin:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
global:53 {
errors
cache 30
forward . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP}):53
}
EOF
For clusters using CoreDNS with the forward and ready plugins:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health
ready
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf
cache 30
loop
reload
loadbalance
}
global:53 {
errors
cache 30
forward . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP}):53
}
EOF
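To confirm that the stub domain is in place, you can print the active DNS configuration. For example, assuming your cluster uses CoreDNS, look for the global:53 block in the Corefile:

$ kubectl get configmap coredns -n kube-system -o jsonpath='{.data.Corefile}'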
Configure application services
Every service in a given cluster that needs to be accessed from a different remote cluster requires a ServiceEntry configuration in the remote cluster. The host used in the service entry should be of the form <name>.<namespace>.global, where name and namespace correspond to the service's name and namespace respectively. For example, a service named httpbin in the bar namespace is addressed from remote clusters as httpbin.bar.global.
To demonstrate cross-cluster access, configure the sleep service running in one cluster to call the httpbin service running in a second cluster. Before you begin:

- Choose two of your Istio clusters, to be referred to as cluster1 and cluster2.

You can use the kubectl command to access both the cluster1 and cluster2 clusters with the --context flag, for example kubectl get pods --context cluster1. Use the following command to list your contexts:

$ kubectl config get-contexts
CURRENT   NAME       CLUSTER    AUTHINFO       NAMESPACE
*         cluster1   cluster1   user@foo.com   default
          cluster2   cluster2   user@foo.com   default
Store the context names of your clusters in environment variables:

$ export CTX_CLUSTER1=$(kubectl config view -o jsonpath='{.contexts[0].name}')
$ export CTX_CLUSTER2=$(kubectl config view -o jsonpath='{.contexts[1].name}')
$ echo CTX_CLUSTER1 = ${CTX_CLUSTER1}, CTX_CLUSTER2 = ${CTX_CLUSTER2}
CTX_CLUSTER1 = cluster1, CTX_CLUSTER2 = cluster2
Configure the example services
Deploy the sleep service in cluster1.

$ kubectl create --context=$CTX_CLUSTER1 namespace foo
$ kubectl label --context=$CTX_CLUSTER1 namespace foo istio-injection=enabled
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f samples/sleep/sleep.yaml
$ export SLEEP_POD=$(kubectl get --context=$CTX_CLUSTER1 -n foo pod -l app=sleep -o jsonpath={.items..metadata.name})
Deploy the httpbin service in cluster2.

$ kubectl create --context=$CTX_CLUSTER2 namespace bar
$ kubectl label --context=$CTX_CLUSTER2 namespace bar istio-injection=enabled
$ kubectl apply --context=$CTX_CLUSTER2 -n bar -f samples/httpbin/httpbin.yaml
Export the cluster2 gateway address:

$ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
    -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
This command sets the value to the gateway’s public IP, but note that you can set it to a DNS name instead, if you have one.
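For example, on platforms where the load balancer publishes a hostname rather than an IP address (an assumption about your environment, not a required step), you could export the hostname field instead:

$ export CLUSTER2_GW_ADDR=$(kubectl get --context=$CTX_CLUSTER2 svc --selector=app=istio-ingressgateway \
    -n istio-system -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')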
Create a service entry for the httpbin service in cluster1.

To allow sleep in cluster1 to access httpbin in cluster2, we need to create a service entry for it. The host name of the service entry should be of the form <name>.<namespace>.global, where name and namespace correspond to the remote service's name and namespace respectively.

For DNS resolution of services under the *.global domain, you need to assign these services an IP address. If the global services have actual VIPs, you can use those, but otherwise we suggest using IPs from the class E address range 240.0.0.0/4. Application traffic for these IPs will be captured by the sidecar and routed to the appropriate remote service.

$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin-bar
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  # Treat remote cluster services as part of the service mesh
  # as all clusters in the service mesh share the same root of trust.
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # the IP address to which httpbin.bar.global will resolve
  # must be unique for each remote service, within a given cluster.
  # This address need not be routable. Traffic for this IP will be captured
  # by the sidecar and routed appropriately.
  - 240.0.0.2
  endpoints:
  # This is the routable address of the ingress gateway in cluster2 that
  # sits in front of the httpbin.bar service. Traffic from the sidecar will be
  # routed to this address.
  - address: ${CLUSTER2_GW_ADDR}
    ports:
      http1: 15443 # Do not change this port value
EOF
The configuration above will result in all traffic in cluster1 for httpbin.bar.global, on any port, being routed to the endpoint $CLUSTER2_GW_ADDR:15443 over a mutual TLS connection.

The gateway for port 15443 is a special SNI-aware Envoy, preconfigured and installed when you deployed the Istio control plane in the cluster. Traffic entering port 15443 will be load balanced among pods of the appropriate internal service of the target cluster (in this case, httpbin.bar in cluster2).

Verify that httpbin is accessible from the sleep service.

$ kubectl exec --context=$CTX_CLUSTER1 $SLEEP_POD -n foo -c sleep -- curl -I httpbin.bar.global:8000/headers
Send remote traffic via an egress gateway
If you want to route traffic from cluster1 via a dedicated egress gateway, instead of directly from the sidecars, use the following service entry for httpbin.bar instead of the one in the previous section.

If $CLUSTER2_GW_ADDR is an IP address, use the first service entry below, which uses resolution: STATIC. If $CLUSTER2_GW_ADDR is a hostname, use the second one, which uses resolution: DNS.
- Export the cluster1 egress gateway address:

$ export CLUSTER1_EGW_ADDR=$(kubectl get --context=$CTX_CLUSTER1 svc --selector=app=istio-egressgateway \
    -n istio-system -o jsonpath='{.items[0].spec.clusterIP}')
- Apply the httpbin-bar service entry:
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: httpbin-bar
spec:
hosts:
# must be of form name.namespace.global
- httpbin.bar.global
location: MESH_INTERNAL
ports:
- name: http1
number: 8000
protocol: http
resolution: STATIC
addresses:
- 240.0.0.2
endpoints:
- address: ${CLUSTER2_GW_ADDR}
network: external
ports:
http1: 15443 # Do not change this port value
- address: ${CLUSTER1_EGW_ADDR}
ports:
http1: 15443
EOF
If the ${CLUSTER2_GW_ADDR} is a hostname, you can use resolution: DNS for the endpoint resolution:
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: httpbin-bar
spec:
hosts:
# must be of form name.namespace.global
- httpbin.bar.global
location: MESH_INTERNAL
ports:
- name: http1
number: 8000
protocol: http
resolution: DNS
addresses:
- 240.0.0.2
endpoints:
- address: ${CLUSTER2_GW_ADDR}
network: external
ports:
http1: 15443 # Do not change this port value
- address: istio-egressgateway.istio-system.svc.cluster.local
ports:
http1: 15443
EOF
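To check that traffic is actually flowing through the egress gateway, one option is to re-run the curl command from the earlier verification step and then inspect the egress gateway logs. This assumes access logging is enabled in your mesh configuration; otherwise the logs may be empty:

$ kubectl logs --context=$CTX_CLUSTER1 -n istio-system -l istio=egressgateway --tail=10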
Clean up the example
Execute the following commands to clean up the example services.
Clean up cluster1:

$ kubectl delete --context=$CTX_CLUSTER1 -n foo -f samples/sleep/sleep.yaml
$ kubectl delete --context=$CTX_CLUSTER1 -n foo serviceentry httpbin-bar
$ kubectl delete --context=$CTX_CLUSTER1 ns foo
Clean up cluster2:

$ kubectl delete --context=$CTX_CLUSTER2 -n bar -f samples/httpbin/httpbin.yaml
$ kubectl delete --context=$CTX_CLUSTER2 ns bar
Clean up the environment variables:

$ unset SLEEP_POD CLUSTER2_GW_ADDR CLUSTER1_EGW_ADDR CTX_CLUSTER1 CTX_CLUSTER2
Version-aware routing to remote services
If the remote service has multiple versions, you can add labels to the service entry endpoints. For example:
$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: httpbin-bar
spec:
hosts:
# must be of form name.namespace.global
- httpbin.bar.global
location: MESH_INTERNAL
ports:
- name: http1
number: 8000
protocol: http
resolution: DNS
addresses:
  # the IP address to which httpbin.bar.global will resolve
  # must be unique for each service.
- 240.0.0.2
endpoints:
- address: ${CLUSTER2_GW_ADDR}
labels:
cluster: cluster2
ports:
http1: 15443 # Do not change this port value
EOF
You can then create virtual services and destination rules to define subsets of the httpbin.bar.global service using the appropriate gateway label selectors. The instructions are the same as those used for routing to a local service. See multicluster version routing for a complete example.
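As a rough sketch, not taken from that example, a destination rule in cluster1 could define a subset selecting the cluster: cluster2 endpoint label, and a virtual service could then route httpbin.bar.global traffic to it. The subset name and routing rule below are illustrative:

$ kubectl apply --context=$CTX_CLUSTER1 -n foo -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-bar-global
spec:
  host: httpbin.bar.global
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: remote-cluster2
    labels:
      cluster: cluster2
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-bar-global
spec:
  hosts:
  - httpbin.bar.global
  http:
  - route:
    - destination:
        host: httpbin.bar.global
        subset: remote-cluster2
EOF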
Uninstalling
Uninstall Istio by running the following commands on every cluster:
$ istioctl manifest generate \
-f manifests/examples/multicluster/values-istio-multicluster-gateways.yaml \
| kubectl delete -f -
Summary
Using Istio gateways, a common root CA, and service entries, you can configure a single Istio service mesh across multiple Kubernetes clusters. Once configured this way, traffic can be transparently routed to remote clusters without any application involvement. Although this approach requires a certain amount of manual configuration for remote service access, the service entry creation process could be automated.