IBM Cloud Private
This example demonstrates how to set up network connectivity between two IBM Cloud Private clusters and then compose them into a multicluster mesh using a single-network shared control plane topology.
Create the IBM Cloud Private Clusters
Install two IBM Cloud Private clusters. Make sure the pod CIDR (`network_cidr`) and service CIDR (`service_cluster_ip_range`) values in each cluster's `cluster/config.yaml` do not overlap with those of the other cluster, because cross-cluster routing is configured later using these ranges. The defaults are:

```yaml
# Default IPv4 CIDR is 10.1.0.0/16
# Default IPv6 CIDR is fd03::0/112
network_cidr: 10.1.0.0/16

## Kubernetes Settings
# Default IPv4 Service Cluster Range is 10.0.0.0/16
# Default IPv6 Service Cluster Range is fd02::0/112
service_cluster_ip_range: 10.0.0.0/16
```
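For example, `cluster-2` could use ranges such as the following. These exact values are an assumption for illustration only; the pod IP `20.1.58.247` shown later for `cluster-2` is consistent with a `20.1.0.0/16` pod CIDR:

```yaml
# cluster/config.yaml on cluster-2 (illustrative, assumed values;
# any ranges that do not overlap with cluster-1 will work)
network_cidr: 20.1.0.0/16
service_cluster_ip_range: 20.0.0.0/16
```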
After the IBM Cloud Private cluster installation finishes, validate `kubectl` access to each cluster. In this example, the two clusters are `cluster-1` and `cluster-2`.

Check the status of `cluster-1`:

```bash
$ kubectl get nodes
$ kubectl get pods --all-namespaces
```

Repeat the two commands above to validate `cluster-2`.
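Since the rest of this guide runs `kubectl` against both clusters, it helps to keep a separate kubeconfig context for each. The following is a minimal sketch that assumes contexts named `cluster-1` and `cluster-2` have already been configured; the context names are hypothetical:

```bash
# Check cluster-1
$ kubectl config use-context cluster-1
$ kubectl get nodes

# Switch to cluster-2 and repeat the check
$ kubectl config use-context cluster-2
$ kubectl get nodes
```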
Configure Pod Communication Across IBM Cloud Private Clusters
IBM Cloud Private uses Calico Node-to-Node Mesh by default to manage container networks. The BGP client on each node distributes IP routes to all nodes.

To ensure pods can communicate across the two clusters, you need to configure IP routes on all nodes in both clusters. In summary, you need the following two steps to configure pod communication across the two IBM Cloud Private clusters:

1. Add the IP routes from `cluster-1` to `cluster-2`.

1. Add the IP routes from `cluster-2` to `cluster-1`.
The following shows how to add the IP routes from `cluster-1` to `cluster-2` and validate pod-to-pod communication across the clusters. In Node-to-Node Mesh mode, each node has IP routes to the peer nodes in its cluster. In this example, both clusters have three nodes.
The `hosts` file for `cluster-1`:

```plain
172.16.160.23 micpnode1
172.16.160.27 micpnode2
172.16.160.29 micpnode3
```
The `hosts` file for `cluster-2`:

```plain
172.16.187.14 nicpnode1
172.16.187.16 nicpnode2
172.16.187.18 nicpnode3
```
Obtain the routing information on all nodes in `cluster-1` with the command `ip route | grep bird`:

```bash
$ ip route | grep bird
blackhole 10.1.103.128/26 proto bird
10.1.176.64/26 via 172.16.160.29 dev tunl0 proto bird onlink
10.1.192.0/26 via 172.16.160.27 dev tunl0 proto bird onlink
```

```bash
$ ip route | grep bird
10.1.103.128/26 via 172.16.160.23 dev tunl0 proto bird onlink
10.1.176.64/26 via 172.16.160.29 dev tunl0 proto bird onlink
blackhole 10.1.192.0/26 proto bird
```

```bash
$ ip route | grep bird
10.1.103.128/26 via 172.16.160.23 dev tunl0 proto bird onlink
blackhole 10.1.176.64/26 proto bird
10.1.192.0/26 via 172.16.160.27 dev tunl0 proto bird onlink
```
In total, there are three pod-network routes for the three nodes in `cluster-1`:

```plain
10.1.176.64/26 via 172.16.160.29 dev tunl0 proto bird onlink
10.1.103.128/26 via 172.16.160.23 dev tunl0 proto bird onlink
10.1.192.0/26 via 172.16.160.27 dev tunl0 proto bird onlink
```

Add those three routes on all nodes in `cluster-2` with the following commands:

```bash
$ ip route add 10.1.176.64/26 via 172.16.160.29
$ ip route add 10.1.103.128/26 via 172.16.160.23
$ ip route add 10.1.192.0/26 via 172.16.160.27
```
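If your clusters have more nodes, repeating these commands by hand becomes tedious. The following is a minimal sketch that automates both directions. It assumes passwordless SSH as root to every node and uses the host names from the `hosts` files above; adapt it to your environment before relying on it:

```bash
#!/bin/bash
# Sketch: copy the Calico pod-network routes between two IBM Cloud Private clusters.
# Assumes passwordless SSH as root to every node listed below.
CLUSTER1_NODES="micpnode1 micpnode2 micpnode3"
CLUSTER2_NODES="nicpnode1 nicpnode2 nicpnode3"

# Collect the "via" routes distributed by bird from every node of one cluster.
# Duplicates are harmless because routes are applied with "ip route replace".
collect_routes() {
  for node in $1; do
    ssh -n root@"${node}" 'ip route | grep bird | grep via' | awk '{print $1, $3}'
  done | sort -u
}

# Apply each collected "CIDR gateway" pair on every node of the other cluster.
add_routes() {
  local nodes="$1" routes="$2"
  for node in ${nodes}; do
    echo "${routes}" | while read -r cidr gw; do
      ssh -n root@"${node}" "ip route replace ${cidr} via ${gw}"
    done
  done
}

routes1=$(collect_routes "${CLUSTER1_NODES}")
routes2=$(collect_routes "${CLUSTER2_NODES}")

add_routes "${CLUSTER2_NODES}" "${routes1}"   # cluster-1 pod routes onto cluster-2 nodes
add_routes "${CLUSTER1_NODES}" "${routes2}"   # cluster-2 pod routes onto cluster-1 nodes
```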
Use the same steps to add all the IP routes from `cluster-2` to the nodes of `cluster-1`. After the configuration is complete, all the pods in the two clusters can communicate with each other.

Verify cross-cluster pod communication by pinging a pod IP in `cluster-2` from `cluster-1`. The following pod in `cluster-2` has the pod IP `20.1.58.247`:

```bash
$ kubectl -n kube-system get pod -owide | grep dns
kube-dns-ksmq6   1/1   Running   2   28d   20.1.58.247   172.16.187.14   <none>
```

From a node in `cluster-1`, ping the pod IP; the ping should succeed:

```bash
$ ping 20.1.58.247
PING 20.1.58.247 (20.1.58.247) 56(84) bytes of data.
64 bytes from 20.1.58.247: icmp_seq=1 ttl=63 time=1.73 ms
```
The steps above enable pod communication across the two clusters by configuring a full IP routing mesh across all nodes in the two IBM Cloud Private clusters.
Install Istio for multicluster
Follow the single-network shared control plane instructions to install and configure the local Istio control plane and the Istio remote on `cluster-1` and `cluster-2`.

This guide assumes that the local Istio control plane is deployed in `cluster-1`, while the Istio remote is deployed in `cluster-2`.
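Before continuing, it can help to confirm that the Istio components are running in both clusters. This is just a sanity check, assuming Istio was installed in the `istio-system` namespace in each cluster:

```bash
# On cluster-1, the full control plane should be running;
# on cluster-2, only the Istio remote components should be running.
$ kubectl -n istio-system get pods
```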
Deploy the Bookinfo example across clusters
The following example enables automatic sidecar injection.
Install `bookinfo` on the first cluster, `cluster-1`. Remove the `reviews-v3` deployment, which will be deployed on cluster `cluster-2` in the following step:

```bash
$ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo.yaml@
$ kubectl apply -f @samples/bookinfo/networking/bookinfo-gateway.yaml@
$ kubectl delete deployment reviews-v3
```
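Optionally, verify on `cluster-1` that the remaining Bookinfo pods come up with their sidecars injected and that `reviews-v3` is gone; this check is not part of the original steps:

```bash
# Each remaining Bookinfo pod should report READY 2/2 (application container
# plus injected sidecar), and no reviews-v3 pod should be listed.
$ kubectl get pods
```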
Deploy the `reviews-v3` service along with any corresponding services on the remote `cluster-2` cluster:

```bash
$ cat <<EOF | kubectl apply -f -
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v3:1.12.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
EOF
```
Note: The `ratings` service definition is added to the remote cluster because `reviews-v3` is a client of the `ratings` service, so a DNS entry for the `ratings` service is required for `reviews-v3`. The Istio sidecar in the `reviews-v3` pod will determine the proper `ratings` endpoint after the DNS lookup resolves to a service address. This would not be necessary if a multicluster DNS solution were additionally set up, for example as in a federated Kubernetes environment.

Determine the ingress IP and ports for `istio-ingressgateway`'s `INGRESS_HOST` and `INGRESS_PORT` variables to access the gateway.
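The following is one possible way to set these variables, assuming the `istio-ingressgateway` service on `cluster-1` is exposed through a `NodePort` (a common setup on IBM Cloud Private); if it is exposed through an external load balancer instead, use the load balancer's address and port:

```bash
# Assumes a NodePort service; adjust if istio-ingressgateway uses a LoadBalancer.
$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway \
    -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
$ export INGRESS_HOST=$(kubectl -n istio-system get pod -l istio=ingressgateway \
    -o jsonpath='{.items[0].status.hostIP}')
```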
Access `http://<INGRESS_HOST>:<INGRESS_PORT>/productpage` repeatedly; each version of `reviews` should be load balanced equally, including `reviews-v3` in the remote cluster (red stars). It may take several dozen accesses to demonstrate the equal load balancing between the `reviews` versions.
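As a quick command-line check, the following sketch counts how often `reviews-v3` serves the page over 100 requests. It assumes `INGRESS_HOST` and `INGRESS_PORT` are exported as above and that only the red-star ratings rendered by `reviews-v3` contain `color="red"` in the page HTML; roughly one third of the responses should match:

```bash
# Count responses that contain red stars (served by reviews-v3).
$ for i in $(seq 1 100); do
    curl -s "http://${INGRESS_HOST}:${INGRESS_PORT}/productpage" | grep -c 'color="red"'
  done | awk '$1 > 0 {hits++} END {print hits+0 " of 100 responses came from reviews-v3"}'
```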