IBM Cloud Private

This example demonstrates how to use Istio's multicluster feature to join two IBM Cloud Private clusters together, using the multicluster installation instructions.

Create the IBM Cloud Private Clusters

  1. Install two IBM Cloud Private clusters. NOTE: Make sure that the Pod CIDR ranges and service CIDR ranges of each cluster are unique and do not overlap anywhere in the multicluster environment. These ranges can be configured by network_cidr and service_cluster_ip_range in cluster/config.yaml.

    ## Network in IPv4 CIDR format
    network_cidr: 10.1.0.0/16
    ## Kubernetes Settings
    service_cluster_ip_range: 10.0.0.1/24
  2. After the IBM Cloud Private cluster installations finish, validate kubectl access to each cluster. In this example, the two clusters are cluster-1 and cluster-2.

    1. Configure cluster-1 with kubectl (one way to manage the two cluster contexts is sketched after this list).

    2. Check the cluster status:

      $ kubectl get nodes
      $ kubectl get pods --all-namespaces
    3. Repeat the above two steps to validate cluster-2.
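
A minimal sketch of one way to switch kubectl between the two clusters, assuming you have saved each cluster's admin kubeconfig locally (the file names cluster-1.kubeconfig and cluster-2.kubeconfig are hypothetical placeholders):

    # Point kubectl at cluster-1 and confirm the active context.
    $ export KUBECONFIG=$HOME/cluster-1.kubeconfig
    $ kubectl config current-context

    # Switch to cluster-2 when you need to work against the second cluster.
    $ export KUBECONFIG=$HOME/cluster-2.kubeconfig
    $ kubectl config current-context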

Configure Pod Communication Across IBM Cloud Private Clusters

IBM Cloud Private uses Calico Node-to-Node Mesh by default to manage container networks. The BGP client on each node distributes IP routing information to all other nodes.

To ensure pods can communicate across the different clusters, you need to configure IP routes on all nodes in both clusters. Two steps are needed:

  1. Add IP routes from cluster-1 to cluster-2.

  2. Add IP routes from cluster-2 to cluster-1.

The following shows how to add the IP routes from cluster-1 to cluster-2 and validate pod-to-pod communication across the clusters. With Node-to-Node Mesh mode, each node has IP routes to the peer nodes in the cluster. In this example, both clusters have three nodes.
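
As an optional check before adding any routes, you can confirm on a node that the Calico node-to-node BGP mesh is established; this is a sketch that assumes the calicoctl binary is available on the node:

    # Show this node's BGP peering status; all peers should report "Established".
    $ sudo calicoctl node status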

The hosts file for cluster-1:

9.111.255.21 gyliu-icp-1
9.111.255.129 gyliu-icp-2
9.111.255.29 gyliu-icp-3

The hosts file for cluster-2:

9.111.255.152 gyliu-ubuntu-3
9.111.255.155 gyliu-ubuntu-2
9.111.255.77 gyliu-ubuntu-1
  1. Obtain the routing information on each of the three nodes in cluster-1 with the command ip route | grep bird. The output below shows the command run on each node in turn.

    $ ip route | grep bird
    10.1.43.0/26 via 9.111.255.29 dev tunl0 proto bird onlink
    10.1.158.192/26 via 9.111.255.129 dev tunl0 proto bird onlink
    blackhole 10.1.198.128/26 proto bird
    $ ip route | grep bird
    10.1.43.0/26 via 9.111.255.29 dev tunl0  proto bird onlink
    blackhole 10.1.158.192/26  proto bird
    10.1.198.128/26 via 9.111.255.21 dev tunl0  proto bird onlink
    $ ip route | grep bird
    blackhole 10.1.43.0/26  proto bird
    10.1.158.192/26 via 9.111.255.129 dev tunl0  proto bird onlink
    10.1.198.128/26 via 9.111.255.21 dev tunl0  proto bird onlink
  2. There are three IP routes in total for the three nodes in cluster-1.

    10.1.158.192/26 via 9.111.255.129 dev tunl0  proto bird onlink
    10.1.198.128/26 via 9.111.255.21 dev tunl0  proto bird onlink
    10.1.43.0/26 via 9.111.255.29 dev tunl0  proto bird onlink
  3. Add those three IP routes to all nodes in cluster-2 with the following commands:

    $ ip route add 10.1.158.192/26 via 9.111.255.129
    $ ip route add 10.1.198.128/26 via 9.111.255.21
    $ ip route add 10.1.43.0/26 via 9.111.255.29
  4. Use the same steps to add all IP routes from cluster-2 to cluster-1 (a sketch of that direction follows this list). After the configuration is complete, all the pods in the two clusters can communicate with each other.

  5. Verify cross-cluster pod communication by pinging a pod IP in cluster-2 from cluster-1. The following is a pod in cluster-2 with the pod IP 20.1.47.150.

    $ kubectl get pods -owide  -n kube-system | grep platform-ui
    platform-ui-lqccp                                             1/1       Running     0          3d        20.1.47.150     9.111.255.77
  6. From a node in cluster-1, ping the pod IP; the ping should succeed.

    $ ping 20.1.47.150
    PING 20.1.47.150 (20.1.47.150) 56(84) bytes of data.
    64 bytes from 20.1.47.150: icmp_seq=1 ttl=63 time=0.759 ms
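
The reverse direction mentioned in step 4 follows the same pattern. The sketch below uses illustrative cluster-2 pod CIDR blocks; the 20.1.x.x/26 values are hypothetical and must be replaced with the routes reported by ip route | grep bird on the cluster-2 nodes:

    # On every node in cluster-1, add the pod CIDR routes announced by the cluster-2 nodes.
    $ ip route add 20.1.47.128/26 via 9.111.255.77
    $ ip route add 20.1.58.192/26 via 9.111.255.155
    $ ip route add 20.1.100.0/26 via 9.111.255.152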

The steps in this section enable pod communication across clusters by configuring a full IP routing mesh across all nodes in the two IBM Cloud Private clusters.

Install Istio for multicluster

Follow the multicluster installation steps to install and configure the local Istio control plane and the Istio remote on cluster-1 and cluster-2.

This example uses cluster-1 as the local Istio control plane and cluster-2 as the Istio remote.
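
The following is a rough sketch of what the remote configuration involves; the label selectors, Helm chart path, and value names are assumptions based on the Istio 1.0-era istio-remote Helm chart and may differ in your release:

    # On cluster-1 (local control plane): capture the control plane pod IPs
    # that the remote cluster's sidecars and components need to reach.
    $ export PILOT_POD_IP=$(kubectl -n istio-system get pod -l istio=pilot -o jsonpath='{.items[0].status.podIP}')
    $ export POLICY_POD_IP=$(kubectl -n istio-system get pod -l istio=mixer,istio-mixer-type=policy -o jsonpath='{.items[0].status.podIP}')
    $ export TELEMETRY_POD_IP=$(kubectl -n istio-system get pod -l istio=mixer,istio-mixer-type=telemetry -o jsonpath='{.items[0].status.podIP}')

    # Render the istio-remote chart with those addresses, then apply it with
    # kubectl configured for cluster-2.
    $ helm template install/kubernetes/helm/istio-remote --namespace istio-system \
      --name istio-remote \
      --set global.remotePilotAddress=${PILOT_POD_IP} \
      --set global.remotePolicyAddress=${POLICY_POD_IP} \
      --set global.remoteTelemetryAddress=${TELEMETRY_POD_IP} > $HOME/istio-remote.yaml
    $ kubectl apply -f $HOME/istio-remote.yaml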

Deploy Bookinfo Example Across Clusters

NOTE: The following example assumes automatic sidecar injection is enabled.

  1. Install Bookinfo on the first cluster, cluster-1. Remove the reviews-v3 deployment so that it can be deployed on the remote cluster instead:

    $ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
    $ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
    $ kubectl delete deployment reviews-v3
  2. Create the reviews-v3.yaml manifest for deployment on the remote:

    ---
    ##################################################################################################
    # Ratings service
    ##################################################################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: ratings
      labels:
        app: ratings
    spec:
      ports:
      - port: 9080
        name: http
    ---
    ##################################################################################################
    # Reviews service
    ##################################################################################################
    apiVersion: v1
    kind: Service
    metadata:
      name: reviews
      labels:
        app: reviews
    spec:
      ports:
      - port: 9080
        name: http
      selector:
        app: reviews
    ---
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: reviews-v3
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: reviews
            version: v3
        spec:
          containers:
          - name: reviews
            image: istio/examples-bookinfo-reviews-v3:1.5.0
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 9080

    Note: The ratings service definition is added to the remote cluster because reviews-v3 is a client of ratings and creating the service object creates a DNS entry. The Istio sidecar in the reviews-v3 pod will determine the proper ratings endpoint after the DNS lookup is resolved to a service address. This would not be necessary if a multicluster DNS solution were additionally set up, e.g. as in a federated Kubernetes environment.

  3. Install the reviews-v3 deployment on the remote cluster-2.

    $ kubectl apply -f $HOME/reviews-v3.yaml
  4. Determine the ingress IP and port of the istio-ingressgateway service and set the INGRESS_HOST and INGRESS_PORT variables for accessing the gateway (one way to set them is sketched below).

    Access http://<INGRESS_HOST>:<INGRESS_PORT>/productpage repeatedly; each version of reviews should be load balanced equally, including reviews-v3 in the remote cluster (red stars). It may take several dozen accesses to observe the equal load balancing between the reviews versions.
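
    One way to set these variables when the gateway is exposed through a node port (a sketch; the port name http2 and the use of a node host IP are assumptions about your environment):

    # Use the host IP of the ingress gateway pod and the node port of its http2 port.
    $ export INGRESS_HOST=$(kubectl -n istio-system get pod -l istio=ingressgateway -o jsonpath='{.items[0].status.hostIP}')
    $ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
    $ echo http://$INGRESS_HOST:$INGRESS_PORT/productpage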

See also

Example multicluster GKE install of Istio.

Example multicluster between IBM Cloud Kubernetes Service & IBM Cloud Private.

Install Istio with multicluster support.

Instructions to download the Istio release.

Instructions to setup a Google Kubernetes Engine cluster for Istio.

Describes the options available when installing Istio using the included Helm chart.