Virtual Machines in Multi-Network Meshes

This example provides instructions to integrate a VM or a bare metal host into a multi-network Istio mesh deployed on Kubernetes, using gateways. This approach doesn’t require VPN connectivity or direct network access between the VM or bare metal host and the clusters.

Prerequisites

  • One or more Kubernetes clusters running a supported version (1.14, 1.15, or 1.16).

  • Virtual machines (VMs) must have IP connectivity to the Ingress gateways in the mesh.

Installation steps

Setup consists of preparing the mesh for expansion, then installing and configuring each VM.

Customized installation of Istio on the cluster

The first step when adding non-Kubernetes services to an Istio mesh is to configure the Istio installation itself, and generate the configuration files that let VMs connect to the mesh. Prepare the cluster for the VM with the following commands on a machine with cluster admin privileges:

  1. Create a Kubernetes secret for your generated CA certificates using a command similar to the following. See Certificate Authority (CA) certificates for more details.

    $ kubectl create namespace istio-system
    $ kubectl create secret generic cacerts -n istio-system \
        --from-file=samples/certs/ca-cert.pem \
        --from-file=samples/certs/ca-key.pem \
        --from-file=samples/certs/root-cert.pem \
        --from-file=samples/certs/cert-chain.pem
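
    To confirm the secret was created before continuing, a quick check such as the following can help:

    $ kubectl get secret cacerts -n istio-system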
    
  2. Deploy the Istio control plane into the cluster.

    $ istioctl manifest apply \
        -f install/kubernetes/operator/examples/vm/values-istio-meshexpansion-gateways.yaml \
        --set coreDNS.enabled=true
    

    For further details and customization options, refer to the installation instructions.
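
    Once the command completes, you can optionally confirm that the control plane pods are running before moving on:

    $ kubectl get pods -n istio-system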

  3. Create the vm namespace for the VM services.

    $ kubectl create ns vm
    
  4. Define the namespace the VM joins. This example uses the SERVICE_NAMESPACE environment variable to store the namespace. The value of this variable must match the namespace you use in the configuration files later on.

    $ export SERVICE_NAMESPACE="vm"
    
  5. Extract the initial keys the service account needs to use on the VMs.

    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default  \
        -o jsonpath='{.data.root-cert\.pem}' | base64 --decode > root-cert.pem
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default  \
        -o jsonpath='{.data.key\.pem}' | base64 --decode > key.pem
    $ kubectl -n $SERVICE_NAMESPACE get secret istio.default  \
          -o jsonpath='{.data.cert-chain\.pem}' | base64 --decode > cert-chain.pem
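
    Optionally, you can sanity check the extracted certificate chain with openssl, assuming openssl is available on the admin machine:

    $ openssl x509 -in cert-chain.pem -noout -subject -dates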
    
  6. Determine and store the IP address of the Istio ingress gateway, since the VMs access Citadel, Pilot, and workloads on the cluster through this IP address.

    $ export GWIP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    $ echo $GWIP
    35.232.112.158
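
    On platforms where the load balancer exposes a hostname instead of an IP address (for example, an AWS ELB), the ip field above is empty. In that case a variation like the following retrieves the hostname instead; note that the /etc/hosts step later in this guide needs an IP address, so you would have to resolve the hostname first:

    $ export GWIP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')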
    
  7. Generate a cluster.env configuration to deploy in the VMs. This file contains the Kubernetes cluster IP address ranges to intercept and redirect via Envoy.

    $ echo -e "ISTIO_CP_AUTH=MUTUAL_TLS\nISTIO_SERVICE_CIDR=$ISTIO_SERVICE_CIDR\n" > cluster.env
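
    The command above assumes the ISTIO_SERVICE_CIDR environment variable already holds your cluster's service CIDR. How you obtain it depends on the platform; for example, on GKE a sketch like the following should work, where K8S_CLUSTER, MY_ZONE and MY_PROJECT are placeholders for your cluster name, zone and project:

    $ ISTIO_SERVICE_CIDR=$(gcloud container clusters describe $K8S_CLUSTER --zone $MY_ZONE --project $MY_PROJECT --format "value(servicesIpv4Cidr)")
    $ echo $ISTIO_SERVICE_CIDR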
    
  8. Check the contents of the generated cluster.env file. It should be similar to the following example:

    $ cat cluster.env
    ISTIO_CP_AUTH=MUTUAL_TLS
    ISTIO_SERVICE_CIDR=172.21.0.0/16
    
  9. If the VM only calls services in the mesh, you can skip this step. Otherwise, add the ports the VM exposes to the cluster.env file with the following command. You can change the ports later if necessary.

    $ echo "ISTIO_INBOUND_PORTS=8888" >> cluster.env
    

Setup DNS

Refer to Setup DNS for instructions on setting up DNS for the cluster.

Setting up the VM

Next, run the following commands on each machine that you want to add to the mesh:

  1. Copy the previously created cluster.env and *.pem files to the VM.

  2. Install the Debian package with the Envoy sidecar.

    $ curl -L https://storage.googleapis.com/istio-release/releases/1.5.4/deb/istio-sidecar.deb > istio-sidecar.deb
    $ sudo dpkg -i istio-sidecar.deb
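
    A quick way to confirm the installation succeeded, assuming the package registers under the name istio-sidecar:

    $ dpkg -s istio-sidecar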
    
  3. Add the IP address of the Istio gateway to /etc/hosts. Revisit the Customized installation of Istio on the cluster section to learn how to obtain the IP address. The following example updates the /etc/hosts file with the Istio gateway address:

    $ echo "35.232.112.158 istio-citadel istio-pilot istio-pilot.istio-system" | sudo tee -a /etc/hosts
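
    You can verify that the names now resolve locally:

    $ getent hosts istio-pilot.istio-system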
    
  4. Install root-cert.pem, key.pem and cert-chain.pem under /etc/certs/.

    $ sudo mkdir -p /etc/certs
    $ sudo cp {root-cert.pem,cert-chain.pem,key.pem} /etc/certs
    
  5. Install cluster.env under /var/lib/istio/envoy/.

    $ sudo cp cluster.env /var/lib/istio/envoy
    
  6. Transfer ownership of the files in /etc/certs/ and /var/lib/istio/envoy/ to the Istio proxy.

    $ sudo chown -R istio-proxy /etc/certs /var/lib/istio/envoy
    
  7. Start Istio using systemctl.

    $ sudo systemctl start istio-auth-node-agent
    $ sudo systemctl start istio
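
    You can then check that both services started cleanly:

    $ sudo systemctl status istio-auth-node-agent
    $ sudo systemctl status istio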
    

Added Istio resources

The Istio resources below are created to support adding VMs to the mesh through gateways. These resources remove the requirement for a flat network between the VMs and the cluster.

Resource Kind                          Resource Name                        Function
configmap                              coredns                              Send *.global requests to the istiocoredns service
service                                istiocoredns                         Resolve *.global to the Istio ingress gateway
gateway.networking.istio.io            meshexpansion-gateway                Open ports for Pilot, Citadel and Mixer
gateway.networking.istio.io            istio-multicluster-ingressgateway    Open port 15443 for inbound *.global traffic
envoyfilter.networking.istio.io        istio-multicluster-ingressgateway    Transform *.global to *.svc.cluster.local
destinationrule.networking.istio.io    istio-multicluster-destinationrule   Set traffic policy for 15443 traffic
destinationrule.networking.istio.io    meshexpansion-dr-pilot               Set traffic policy for istio-pilot
destinationrule.networking.istio.io    istio-policy                         Set traffic policy for istio-policy
destinationrule.networking.istio.io    istio-telemetry                      Set traffic policy for istio-telemetry
virtualservice.networking.istio.io     meshexpansion-vs-pilot               Set route info for istio-pilot
virtualservice.networking.istio.io     meshexpansion-vs-citadel             Set route info for istio-citadel
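
If you want to confirm these resources exist on your cluster, you can query them directly, for example:

$ kubectl get gateways.networking.istio.io -n istio-system
$ kubectl get envoyfilters.networking.istio.io -n istio-system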

Expose services running on the cluster to the VMs

Every service in the cluster that needs to be accessed from the VM requires a service entry configuration in the cluster. The host used in the service entry should be of the form <name>.<namespace>.global where name and namespace correspond to the service’s name and namespace respectively.

To demonstrate access from a VM to cluster services, configure the httpbin service in the cluster.

  1. Deploy the httpbin service in the cluster.

    $ kubectl create namespace bar
    $ kubectl label namespace bar istio-injection=enabled
    $ kubectl apply -n bar -f samples/httpbin/httpbin.yaml
    
  2. Create a service entry for the httpbin service in the cluster.

    To allow services on the VM to access httpbin in the cluster, we need to create a service entry for it. The host name of the service entry should be of the form <name>.<namespace>.global, where name and namespace correspond to the remote service’s name and namespace respectively.

    For DNS resolution for services under the *.global domain, you need to assign these services an IP address.

    If the global services have actual VIPs, you can use those, but otherwise we suggest using IPs from the loopback range 127.0.0.0/8 that are not already allocated. These IPs are non-routable outside of a pod. In this example we’ll use IPs in 127.255.0.0/16 which avoids conflicting with well known IPs such as 127.0.0.1 (localhost). Application traffic for these IPs will be captured by the sidecar and routed to the appropriate remote service.

    $ kubectl apply  -n bar -f - <<EOF
    apiVersion: networking.istio.io/v1alpha3
    kind: ServiceEntry
    metadata:
      name: httpbin.bar.forvms
    spec:
      hosts:
      # must be of form name.namespace.global
      - httpbin.bar.global
      location: MESH_INTERNAL
      ports:
      - name: http1
        number: 8000
        protocol: http
      resolution: DNS
      addresses:
      # the IP address to which httpbin.bar.global will resolve to
      # must be unique for each service, within a given cluster.
      # This address need not be routable. Traffic for this IP will be captured
      # by the sidecar and routed appropriately.
      # This address will also be added into VM's /etc/hosts
      - 127.255.0.3
      endpoints:
      # This is the routable address of the ingress gateway in the cluster.
      # Traffic from the VMs will be
      # routed to this address.
      - address: ${CLUSTER_GW_ADDR}
        ports:
          http1: 15443 # Do not change this port value
    EOF
    

    The configuration above results in all traffic from the VMs for httpbin.bar.global, on any port, being routed to the endpoint <IPofClusterIngressGateway>:15443 over a mutual TLS connection. Here ${CLUSTER_GW_ADDR} is the routable address of the cluster’s ingress gateway, for example the GWIP address determined earlier.

    The gateway on port 15443 is a special SNI-aware Envoy, preconfigured and installed as part of the mesh expansion with gateways installation in the Customized installation of Istio on the cluster section. Traffic entering port 15443 is load balanced among the pods of the appropriate internal service of the target cluster (in this case, httpbin.bar).

Send requests from VM to Kubernetes services

After setup, the machine can access services running in the Kubernetes cluster.

The following example shows how to access a service running in the Kubernetes cluster from a VM using an entry in /etc/hosts, in this case for the httpbin service.

  1. On the added VM, add the service name and address to its /etc/hosts file. You can then connect to the cluster service from the VM, as in the example below:

    $ echo "127.255.0.3 httpbin.bar.global" | sudo tee -a /etc/hosts
    $ curl -v httpbin.bar.global:8000
    < HTTP/1.1 200 OK
    < server: envoy
    < content-type: text/html; charset=utf-8
    < content-length: 9593
    
    ... html content ...
    

The server: envoy header indicates that the sidecar intercepted the traffic.

Running services on the added VM

  1. Set up an HTTP server on the VM instance to serve HTTP traffic on port 8888:

    $ python -m SimpleHTTPServer 8888
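
    If the VM only has Python 3 installed, the equivalent command is:

    $ python3 -m http.server 8888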
    
  2. Determine the VM instance’s IP address.
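
    One way to do this on a typical Linux VM, storing the result in the VM_IP variable used in the next step (this assumes the hostname -I option is available, as it is on Debian and Ubuntu):

    $ export VM_IP=$(hostname -I | cut -d' ' -f1)
    $ echo $VM_IP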

  3. Add the VM services to the mesh.

    $ istioctl experimental add-to-mesh external-service vmhttp ${VM_IP} http:8888 -n ${SERVICE_NAMESPACE}
    
  4. Deploy a pod running the sleep service in the Kubernetes cluster, and wait until it is ready:

    $ kubectl apply -f samples/sleep/sleep.yaml
    $ kubectl get pod
    NAME                             READY     STATUS    RESTARTS   AGE
    sleep-88ddbcfdd-rm42k            2/2       Running   0          1s
    ...
    
  5. Send a request from the sleep service on the pod to the VM’s HTTP service:

    $ kubectl exec -it sleep-88ddbcfdd-rm42k -c sleep -- curl vmhttp.${SERVICE_NAMESPACE}.svc.cluster.local:8888
    

    If configured properly, you will see something similar to the output below.

    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
    <title>Directory listing for /</title>
    <body>
    <h2>Directory listing for /</h2>
    <hr>
    <ul>
    <li><a href=".bashrc">.bashrc</a></li>
    <li><a href=".ssh/">.ssh/</a></li>
    ...
    </body>
    

Congratulations! You successfully configured a service running in a pod within the cluster to send traffic to a service running on a VM outside of the cluster and tested that the configuration worked.

Cleanup

Run the following commands to remove the expansion VM from the mesh’s abstract model.

$ istioctl experimental remove-from-mesh -n ${SERVICE_NAMESPACE} vmhttp
Kubernetes Service "vmhttp.vm" has been deleted for external service "vmhttp"
Service Entry "mesh-expansion-vmhttp" has been deleted for external service "vmhttp"
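
If you no longer need the namespaces created for this example, you can remove them as well:

$ kubectl delete namespace bar
$ kubectl delete namespace vm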