Virtual Machines in Multi-Network Meshes
This example provides instructions to integrate a VM or a bare metal host into a multi-network Istio mesh deployed on Kubernetes, using gateways for connectivity. This approach doesn't require VPN connectivity or direct network access between the VMs, the bare metal hosts, and the clusters.
Prerequisites
One or more Kubernetes clusters running version 1.13, 1.14, or 1.15.
Virtual machines (VMs) must have IP connectivity to the Ingress gateways in the mesh.
Install the Helm client. Helm is needed to add VMs to your mesh.
Installation steps
Setup consists of preparing the mesh for expansion and installing and configuring each VM.
Customized installation of Istio on the cluster
The first step when adding non-Kubernetes services to an Istio mesh is to configure the Istio installation itself, and generate the configuration files that let VMs connect to the mesh. Prepare the cluster for the VM with the following commands on a machine with cluster admin privileges:
Generate a meshexpansion-gateways Istio configuration file using helm:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
  -f https://github.com/irisdingbj/meshExpansion/blob/master/values-istio-meshexpansion-gateways.yaml \
  > $HOME/istio-mesh-expansion-gateways.yaml
For further details and customization options, refer to the Installation with Helm instructions.
Deploy Istio control plane into the cluster
$ kubectl create namespace istio-system
$ helm template install/kubernetes/helm/istio-init --name istio-init --namespace istio-system | kubectl apply -f -
$ kubectl apply -f $HOME/istio-mesh-expansion-gateways.yaml
Verify that Istio installed successfully:
$ istioctl verify-install -f $HOME/istio-mesh-expansion-gateways.yaml
Create a vm namespace for the VM services.
$ kubectl create ns vm
Define the namespace the VM joins. This example uses the SERVICE_NAMESPACE environment variable to store the namespace. The value of this variable must match the namespace you use in the configuration files later on.
$ export SERVICE_NAMESPACE="vm"
Extract the initial keys the service account needs to use on the VMs.
$ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
  -o jsonpath='{.data.root-cert\.pem}' | base64 --decode > root-cert.pem
$ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
  -o jsonpath='{.data.key\.pem}' | base64 --decode > key.pem
$ kubectl -n $SERVICE_NAMESPACE get secret istio.default \
  -o jsonpath='{.data.cert-chain\.pem}' | base64 --decode > cert-chain.pem
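Optionally, sanity-check the extracted certificate before copying it to the VMs (a hedged check, assuming openssl is available on the admin machine):
$ openssl x509 -in cert-chain.pem -noout -subject -dates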
Determine and store the IP address of the Istio ingress gateway, since the VMs access Citadel, Pilot, and workloads on the cluster through this IP address.
$ export GWIP=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo $GWIP
35.232.112.158
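The next step reads an ISTIO_SERVICE_CIDR environment variable holding the cluster's service IP range, which this guide does not set explicitly. As a hedged example, on GKE you could obtain it as follows (K8S_CLUSTER, MY_ZONE, and MY_PROJECT are placeholder values for your cluster name, zone, and project):
$ ISTIO_SERVICE_CIDR=$(gcloud container clusters describe $K8S_CLUSTER --zone $MY_ZONE --project $MY_PROJECT --format "value(servicesIpv4Cidr)")
$ echo $ISTIO_SERVICE_CIDR
172.21.0.0/16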
Generate a cluster.env configuration to deploy in the VMs. This file contains the Kubernetes cluster IP address ranges to intercept and redirect via Envoy.
$ echo -e "ISTIO_CP_AUTH=MUTUAL_TLS\nISTIO_SERVICE_CIDR=$ISTIO_SERVICE_CIDR\n" > cluster.env
Check the contents of the generated cluster.env file. It should be similar to the following example:
$ cat cluster.env
ISTIO_CP_AUTH=MUTUAL_TLS
ISTIO_SERVICE_CIDR=172.21.0.0/16
Set up DNS
Provide DNS resolution so that services running on the VMs can access services running in the cluster. Istio itself does not use DNS to route requests between services. Services local to a cluster share a common DNS suffix (e.g., svc.cluster.local), and Kubernetes DNS provides DNS resolution for these services.
To provide a similar setup for services accessible from VMs, you name services in the clusters in the format <name>.<namespace>.global; for example, the httpbin service in the bar namespace becomes httpbin.bar.global. Istio also ships with a CoreDNS server that provides DNS resolution for these services. To use this DNS, Kubernetes' DNS must be configured to stub a domain for .global.
Create one of the following ConfigMaps, or update an existing one, in each cluster that will be calling services in remote clusters (every cluster in the general case):
For clusters that use kube-dns:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {"global": ["$(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})"]}
EOF
For clusters that use CoreDNS:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
    global:53 {
        errors
        cache 30
        proxy . $(kubectl get svc -n istio-system istiocoredns -o jsonpath={.spec.clusterIP})
    }
EOF
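Before relying on the stub domain, you can confirm that the istiocoredns service is present and note its cluster IP, which is the value substituted into the ConfigMaps above:
$ kubectl get svc istiocoredns -n istio-system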
Setting up the VM
Next, run the following commands on each machine that you want to add to the mesh:
Copy the previously created cluster.env and *.pem files to the VM, as sketched below.
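One hedged way to copy them, assuming SSH access to the VM (VM_USER and VM_HOST are placeholder values):
$ scp cluster.env root-cert.pem key.pem cert-chain.pem ${VM_USER}@${VM_HOST}: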
Install the Debian package with the Envoy sidecar.
$ curl -L https://storage.googleapis.com/istio-release/releases/1.4.6/deb/istio-sidecar.deb > istio-sidecar.deb
$ sudo dpkg -i istio-sidecar.deb
Add the IP address of the Istio gateway to /etc/hosts. Revisit the Customized installation of Istio on the cluster section to learn how to obtain the IP address. The following example updates the /etc/hosts file with the Istio gateway address:
$ echo "35.232.112.158 istio-citadel istio-pilot istio-pilot.istio-system" | sudo tee -a /etc/hosts
Install root-cert.pem, key.pem and cert-chain.pem under /etc/certs/.
$ sudo mkdir -p /etc/certs
$ sudo cp {root-cert.pem,cert-chain.pem,key.pem} /etc/certs
Install cluster.env under /var/lib/istio/envoy/.
$ sudo cp cluster.env /var/lib/istio/envoy
Transfer ownership of the files in /etc/certs/ and /var/lib/istio/envoy/ to the Istio proxy.
$ sudo chown -R istio-proxy /etc/certs /var/lib/istio/envoy
Verify the node agent works:
$ sudo node_agent
....
CSR is approved successfully. Will renew cert in 1079h59m59.84568493s
Start Istio using systemctl.
$ sudo systemctl start istio-auth-node-agent
$ sudo systemctl start istio
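To confirm that both daemons came up, you can check their status (a hedged check; unit names as used above):
$ sudo systemctl status istio-auth-node-agent istio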
Added Istio resources
The Istio resources below support adding VMs to the mesh through gateways. These resources remove the requirement for a flat network between the VMs and the cluster.
| Resource Kind | Resource Name | Function |
| --- | --- | --- |
| configmap | coredns | Sends *.global requests to the istiocoredns service |
| service | istiocoredns | Resolves *.global to the Istio ingress gateway |
| gateway.networking.istio.io | meshexpansion-gateway | Opens ports for Pilot, Citadel and Mixer |
| gateway.networking.istio.io | istio-multicluster-egressgateway | Opens port 15443 for outbound *.global traffic |
| gateway.networking.istio.io | istio-multicluster-ingressgateway | Opens port 15443 for inbound *.global traffic |
| envoyfilter.networking.istio.io | istio-multicluster-ingressgateway | Transforms *.global to *.svc.cluster.local |
| destinationrule.networking.istio.io | istio-multicluster-destinationrule | Sets traffic policy for port 15443 traffic |
| destinationrule.networking.istio.io | meshexpansion-dr-pilot | Sets traffic policy for istio-pilot |
| destinationrule.networking.istio.io | istio-policy | Sets traffic policy for istio-policy |
| destinationrule.networking.istio.io | istio-telemetry | Sets traffic policy for istio-telemetry |
| virtualservice.networking.istio.io | meshexpansion-vs-pilot | Sets route info for istio-pilot |
| virtualservice.networking.istio.io | meshexpansion-vs-citadel | Sets route info for istio-citadel |
Expose services running in the cluster to VMs
Every service in the cluster that needs to be accessed from the VM requires a service entry configuration in the cluster. The host used in the service entry should be of the form <name>.<namespace>.global
where name and namespace correspond to the service’s name and namespace respectively.
To demonstrate access from the VM to cluster services, configure the httpbin service in the cluster.
Deploy the httpbin service in the cluster:
$ kubectl create namespace bar
$ kubectl label namespace bar istio-injection=enabled
$ kubectl apply -n bar -f @samples/httpbin/httpbin.yaml@
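The service entry in the next step references a ${CLUSTER_GW_ADDR} variable that this guide does not set explicitly. Assuming it should hold the same ingress gateway address stored in GWIP earlier, you can set it the same way:
$ export CLUSTER_GW_ADDR=$(kubectl get -n istio-system service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')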
Create a service entry for the httpbin service in the cluster.

To allow services on the VM to access httpbin in the cluster, we need to create a service entry for it. The host name of the service entry should be of the form <name>.<namespace>.global, where name and namespace correspond to the remote service's name and namespace respectively.

For DNS resolution for services under the *.global domain, you need to assign these services an IP address. If the global services have actual VIPs, you can use those, but otherwise we suggest using IPs from the loopback range 127.0.0.0/8 that are not already allocated. These IPs are non-routable outside of a pod. In this example we'll use IPs in 127.255.0.0/16, which avoids conflicting with well-known IPs such as 127.0.0.1 (localhost). Application traffic for these IPs will be captured by the sidecar and routed to the appropriate remote service.

$ kubectl apply -n bar -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: httpbin.bar.forvms
spec:
  hosts:
  # must be of form name.namespace.global
  - httpbin.bar.global
  location: MESH_INTERNAL
  ports:
  - name: http1
    number: 8000
    protocol: http
  resolution: DNS
  addresses:
  # the IP address to which httpbin.bar.global will resolve;
  # must be unique for each service, within a given cluster.
  # This address need not be routable. Traffic for this IP will be captured
  # by the sidecar and routed appropriately.
  # This address will also be added into the VM's /etc/hosts
  - 127.255.0.3
  endpoints:
  # This is the routable address of the ingress gateway in the cluster.
  # Traffic from the VMs will be routed to this address.
  - address: ${CLUSTER_GW_ADDR}
    ports:
      http1: 15443 # Do not change this port value
EOF
The configurations above will result in all traffic from the VMs for httpbin.bar.global, on any port, being routed to the endpoint <IPofClusterIngressGateway>:15443 over a mutual TLS connection.

The gateway for port 15443 is a special SNI-aware Envoy, preconfigured and installed as part of the mesh expansion with gateways installation in the Customized installation of Istio on the cluster section. Traffic entering port 15443 will be load balanced among pods of the appropriate internal service of the target cluster (in this case, httpbin.bar in the cluster).
Send requests from VM to Kubernetes services
After setup, the machine can access services running in the Kubernetes cluster.
The following example shows accessing a service running in the Kubernetes cluster from a VM using an /etc/hosts entry, in this case for the httpbin service.
On the added VM, add the service name and address to its /etc/hosts file. You can then connect to the cluster service from the VM, as in the example below:
$ echo "127.255.0.3 httpbin.bar.global" | sudo tee -a /etc/hosts
$ curl -v httpbin.bar.global:8000
< HTTP/1.1 200 OK
< server: envoy
< content-type: text/html; charset=utf-8
< content-length: 9593
... html content ...
The server: envoy
header indicates that the sidecar intercepted the traffic.
Running services on the added VM
Set up an HTTP server on the VM instance to serve HTTP traffic on port 8888:
$ python -m SimpleHTTPServer 8888
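The SimpleHTTPServer module exists only in Python 2; if the VM only has Python 3, the equivalent command is:
$ python3 -m http.server 8888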
Determine the VM instance’s IP address.
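The service entry in the next step references a ${VM_IP} variable. One hedged way to capture the address on a typical Linux VM (takes the first address reported by hostname):
$ export VM_IP=$(hostname -I | awk '{print $1}')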
Configure a service entry to enable service discovery for the VM. You can add VM services to the mesh using a service entry. Service entries let you manually add additional services to Pilot’s abstract model of the mesh. Once VM services are part of the mesh’s abstract model, other services can find and direct traffic to them. Each service entry configuration contains the IP addresses, ports, and appropriate labels of all VMs exposing a particular service, for example:
$ kubectl -n ${SERVICE_NAMESPACE} apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: vmhttp
spec:
  hosts:
  - vmhttp.${SERVICE_NAMESPACE}.svc.cluster.local
  ports:
  - number: 8888
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: ${VM_IP}
    ports:
      http: 8888
    labels:
      app: vmhttp
      version: "v1"
EOF
The workloads in a Kubernetes cluster need a DNS mapping to resolve the domain names of VM services. To integrate the mapping with your own DNS system, use istioctl register, which creates a Kubernetes selector-less service, for example:
$ istioctl register -n ${SERVICE_NAMESPACE} vmhttp ${VM_IP} 8888
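You can verify that istioctl register created the selector-less service and its endpoint (a hedged sanity check):
$ kubectl get svc,endpoints vmhttp -n ${SERVICE_NAMESPACE}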
Deploy a pod running the sleep service in the Kubernetes cluster, and wait until it is ready:
$ kubectl apply -f @samples/sleep/sleep.yaml@
$ kubectl get pod
NAME                             READY   STATUS    RESTARTS   AGE
productpage-v1-8fcdcb496-xgkwg   2/2     Running   0          1d
sleep-88ddbcfdd-rm42k            2/2     Running   0          1s
...
Send a request from the sleep service on the pod to the VM's HTTP service:
$ kubectl exec -it sleep-88ddbcfdd-rm42k -c sleep -- curl vmhttp.${SERVICE_NAMESPACE}.svc.cluster.local:8888
If configured properly, you will see something similar to the output below.
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 3.2 Final//EN"><html>
<title>Directory listing for /</title>
<body>
<h2>Directory listing for /</h2>
<hr>
<ul>
<li><a href=".bashrc">.bashrc</a></li>
<li><a href=".ssh/">.ssh/</a></li>
...
</body>
Congratulations! You successfully configured a service running in a pod within the cluster to send traffic to a service running on a VM outside of the cluster and tested that the configuration worked.
Cleanup
Run the following commands to remove the expansion VM from the mesh’s abstract model.
$ istioctl deregister -n ${SERVICE_NAMESPACE} vmhttp ${VM_IP}
2019-02-21T22:12:22.023775Z info Deregistered service successfull
$ kubectl delete ServiceEntry vmhttp -n ${SERVICE_NAMESPACE}
serviceentry.networking.istio.io "vmhttp" deleted
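The steps above only remove the VM service. If you also created the httpbin demo and the vm namespace while following this guide, you can remove them as well (hedged; names match the earlier steps):
$ kubectl delete serviceentry httpbin.bar.forvms -n bar
$ kubectl delete namespace bar vm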