Service Entry
ServiceEntry enables adding additional entries into Istio’s internal service registry, so that auto-discovered services in the mesh can access/route to these manually specified services. A service entry describes the properties of a service (DNS name, VIPs, ports, protocols, endpoints). These services could be external to the mesh (e.g., web APIs) or mesh-internal services that are not part of the platform’s service registry (e.g., a set of VMs talking to services in Kubernetes). In addition, the endpoints of a service entry can also be dynamically selected by using the workloadSelector field. These endpoints can be VM workloads declared using the WorkloadEntry object or Kubernetes pods. The ability to select both pods and VMs under a single service allows for migration of services from VMs to Kubernetes without having to change the existing DNS names associated with the services.
The following example declares a few external APIs accessed by internal applications over HTTPS. The sidecar inspects the SNI value in the ClientHello message to route to the appropriate external service.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-https
spec:
  hosts:
  - api.dropboxapi.com
  - www.googleapis.com
  - api.facebook.com
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
The following configuration adds a set of MongoDB instances running on unmanaged VMs to Istio’s registry, so that these services can be treated as any other service in the mesh. The associated DestinationRule is used to initiate mTLS connections to the database instances.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-mongocluster
spec:
  hosts:
  - mymongodb.somedomain # not used
  addresses:
  - 192.192.192.192/24 # VIPs
  ports:
  - number: 27018
    name: mongodb
    protocol: MONGO
  location: MESH_INTERNAL
  resolution: STATIC
  endpoints:
  - address: 2.2.2.2
  - address: 3.3.3.3
And the associated DestinationRule:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: mtls-mongocluster
spec:
  host: mymongodb.somedomain
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem
The following example uses a combination of service entry and TLS routing in a virtual service to steer traffic based on the SNI value to an internal egress firewall.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-redirect
spec:
  hosts:
  - wikipedia.org
  - "*.wikipedia.org"
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: NONE
And the associated VirtualService to route based on the SNI value.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: tls-routing
spec:
  hosts:
  - wikipedia.org
  - "*.wikipedia.org"
  tls:
  - match:
    - sniHosts:
      - wikipedia.org
      - "*.wikipedia.org"
    route:
    - destination:
        host: internal-egress-firewall.ns1.svc.cluster.local
The virtual service with TLS match serves to override the default SNI match. In the absence of a virtual service, traffic will be forwarded to the wikipedia domains.
The following example demonstrates the use of a dedicated egress gateway through which all external service traffic is forwarded. The ‘exportTo’ field allows for control over the visibility of a service declaration to other namespaces in the mesh. By default, a service is exported to all namespaces. The following example restricts the visibility to the current namespace, represented by “.”, so that it cannot be used by other namespaces.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-httpbin
  namespace: egress
spec:
  hosts:
  - example.com
  exportTo:
  - "."
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
Define a gateway to handle all egress traffic.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: istio-egressgateway
  namespace: istio-system
spec:
  selector:
    istio: egressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
And the associated VirtualService to route from the sidecar to the gateway service (istio-egressgateway.istio-system.svc.cluster.local), as well as from the gateway to the external service. Note that the virtual service is exported to all namespaces, enabling them to route traffic through the gateway to the external service. Forcing traffic to go through a managed middle proxy like this is a common practice.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: gateway-routing
  namespace: egress
spec:
  hosts:
  - example.com
  exportTo:
  - "*"
  gateways:
  - mesh
  - istio-egressgateway
  http:
  - match:
    - port: 80
      gateways:
      - mesh
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
  - match:
    - port: 80
      gateways:
      - istio-egressgateway
    route:
    - destination:
        host: example.com
The following example demonstrates the use of wildcards in the hosts for external services. If the connection has to be routed to the IP address requested by the application (i.e., the application resolves DNS and attempts to connect to a specific IP), the discovery mode must be set to NONE.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-wildcard-example
spec:
  hosts:
  - "*.bar.com"
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: NONE
The following example demonstrates a service that is available via a Unix Domain Socket on the host of the client. The resolution must be set to STATIC to use Unix address endpoints.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: unix-domain-socket-example
spec:
  hosts:
  - "example.unix.local"
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: unix:///var/run/example/socket
For HTTP-based services, it is possible to create a VirtualService backed by multiple DNS addressable endpoints. In such a scenario, the application can use the HTTP_PROXY environment variable to transparently reroute API calls for the VirtualService to a chosen backend. For example, the following configuration creates a non-existent external service called foo.bar.com backed by three domains: us.foo.bar.com:8080, uk.foo.bar.com:9080, and in.foo.bar.com:7080.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-dns
spec:
  hosts:
  - foo.bar.com
  location: MESH_EXTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  endpoints:
  - address: us.foo.bar.com
    ports:
      http: 8080
  - address: uk.foo.bar.com
    ports:
      http: 9080
  - address: in.foo.bar.com
    ports:
      http: 7080
With HTTP_PROXY=http://localhost/, calls from the application to http://foo.bar.com will be load balanced across the three domains specified above. In other words, a call to http://foo.bar.com/baz would be translated to http://uk.foo.bar.com/baz.
The following example illustrates the usage of a ServiceEntry containing a subject alternate name whose format conforms to the SPIFFE standard:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: httpbin
  namespace: httpbin-ns
spec:
  hosts:
  - example.com
  location: MESH_INTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 2.2.2.2
  - address: 3.3.3.3
  subjectAltNames:
  - "spiffe://cluster.local/ns/httpbin-ns/sa/httpbin-service-account"
The following example demonstrates the use of a ServiceEntry with a workloadSelector to handle the migration of a service details.bookinfo.com from VMs to Kubernetes. The service has two VM-based instances with sidecars as well as a set of Kubernetes pods managed by a standard deployment object. Consumers of this service in the mesh will be automatically load balanced across the VMs and Kubernetes.
The WorkloadEntry objects below declare the VMs backing the details.bookinfo.com service. Each VM has a sidecar installed and bootstrapped using the details service account. The sidecar receives HTTP traffic on port 80 (wrapped in Istio mutual TLS) and forwards it to the application on localhost on the same port.
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: details-vm-1
spec:
  serviceAccount: details
  address: 2.2.2.2
  labels:
    app: details
    instance-id: vm1
---
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: details-vm-2
spec:
  serviceAccount: details
  address: 3.3.3.3
  labels:
    app: details
    instance-id: vm2
Assuming there is also a Kubernetes deployment with pod labels app: details using the same service account details, the following service entry declares a service spanning both VMs and Kubernetes:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: details-svc
spec:
  hosts:
  - details.bookinfo.com
  location: MESH_INTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  workloadSelector:
    labels:
      app: details
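For reference, here is a minimal sketch of the kind of Kubernetes Deployment assumed above; the resource name, image, and replica count are illustrative and not part of the original example. The essential points are the app: details pod label matched by the workloadSelector and the details service account shared with the VM workloads.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details
spec:
  replicas: 2
  selector:
    matchLabels:
      app: details
  template:
    metadata:
      labels:
        app: details              # matched by the ServiceEntry workloadSelector
    spec:
      serviceAccountName: details # same service account as the VM workloads
      containers:
      - name: details
        image: example.com/details:latest  # illustrative image
        ports:
        - containerPort: 80       # matches the ServiceEntry service port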
ServiceEntry
ServiceEntry enables adding additional entries into Istio’s internal service registry.
ServiceEntry.Location
Location specifies whether the service is part of the Istio mesh or outside of it. Location determines the behavior of several features, such as service-to-service mTLS authentication, policy enforcement, etc. When communicating with services outside the mesh, Istio’s mTLS authentication is disabled, and policy enforcement is performed on the client-side as opposed to server-side.
Name | Description |
---|---|
MESH_EXTERNAL | Signifies that the service is external to the mesh. Typically used to indicate external services consumed through APIs. |
MESH_INTERNAL | Signifies that the service is part of the mesh. Typically used to indicate services added explicitly as part of expanding the service mesh to include unmanaged infrastructure (e.g., VMs added to a Kubernetes based service mesh). |
ServiceEntry.Resolution
Resolution determines how the proxy will resolve the IP addresses of the network endpoints associated with the service, so that it can route to one of them. The resolution mode specified here has no impact on how the application resolves the IP address associated with the service. The application may still have to use DNS to resolve the service to an IP so that the outbound traffic can be captured by the Proxy. Alternatively, for HTTP services, the application could directly communicate with the proxy (e.g., by setting HTTP_PROXY) to talk to these services.
Name | Description |
---|---|
NONE | Assume that incoming connections have already been resolved (to a specific destination IP address). Such connections are typically routed via the proxy using mechanisms such as IP table REDIRECT/eBPF. After performing any routing-related transformations, the proxy will forward the connection to the IP address to which the connection was bound. |
STATIC | Use the static IP addresses specified in endpoints (see below) as the backing instances associated with the service. |
DNS | Attempt to resolve the IP address by querying the ambient DNS, asynchronously. If no endpoints are specified, the proxy will resolve the DNS address specified in the hosts field, if wildcards are not used. If endpoints are specified, the DNS addresses specified in the endpoints will be resolved to determine the destination IP address. DNS resolution cannot be used with Unix domain socket endpoints. |
DNS_ROUND_ROBIN | Attempt to resolve the IP address by querying the ambient DNS, asynchronously. Unlike DNS, DNS_ROUND_ROBIN only uses the first IP address returned when a new connection needs to be initiated, without relying on the complete results of DNS resolution, and connections made to hosts will be retained even if DNS records change frequently, eliminating the need to drain connection pools and cycle connections. This is best suited for large web scale services that must be accessed via DNS. The proxy will resolve the DNS address specified in the hosts field, if wildcards are not used. DNS resolution cannot be used with Unix domain socket endpoints. |
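As an illustrative sketch (the host name and service entry name are hypothetical, not taken from the examples above), a service entry using DNS_ROUND_ROBIN resolution for an external service accessed over TLS might look like:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc-dns-round-robin
spec:
  hosts:
  - api.example.com            # hypothetical external host
  location: MESH_EXTERNAL
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS_ROUND_ROBIN  # use only the first resolved IP per new connection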