Destination Rule
DestinationRule defines policies that apply to traffic intended for a service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool. For example, a simple load balancing policy for the ratings service would look as follows:
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
Version-specific policies can be specified by defining a named subset and overriding the settings specified at the service level. The following rule uses a round robin load balancing policy for all traffic going to a subset named testversion that is composed of endpoints (e.g., pods) with labels (version:v3).
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
  subsets:
  - name: testversion
    labels:
      version: v3
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
Note: Policies specified for subsets will not take effect until a route rule explicitly sends traffic to this subset.
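As a hedged illustration (this VirtualService is an assumption added for context, not part of the original reference), a route rule that activates the testversion subset might look as follows:
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: bookinfo-ratings  # hypothetical name for this illustration
spec:
  hosts:
  - ratings.prod.svc.cluster.local
  http:
  - route:
    - destination:
        host: ratings.prod.svc.cluster.local
        subset: testversion  # matches the subset defined in the DestinationRule above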
Traffic policies can be customized for specific ports as well. The following rule uses the least request load balancing policy for all traffic to port 80, while using a round robin load balancing setting for traffic to port 9080.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: bookinfo-ratings-port
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy: # Apply to all ports
    portLevelSettings:
    - port:
        number: 80
      loadBalancer:
        simple: LEAST_REQUEST
    - port:
        number: 9080
      loadBalancer:
        simple: ROUND_ROBIN
Destination Rules can be customized to specific workloads as well. The following example shows how a destination rule can be applied to a specific workload using the workloadSelector configuration.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: configure-client-mtls-dr-with-workloadselector
spec:
  host: example.com
  workloadSelector:
    matchLabels:
      app: ratings
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    portLevelSettings:
    - port:
        number: 31443
      tls:
        credentialName: client-credential
        mode: MUTUAL
DestinationRule
DestinationRule defines policies that apply to traffic intended for a service after routing has occurred.
TrafficPolicy
Traffic policies to apply for a specific destination, across all destination ports. See DestinationRule for examples.
PortTrafficPolicy
Traffic policies that apply to specific ports of the service.
TunnelSettings
ProxyProtocol
VERSION
| Name | Description |
|---|---|
| V1 | PROXY protocol version 1. Human readable format. |
| V2 | PROXY protocol version 2. Binary format. |
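The reference shows no example for these two messages; the sketch below assumes TrafficPolicy exposes a proxyProtocol field that takes one of the versions above, with an illustrative rule name and host:
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: proxy-protocol-example  # hypothetical name
spec:
  host: example.com  # illustrative host
  trafficPolicy:
    proxyProtocol:
      version: V1  # send human-readable PROXY protocol v1 on upstream connections (assumed field)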
Subset
A subset of endpoints of a service. Subsets can be used for scenarios like A/B testing, or routing to a specific version of a service. Refer to VirtualService documentation for examples of using subsets in these scenarios. In addition, traffic policies defined at the service-level can be overridden at a subset-level. The following rule uses a round robin load balancing policy for all traffic going to a subset named testversion that is composed of endpoints (e.g., pods) with labels (version:v3).
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
  subsets:
  - name: testversion
    labels:
      version: v3
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
Note: Policies specified for subsets will not take effect until a route rule explicitly sends traffic to this subset.
One or more labels are typically required to identify the subset destination. However, when the corresponding DestinationRule represents a host that supports multiple SNI hosts (e.g., an egress gateway), a subset without labels may be meaningful. In this case a traffic policy with ClientTLSSettings can be used to identify a specific SNI host corresponding to the named subset.
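As a hedged sketch of that label-free case (all names here are assumptions), a subset could be distinguished purely by the SNI host in its client TLS settings:
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: egress-sni-example  # hypothetical name
spec:
  host: my-egress-gateway.istio-system.svc.cluster.local  # illustrative egress gateway host
  subsets:
  - name: foo  # no labels: the subset is identified by SNI instead
    trafficPolicy:
      tls:
        mode: SIMPLE
        sni: foo.example.com  # SNI host corresponding to this named subset (illustrative)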
LoadBalancerSettings
Load balancing policies to apply for a specific destination. See Envoy’s load balancing documentation for more details.
For example, the following rule uses a round robin load balancing policy for all traffic going to the ratings service.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
name: bookinfo-ratings
spec:
host: ratings.prod.svc.cluster.local
trafficPolicy:
loadBalancer:
simple: ROUND_ROBIN
The following example sets up sticky sessions for the same ratings service using a consistent hash-based load balancer, with the user cookie as the hash key.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: user
          ttl: 0s
ConsistentHashLB
Consistent Hash-based load balancing can be used to provide soft session affinity based on HTTP headers, cookies or other properties. The affinity to a particular destination host may be lost when one or more hosts are added/removed from the destination service.
Note: consistent hashing is less reliable at maintaining affinity than common “sticky sessions” implementations, which often encode a specific destination in a cookie, ensuring affinity is maintained as long as the backend remains. With consistent hash, the guarantees are weaker; any host addition or removal can break affinity for roughly 1/N of requests, where N is the number of backends.
Warning: consistent hashing depends on each proxy having a consistent view of endpoints. This is not the case when locality load balancing is enabled. Locality load balancing and consistent hash will only work together when all proxies are in the same locality, or a high-level load balancer handles locality affinity.
RingHash
MagLev
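Neither variant has an example in this section; the following hedged sketch hashes on a request header (the header name is an assumption) and picks the ring hash implementation with an explicit minimum ring size:
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: ratings-ringhash  # hypothetical name
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpHeaderName: x-user-id  # illustrative hash key header
        ringHash:
          minimumRingSize: 1024  # larger rings give finer load spread at the cost of memory
Replacing the ringHash block with a maglev block (which takes a tableSize field) would select the Maglev variant instead.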
HTTPCookie
Describes an HTTP cookie that will be used as the hash key for the Consistent Hash load balancer.
SimpleLB
Standard load balancing algorithms that require no tuning.
| Name | Description |
|---|---|
| UNSPECIFIED | No load balancing algorithm has been specified by the user. Istio will select an appropriate default. |
| RANDOM | The random load balancer selects a random healthy host. The random load balancer generally performs better than round robin if no health checking policy is configured. |
| PASSTHROUGH | This option will forward the connection to the original IP address requested by the caller without doing any form of load balancing. This option must be used with care. It is meant for advanced use cases. Refer to Original Destination load balancer in Envoy for further details. |
| ROUND_ROBIN | A basic round robin load balancing policy. This is generally unsafe for many scenarios (e.g. when endpoint weighting is used) as it can overburden endpoints. In general, prefer to use LEAST_REQUEST as a drop-in replacement for ROUND_ROBIN. |
| LEAST_REQUEST | The least request load balancer spreads load across endpoints, favoring endpoints with the least outstanding requests. This is generally safer and outperforms ROUND_ROBIN in nearly all cases. Prefer to use LEAST_REQUEST as a drop-in replacement for ROUND_ROBIN. |
| LEAST_CONN | Deprecated. Use LEAST_REQUEST instead. |
WarmupConfiguration
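The reference gives no example here; the following is a loose sketch under the assumption that warmup configuration attaches to the load balancer settings via a warmup field with a duration. Field names should be verified against the target Istio version:
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews-warmup  # hypothetical name
spec:
  host: reviews.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST
      warmup:           # assumed attachment point for WarmupConfiguration
        duration: 300s  # assumed field: ramp traffic to new endpoints over 5 minutes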
ConnectionPoolSettings
Connection pool settings for an upstream host. The settings apply to each individual host in the upstream service. See Envoy’s circuit breaker for more details. Connection pool settings can be applied at the TCP level as well as at HTTP level.
For example, the following rule sets a limit of 100 connections to the redis service called myredissrv, with a connect timeout of 30ms and TCP keepalive enabled:
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: bookinfo-redis
spec:
  host: myredissrv.prod.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
        connectTimeout: 30ms
        tcpKeepalive:
          time: 7200s
          interval: 75s
TCPSettings
Settings common to both HTTP and TCP upstream connections.
TcpKeepalive
TCP keepalive.
HTTPSettings
Settings applicable to HTTP1.1/HTTP2/GRPC connections.
H2UpgradePolicy
Policy for upgrading http1.1 connections to http2.
| Name | Description |
|---|---|
| DEFAULT | Use the global default. |
| DO_NOT_UPGRADE | Do not upgrade the connection to http2. This opt-out option overrides the default. |
| UPGRADE | Upgrade the connection to http2. This opt-in option overrides the default. |
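As a brief sketch of where this policy lives (rule name and host are illustrative), the upgrade setting hangs off the HTTP connection pool settings:
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews-h2-upgrade  # hypothetical name
spec:
  host: reviews.prod.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        h2UpgradePolicy: UPGRADE  # opt this destination into http1.1 -> http2 upgrades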
OutlierDetection
A circuit breaker implementation that tracks the status of each individual host in the upstream service. Applicable to both HTTP and TCP services. For HTTP services, hosts that continually return 5xx errors for API calls are ejected from the pool for a pre-defined period of time. For TCP services, connection timeouts or connection failures to a given host count as an error when measuring the consecutive errors metric. See Envoy’s outlier detection for more details.
The following rule sets a connection pool size of 100 HTTP1 connections with no more than 10 req/connection to the “reviews” service. In addition, it sets a limit of 1000 concurrent HTTP2 requests and configures upstream hosts to be scanned every 5 mins so that any host that fails 7 consecutive times with a 502, 503, or 504 error code will be ejected for 15 minutes.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: reviews-cb-policy
spec:
  host: reviews.prod.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http2MaxRequests: 1000
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutive5xxErrors: 7
      interval: 5m
      baseEjectionTime: 15m
ClientTLSSettings
SSL/TLS related settings for upstream connections. See Envoy’s TLS context for more details. These settings are common to both HTTP and TCP upstreams.
For example, the following rule configures a client to use mutual TLS for connections to an upstream database cluster.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: db-mtls
spec:
  host: mydbserver.prod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/myclientcert.pem
      privateKey: /etc/certs/client_private_key.pem
      caCertificates: /etc/certs/rootcacerts.pem
The following rule configures a client to use TLS when talking to a foreign service whose domain matches *.foo.com.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: tls-foo
spec:
  host: "*.foo.com"
  trafficPolicy:
    tls:
      mode: SIMPLE
The following rule configures a client to use Istio mutual TLS when talking to the ratings service.
apiVersion: networking.istio.io/v1
kind: DestinationRule
metadata:
  name: ratings-istio-mtls
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
TLSmode
TLS connection mode
| Name | Description |
|---|---|
| DISABLE | Do not setup a TLS connection to the upstream endpoint. |
| SIMPLE | Originate a TLS connection to the upstream endpoint. |
| MUTUAL | Secure connections to the upstream using mutual TLS by presenting client certificates for authentication. |
| ISTIO_MUTUAL | Secure connections to the upstream using mutual TLS by presenting client certificates for authentication. Compared to Mutual mode, this mode uses certificates generated automatically by Istio for mTLS authentication. When this mode is used, all other fields in ClientTLSSettings should be empty. |
LocalityLoadBalancerSetting
Locality-weighted load balancing allows administrators to control the distribution of traffic to endpoints based on the localities of where the traffic originates and where it will terminate. These localities are specified using arbitrary labels that designate a hierarchy of localities in {region}/{zone}/{sub-zone} form. For additional detail refer to Locality Weight. The following example shows how to set up locality weights mesh-wide.
Given a mesh with workloads and their service deployed to “us-west/zone1/*” and “us-west/zone2/*”, this example specifies that when traffic accessing a service originates from workloads in “us-west/zone1/*”, 80% of the traffic will be sent to endpoints in “us-west/zone1/*” (i.e., the same zone), and the remaining 20% will go to endpoints in “us-west/zone2/*”. This setup is intended to favor routing traffic to endpoints in the same locality. A similar setting is specified for traffic originating in “us-west/zone2/*”.
distribute:
- from: us-west/zone1/*
  to:
    "us-west/zone1/*": 80
    "us-west/zone2/*": 20
- from: us-west/zone2/*
  to:
    "us-west/zone1/*": 20
    "us-west/zone2/*": 80
If the goal of the operator is not to distribute load across zones and regions, but rather to restrict the regionality of failover to meet other operational requirements, the operator can set a ‘failover’ policy instead of a ‘distribute’ policy.
The following example sets up a locality failover policy for regions. Assume a service resides in zones within us-east, us-west, and eu-west. This example specifies that when endpoints within us-east become unhealthy, traffic should fail over to endpoints in any zone or sub-zone within eu-west; similarly, us-west should fail over to us-east.
failover:
- from: us-east
  to: eu-west
- from: us-west
  to: us-east
Distribute
Describes how traffic originating in the ‘from’ zone or sub-zone is distributed over a set of ‘to’ zones. Syntax for specifying a zone is {region}/{zone}/{sub-zone} and terminal wildcards are allowed on any segment of the specification. Examples:
- * - matches all localities
- us-west/* - all zones and sub-zones within the us-west region
- us-west/zone-1/* - all sub-zones within us-west/zone-1
Failover
Specify the traffic failover policy across regions. Since zone and sub-zone failover is supported by default, this only needs to be specified for regions when the operator needs to constrain traffic failover so that the default behavior of failing over to any endpoint globally does not apply. This is useful when failing over traffic across regions would not improve service health, or may need to be restricted for other reasons like regulatory controls.
UInt32Value
Wrapper message for uint32. The JSON representation for UInt32Value is JSON number.