# Platform-Specific Prerequisites
This document covers any platform- or environment-specific prerequisites for installing Istio in ambient mode.
## Platform
Certain Kubernetes environments require you to set various Istio configuration options to support them.
### Google Kubernetes Engine (GKE)
On GKE, Istio components with the `system-node-critical` `priorityClassName` can only be installed in namespaces that have a ResourceQuota defined. By default in GKE, only `kube-system` has a defined ResourceQuota for the `node-critical` class. The Istio CNI node agent and ztunnel both require the `node-critical` class, and so in GKE, both components must either:

- Be installed into `kube-system` (not `istio-system`)
- Be installed into another namespace (such as `istio-system`) in which a ResourceQuota has been manually created, for example:
```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gcp-critical-pods
  namespace: istio-system
spec:
  hard:
    pods: 1000
  scopeSelector:
    matchExpressions:
    - operator: In
      scopeName: PriorityClass
      values:
      - system-node-critical
```
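One way to check that the quota is in place before installing the Istio components is with `kubectl` (a sketch; the filename is an assumption, and the quota name and namespace follow the example above):

```shell
# Apply the ResourceQuota before installing the Istio CNI node agent and ztunnel.
# Assumes the manifest above was saved as gcp-critical-pods.yaml.
$ kubectl apply -f gcp-critical-pods.yaml

# Verify the quota exists; once the Istio DaemonSets are running, its usage
# column should reflect the node-critical pods it admits.
$ kubectl get resourcequota gcp-critical-pods -n istio-system
```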
### k3d
If you are using k3d with the default Flannel CNI, you must append some values to your installation command, as k3d uses nonstandard locations for CNI configuration and binaries.
Create a cluster with Traefik disabled so it doesn’t conflict with Istio’s ingress gateways:
```shell
$ k3d cluster create --api-port 6550 -p '9080:80@loadbalancer' -p '9443:443@loadbalancer' --agents 2 --k3s-arg '--disable=traefik@server:*'
```
Set the `cniConfDir` and `cniBinDir` values when installing Istio. For example:

```shell
$ helm install istio-cni istio/cni -n istio-system --set profile=ambient --wait --set cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set cniBinDir=/bin
$ istioctl install --set profile=ambient --set values.cni.cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set values.cni.cniBinDir=/bin
```
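After installation, one way to confirm the CNI node agent is running and wrote its config to the nonstandard k3s path is a quick check like the following (a sketch; the `k8s-app` label and the `k3d-k3s-default-agent-0` container name are assumptions based on Istio's default DaemonSet labeling and k3d's default cluster name):

```shell
# Check that the Istio CNI node agent pods are up (label is an assumption).
$ kubectl get pods -n istio-system -l k8s-app=istio-cni-node

# Inside a k3d node container, confirm an Istio CNI config file appeared in
# the directory passed via cniConfDir.
$ docker exec k3d-k3s-default-agent-0 ls /var/lib/rancher/k3s/agent/etc/cni/net.d
```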
### K3s
When using K3s and one of its bundled CNIs, you must append some values to your installation command, as K3s uses nonstandard locations for CNI configuration and binaries. These nonstandard locations may also be overridden, according to the K3s documentation. If you are using K3s with a custom, non-bundled CNI, you must use the correct paths for that CNI, e.g. `/etc/cni/net.d` - see the K3s documentation for details. For example:

```shell
$ helm install istio-cni istio/cni -n istio-system --set profile=ambient --wait --set cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set cniBinDir=/var/lib/rancher/k3s/data/current/bin/
$ istioctl install --set profile=ambient --set values.cni.cniConfDir=/var/lib/rancher/k3s/agent/etc/cni/net.d --set values.cni.cniBinDir=/var/lib/rancher/k3s/data/current/bin/
```
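Before installing, it can help to confirm on the K3s node that the bundled-CNI paths from the commands above actually exist (a sketch; these are the default locations and will differ if you have overridden them per the K3s documentation):

```shell
# On the K3s node: the CNI config directory used by the bundled CNIs.
$ ls /var/lib/rancher/k3s/agent/etc/cni/net.d

# The CNI plugin binaries directory for the current K3s data version.
$ ls /var/lib/rancher/k3s/data/current/bin/
```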
### MicroK8s
If you are installing Istio on MicroK8s, you must append some values to your installation command, as MicroK8s uses nonstandard locations for CNI configuration and binaries. For example:

```shell
$ helm install istio-cni istio/cni -n istio-system --set profile=ambient --wait --set cniConfDir=/var/snap/microk8s/current/args/cni-network --set cniBinDir=/var/snap/microk8s/current/opt/cni/bin
$ istioctl install --set profile=ambient --set values.cni.cniConfDir=/var/snap/microk8s/current/args/cni-network --set values.cni.cniBinDir=/var/snap/microk8s/current/opt/cni/bin
```
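A quick sanity check on the MicroK8s host can confirm the snap-managed locations used above exist before you install (a sketch; paths match the commands above and assume a standard snap install):

```shell
# CNI config directory managed by the MicroK8s snap.
$ ls /var/snap/microk8s/current/args/cni-network

# CNI plugin binaries directory managed by the MicroK8s snap.
$ ls /var/snap/microk8s/current/opt/cni/bin
```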
### minikube
If you are using minikube with the Docker driver, you must append some values to your installation command so that the Istio CNI node agent can correctly manage and capture pods on the node. For example:
```shell
$ helm install istio-cni istio/cni -n istio-system --set profile=ambient --wait --set cniNetnsDir="/var/run/docker/netns"
$ istioctl install --set profile=ambient --set cni.cniNetnsDir="/var/run/docker/netns"
```
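To see why this override is needed, you can look inside the minikube node container, where pod network namespaces live under the Docker-specific path used above (a sketch; the container name `minikube` is the default for a single-node profile):

```shell
# With the Docker driver, the node itself is a container named "minikube".
# Pod network namespaces should be listed under /var/run/docker/netns,
# which is the path passed to cniNetnsDir.
$ docker exec minikube ls /var/run/docker/netns
```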
### Red Hat OpenShift

OpenShift requires that the `ztunnel` and `istio-cni` components are installed in the `kube-system` namespace. An `openshift-ambient` installation profile is provided which will make this change for you. Replace instances of `profile=ambient` with `profile=openshift-ambient` in the installation commands. For example:

```shell
$ helm install istio-cni istio/cni -n istio-system --set profile=openshift-ambient --wait
$ istioctl install --set profile=openshift-ambient --skip-confirmation
```
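After installation, you can check that the profile actually placed the components in `kube-system` (a sketch; the `app` and `k8s-app` labels are assumptions based on Istio's default workload labeling):

```shell
# With openshift-ambient, ztunnel and the CNI node agent should be running in
# kube-system rather than istio-system.
$ kubectl get pods -n kube-system -l app=ztunnel
$ kubectl get pods -n kube-system -l k8s-app=istio-cni-node
```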
## CNI plugins
The following configurations apply to all platforms when certain CNI plugins are used:
### Cilium
- Cilium currently defaults to proactively deleting other CNI plugins and their config, and must be configured with `cni.exclusive = false` to properly support chaining. See the Cilium documentation for more details.
- Cilium's BPF masquerading is currently disabled by default, and has issues with Istio's use of link-local IPs for Kubernetes health checking. Enabling BPF masquerading via `bpf.masquerade=true` is not currently supported, and results in non-functional pod health checks in Istio ambient mode. Cilium's default iptables masquerading implementation should continue to function correctly.
- Due to how Cilium manages node identity and internally allow-lists node-level health probes to pods, applying a default-DENY `NetworkPolicy` in a Cilium CNI install underlying Istio in ambient mode will cause `kubelet` health probes (which are by default exempted from NetworkPolicy enforcement by Cilium) to be blocked. This can be resolved by applying the following `CiliumClusterwideNetworkPolicy`:

```yaml
apiVersion: "cilium.io/v2"
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: "allow-ambient-hostprobes"
spec:
  description: "Allows SNAT-ed kubelet health check probes into ambient pods"
  endpointSelector: {}
  ingress:
  - fromCIDR:
    - "169.254.7.127/32"
```

Please see issue #49277 and the `CiliumClusterwideNetworkPolicy` documentation for more details.
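The policy can be applied and checked with `kubectl` (a sketch; the filename is an assumption, and the resource name follows the manifest above):

```shell
# Assumes the CiliumClusterwideNetworkPolicy above was saved as
# allow-ambient-hostprobes.yaml.
$ kubectl apply -f allow-ambient-hostprobes.yaml

# Confirm the cluster-wide policy is present; kubelet health probes to ambient
# pods should now succeed even under a default-DENY NetworkPolicy.
$ kubectl get ciliumclusterwidenetworkpolicy allow-ambient-hostprobes
```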