Upgrade Steps
Follow this flow to upgrade an existing Istio deployment, including both the control plane and the sidecar proxies, to a new release of Istio. The upgrade process may install new binaries and may change configuration and API schemas. The upgrade process may result in service downtime. To minimize downtime, ensure your Istio control plane components and your applications are highly available with multiple replicas. (Since multi-replica Citadel is still under development, Citadel should be deployed with a single replica.)
This flow assumes that the Istio components are installed and upgraded in the istio-system namespace.
Upgrade steps
Download the new Istio release and change directory to the new release directory.
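For example, one way to download and enter a release directory on a Linux machine is sketched below; the version 1.2.0 is only an illustrative placeholder, so substitute the release and platform you are actually upgrading to:
# Illustrative only: download and extract a specific release, then change into it.
$ curl -L https://github.com/istio/istio/releases/download/1.2.0/istio-1.2.0-linux.tar.gz | tar xz
$ cd istio-1.2.0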
Istio CNI upgrade
If you have installed or are planning to install Istio CNI, choose one of the following mutually exclusive options to check whether Istio CNI is already installed and to upgrade it:
You can use Kubernetes’ rolling update mechanism to upgrade the Istio CNI components. This is suitable for cases where kubectl apply was used to deploy Istio CNI.
To check whether istio-cni is installed, search for istio-cni-node pods and the namespace in which they are running (typically, kube-system or istio-system):
$ kubectl get pods -l k8s-app=istio-cni-node --all-namespaces
$ NAMESPACE=$(kubectl get pods -l k8s-app=istio-cni-node --all-namespaces --output='jsonpath={.items[0].metadata.namespace}')
If istio-cni is currently installed in a namespace other than kube-system (for example, istio-system), delete istio-cni:
$ helm template install/kubernetes/helm/istio-cni --name=istio-cni --namespace=$NAMESPACE | kubectl delete -f -
Install or upgrade istio-cni in the kube-system namespace:
$ helm template install/kubernetes/helm/istio-cni --name=istio-cni --namespace=kube-system | kubectl apply -f -
If you installed Istio CNI using Helm and Tiller, the preferred upgrade option is to let Helm take care of the upgrade.
Check whether istio-cni is installed, and in which namespace:
$ helm status istio-cni
(Re-)install or upgrade istio-cni depending on the status:
If istio-cni is not currently installed and you decide to install it:
$ helm install install/kubernetes/helm/istio-cni --name istio-cni --namespace kube-system
If istio-cni is currently installed in a namespace other than kube-system (for example, istio-system), delete it:
$ helm delete --purge istio-cni
Then install it again in the kube-system namespace:
$ helm install install/kubernetes/helm/istio-cni --name istio-cni --namespace kube-system
If istio-cni is currently installed in the kube-system namespace, upgrade it:
$ helm upgrade istio-cni install/kubernetes/helm/istio-cni --namespace kube-system
Control plane upgrade
The Istio control plane components include: Citadel, Ingress gateway, Egress gateway, Pilot, Galley, Policy, Telemetry and Sidecar injector. Choose one of the following two mutually exclusive options to update the control plane:
You can use Kubernetes’ rolling update mechanism to upgrade the control plane components. This is suitable for cases where kubectl apply was used to deploy the Istio components, including configurations generated using helm template.
Use kubectl apply to upgrade all of Istio’s CRDs. Wait a few seconds for the Kubernetes API server to receive the upgraded CRDs:
$ for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
Add Istio’s core components to a Kubernetes manifest file, for example:
$ helm template install/kubernetes/helm/istio --name istio \
    --namespace istio-system > $HOME/istio.yaml
If you want to enable global mutual TLS, set global.mtls.enabled and global.controlPlaneSecurityEnabled to true for the last command:
$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
    --set global.mtls.enabled=true --set global.controlPlaneSecurityEnabled=true > $HOME/istio-auth.yaml
If Istio CNI is installed, enable it by adding the --set istio_cni.enabled=true setting.
Upgrade the Istio control plane components via the manifest, for example:
$ kubectl apply -f $HOME/istio.yaml
or
$ kubectl apply -f $HOME/istio-auth.yaml
The rolling update process will upgrade all deployments and configmaps to the new version. After this process finishes, your Istio control plane should be updated to the new version. Your existing applications should continue to work without any change. If there is any critical issue with the new control plane, you can roll back the changes by applying the YAML files from the old version.
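For example, a rollback can be as simple as regenerating the manifest from the previous release’s charts and re-applying it (a sketch; <old-istio-release-dir> is a placeholder for wherever the previous release lives):
# Regenerate the manifest from the old release directory and re-apply it.
$ helm template <old-istio-release-dir>/install/kubernetes/helm/istio --name istio \
    --namespace istio-system > $HOME/istio-old.yaml
$ kubectl apply -f $HOME/istio-old.yaml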
If you installed Istio using Helm and Tiller, the preferred upgrade option is to let Helm take care of the upgrade.
Upgrade the istio-init chart to update all the Istio Custom Resource Definitions (CRDs):
$ helm upgrade --install --force istio-init install/kubernetes/helm/istio-init --namespace istio-system
Check that all the CRD creation jobs completed successfully to verify that the Kubernetes API server received all the CRDs:
$ kubectl get job --namespace istio-system | grep istio-init-crd
Upgrade the istio chart:
$ helm upgrade istio install/kubernetes/helm/istio --namespace istio-system
If Istio CNI is installed, enable it by adding the --set istio_cni.enabled=true setting.
Sidecar upgrade
After the control plane upgrade, the applications already running Istio will still be using an older sidecar. To upgrade the sidecar, you will need to re-inject it.
If you’re using automatic sidecar injection, you can upgrade the sidecar by doing a rolling update of all the pods, so that the new version of the sidecar is automatically re-injected. There are several ways to trigger a reload of all pods; for example, a sample bash script triggers the rolling update by patching the termination grace period, as sketched below.
$ ./upgrade-sidecar.sh $namespace
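A minimal sketch of what such a script might do is shown below. It assumes the namespace is passed as the first argument and bumps terminationGracePeriodSeconds on each deployment to force a rolling restart; the contents of the actual sample script may differ.
#!/usr/bin/env bash
# Hypothetical sketch: force a rolling update of every deployment in a namespace
# by changing terminationGracePeriodSeconds in the pod template, so pods are
# recreated and the new sidecar version is re-injected automatically.
set -euo pipefail

NAMESPACE="${1:?usage: $0 <namespace>}"

for deployment in $(kubectl get deployments -n "$NAMESPACE" -o jsonpath='{.items[*].metadata.name}'); do
  # Read the current grace period (defaults to 30 seconds if unset).
  current=$(kubectl get deployment "$deployment" -n "$NAMESPACE" \
    -o jsonpath='{.spec.template.spec.terminationGracePeriodSeconds}')
  new=$(( ${current:-30} + 1 ))
  # Changing a field in the pod template triggers a rolling update.
  kubectl patch deployment "$deployment" -n "$NAMESPACE" \
    -p "{\"spec\":{\"template\":{\"spec\":{\"terminationGracePeriodSeconds\":$new}}}}"
done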
If you’re using manual injection, you can upgrade the sidecar by executing:
$ kubectl apply -f <(istioctl kube-inject -f $ORIGINAL_DEPLOYMENT_YAML)
If the sidecar was previously injected with some customized inject configuration files, you will need to change the version tag in the configuration files to the new version and re-inject the sidecar as follows:
$ kubectl apply -f <(istioctl kube-inject \
--injectConfigFile inject-config.yaml \
--filename $ORIGINAL_DEPLOYMENT_YAML)
Migrating per-service mutual TLS enablement via annotations to authentication policy
If you use service annotations to override global mutual TLS enablement for a service, you need to replace them with an authentication policy and destination rules.
For example, if you install Istio with mutual TLS enabled, and disable it for service foo using a service annotation like the one below:
kind: Service
metadata:
  name: foo
  namespace: bar
  annotations:
    auth.istio.io/8000: NONE
You need to replace this with the following authentication policy and destination rule (deleting the old annotation is optional):
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "disable-mTLS-foo"
namespace: bar
spec:
targets:
- name: foo
ports:
- number: 8000
peers:
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "disable-mTLS-foo"
namespace: "bar"
spec:
host: "foo"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
portLevelSettings:
- port:
number: 8000
tls:
mode: DISABLE
If you already have a destination rule for foo, you must edit that rule instead of creating a new one. When creating a new destination rule, make sure to include the other settings, i.e. load balancer, connection pool, and outlier detection, if necessary; see the sketch after this note.
Finally, if foo doesn’t have a sidecar, you can skip the authentication policy, but you still need to add the destination rule.
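As a rough sketch, a destination rule that disables mutual TLS on port 8000 while carrying over pre-existing traffic policy settings could look like the following. The connectionPool and outlierDetection values here are illustrative assumptions, not recommendations; use whatever values your current rule already defines.
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "disable-mTLS-foo"
  namespace: "bar"
spec:
  host: "foo"
  trafficPolicy:
    # Hypothetical pre-existing settings carried over from your current rule:
    connectionPool:
      tcp:
        maxConnections: 100
    outlierDetection:
      consecutiveErrors: 5
    tls:
      mode: ISTIO_MUTUAL
    portLevelSettings:
    - port:
        number: 8000
      tls:
        mode: DISABLE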
If 8000 is the only port that service foo provides (or you want to disable mutual TLS for all ports), the policies can be simplified as:
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "disable-mTLS-foo"
namespace: bar
spec:
targets:
- name: foo
peers:
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "disable-mTLS-foo"
namespace: "bar"
spec:
host: "foo"
trafficPolicy:
tls:
mode: DISABLE
Migrating the mtls_excluded_services configuration to destination rules
If you installed Istio with mutual TLS enabled, and used the mesh configuration option mtls_excluded_services to disable mutual TLS when connecting to certain services (e.g., the Kubernetes API server), you need to replace this by adding a destination rule. For example:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: "kubernetes-master"
  namespace: "default"
spec:
  host: "kubernetes.default.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: DISABLE
Migrating from RbacConfig to ClusterRbacConfig
The RbacConfig is deprecated due to a bug. You must migrate to ClusterRbacConfig if you are currently using RbacConfig. The bug reduces the scope of the object to namespace-scoped in some cases. ClusterRbacConfig follows the exact same specification as RbacConfig but with the correct cluster-scoped implementation.
To automate the migration, we developed the convert_RbacConfig_to_ClusterRbacConfig.sh script. The script is included in the Istio installation package. Download and run the script with the following command:
$ curl -L https://raw.githubusercontent.com/istio/istio/release-1.2/tools/convert_RbacConfig_to_ClusterRbacConfig.sh | sh -
The script automates the following operations:
The script creates the cluster RBAC configuration with the same specification as the existing RBAC configuration, because Kubernetes doesn’t allow the value of kind: in a custom resource to change after it’s created.
For example, if you have the following RBAC configuration:
apiVersion: "rbac.istio.io/v1alpha1" kind: RbacConfig metadata: name: default spec: mode: 'ON_WITH_INCLUSION' inclusion: namespaces: ["default"]
The script creates the following cluster RBAC configuration:
apiVersion: "rbac.istio.io/v1alpha1" kind: ClusterRbacConfig metadata: name: default spec: mode: 'ON_WITH_INCLUSION' inclusion: namespaces: ["default"]
The script applies the configuration and waits for a few seconds to let the configuration take effect.
The script deletes the previous RBAC configuration custom resource after applying the cluster RBAC configuration successfully.
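If you prefer to migrate by hand, the operations above roughly correspond to the following commands. This is only a sketch, assuming a single RbacConfig named default with the example specification shown earlier; the actual script may behave differently.
# 1. Create a ClusterRbacConfig with the same spec as the existing RbacConfig
#    (kind cannot be changed in place, so a new resource is created).
$ cat <<EOF | kubectl apply -f -
apiVersion: "rbac.istio.io/v1alpha1"
kind: ClusterRbacConfig
metadata:
  name: default
spec:
  mode: 'ON_WITH_INCLUSION'
  inclusion:
    namespaces: ["default"]
EOF
# 2. Wait a few seconds for the configuration to take effect.
$ sleep 5
# 3. Delete the old, deprecated RbacConfig.
$ kubectl delete rbacconfig default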