Istio Operator Install

Instead of manually installing, upgrading, and uninstalling Istio in a production environment, you can let the Istio operator manage the installation for you. This relieves you of the burden of managing different istioctl versions: simply update the operator custom resource (CR) and the operator controller will apply the corresponding configuration changes for you.

The operator uses the same IstioOperator API as the istioctl install instructions. In both cases, configuration is validated against a schema and the same correctness checks are performed.


  1. Perform any necessary platform-specific setup.

  2. Check the Requirements for Pods and Services.

  3. Install the istioctl command.

  4. Deploy the Istio operator:

    $ istioctl operator init

    This command runs the operator by creating the following resources in the istio-operator namespace:

    • The operator custom resource definition
    • The operator controller deployment
    • A service to access operator metrics
    • Necessary Istio operator RBAC rules

    You can configure which namespace the operator controller is installed in, the namespace(s) the operator watches, the installed Istio image sources and versions, and more. For example, you can pass one or more namespaces to watch using the --watchedNamespaces flag:

    $ istioctl operator init --watchedNamespaces=istio-namespace1,istio-namespace2

    See the istioctl operator init command reference for details.
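Before proceeding, you can confirm that the operator controller deployed successfully (assuming the default istio-operator namespace):

```
$ kubectl get pods -n istio-operator
```

The operator controller pod should reach the Running state before you apply any IstioOperator resources.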


To install the Istio demo configuration profile using the operator, run the following command:

$ kubectl create ns istio-system
$ kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: demo
EOF

The controller will detect the IstioOperator resource and then install the Istio components corresponding to the specified (demo) configuration.

The Istio control plane (istiod) will be installed in the istio-system namespace by default. To install it in a different location, specify the namespace using the values.global.istioNamespace field as follows:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: demo
  values:
    global:
      istioNamespace: istio-namespace1

You can confirm the Istio control plane services have been deployed with the following commands:

$ kubectl get svc -n istio-system
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                      AGE
istio-egressgateway    ClusterIP      <cluster-ip>   <none>        80/TCP,443/TCP,15443/TCP                                                     17s
istio-ingressgateway   LoadBalancer   <cluster-ip>   <pending>     15020:31077/TCP,80:30689/TCP,443:32419/TCP,31400:31411/TCP,15443:30176/TCP   17s
istiod                 ClusterIP      <cluster-ip>   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP,53/UDP,853/TCP                         30s
$ kubectl get pods -n istio-system
NAME                                   READY   STATUS    RESTARTS   AGE
istio-egressgateway-5444c68db8-9h6dz   1/1     Running   0          87s
istio-ingressgateway-5c68cb968-x7qv9   1/1     Running   0          87s
istiod-598984548d-wjq9j                1/1     Running   0          99s


Now, with the controller running, you can change the Istio configuration by editing or replacing the IstioOperator resource. The controller will detect the change and respond by updating the Istio installation correspondingly.

For example, you can switch the installation to the default profile with the following command:

$ kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
EOF

You can also enable or disable components and modify resource settings. For example, to enable the istio-egressgateway component and increase pilot memory requests:

$ kubectl apply -f - <<EOF
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
  components:
    pilot:
      k8s:
        resources:
          requests:
            memory: 3072Mi
    egressGateways:
    - name: istio-egressgateway
      enabled: true
EOF

You can observe the changes that the controller makes in the cluster in response to IstioOperator CR updates by checking the operator controller logs:

$ kubectl logs -f -n istio-operator $(kubectl get pods -n istio-operator -lname=istio-operator -o jsonpath='{.items[0].metadata.name}')

Refer to the IstioOperator API for the complete set of configuration settings.
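As one further illustration (a hypothetical tweak, not part of the steps above), the same CR can also carry mesh-wide settings via the spec.meshConfig field; for example, enabling Envoy access logging:

```
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane
spec:
  profile: default
  meshConfig:
    accessLogFile: /dev/stdout
```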

In-place Upgrade

Download and extract the istioctl corresponding to the version of Istio you wish to upgrade to. Reinstall the operator at the target Istio version:

$ <extracted-dir>/bin/istioctl operator init

You should see that the istio-operator pod has restarted and its version has changed to the target version:

$ kubectl get pods --namespace istio-operator \
  -o=jsonpath='{range .items[*]}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{"\n"}{end}'

After a minute or two, the Istio control plane components should also be restarted at the new version:

$ kubectl get pods --namespace istio-system \
  -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{"\n"}{end}'
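When reading the image list printed above, the version is the tag after the last colon in each image reference. A minimal sketch, assuming the image reference contains no registry port (the image name below is hypothetical):

```shell
# Extract the version tag from an image reference such as the ones
# printed by the command above (e.g. docker.io/istio/pilot:1.8.1).
# Assumption: no registry port, so everything after the last ':' is the tag.
image="docker.io/istio/pilot:1.8.1"
tag="${image##*:}"   # strip everything up to and including the last ':'
echo "$tag"
```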

Canary Upgrade

The process for canary upgrade is similar to the canary upgrade with istioctl.

For example, to upgrade the revision of Istio installed in the previous section, first verify that the IstioOperator CR named example-istiocontrolplane exists in your cluster:

$ kubectl get iop --all-namespaces
NAMESPACE      NAME                        REVISION   STATUS    AGE
istio-system   example-istiocontrolplane              HEALTHY   11m

Download and extract the istioctl corresponding to the version of Istio you wish to upgrade to. Then, run the following command to install the new target revision of the Istio control plane based on the in-cluster IstioOperator CR (here, we assume the target revision is 1.8.1):

$ istio-1.8.1/bin/istioctl operator init --revision 1-8-1
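Note that the revision name 1-8-1 is just the version with dots replaced by dashes: revision names end up in Kubernetes labels, which cannot contain dots. A small sketch of that mapping:

```shell
# Derive a revision name from an Istio version string.
# Revision names are used as Kubernetes label values, which disallow
# dots, so 1.8.1 becomes 1-8-1 (simple dot-to-dash mapping).
version="1.8.1"
revision=$(printf '%s' "$version" | tr '.' '-')
echo "$revision"
```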

Make a copy of the example-istiocontrolplane CR and save it in a file named example-istiocontrolplane-1-8-1.yaml. Change the name to example-istiocontrolplane-1-8-1 and add revision: 1-8-1 to the CR. Your updated IstioOperator CR should look something like this:

$ cat example-istiocontrolplane-1-8-1.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: example-istiocontrolplane-1-8-1
spec:
  revision: 1-8-1

Apply the updated IstioOperator CR to the cluster. After that, you will have two control plane deployments and services running side-by-side:

$ kubectl get pod -n istio-system -l app=istiod
NAME                            READY   STATUS    RESTARTS   AGE
istiod-1-8-1-597475f4f6-bgtcz   1/1     Running   0          64s
istiod-6ffcc65b96-bxzv5         1/1     Running   0          2m11s
$ kubectl get svc -n istio-system -l app=istiod
NAME           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                         AGE
istiod         ClusterIP   <cluster-ip>   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP,853/TCP   2m35s
istiod-1-8-1   ClusterIP   <cluster-ip>   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP           88s

To complete the upgrade, label the workload namespaces with istio.io/rev=1-8-1 and restart the workloads, as explained in the Data plane upgrade documentation.
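As a concrete sketch of that step (assuming the workloads live in the default namespace), the relabel and restart could look like:

```
$ kubectl label namespace default istio-injection- istio.io/rev=1-8-1
$ kubectl rollout restart deployment -n default
```

Removing the istio-injection label is required because that label takes precedence over the istio.io/rev label.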


If you used the operator to perform a canary upgrade of the control plane, you can uninstall the old control plane and keep the new one by deleting the old in-cluster IstioOperator CR, which will uninstall the old revision of Istio:

$ kubectl delete iop -n istio-system example-istiocontrolplane

Wait until Istio is uninstalled - this may take some time.

Then you can remove the Istio operator for the old revision by running the following command:

$ istioctl operator remove --revision <revision>

If you omit the revision flag, then all revisions of Istio operator will be removed.

Note that deleting the operator before the IstioOperator CR and corresponding Istio revision are fully removed may result in leftover Istio resources. To clean up anything not removed by the operator:

$ istioctl manifest generate | kubectl delete -f -
$ kubectl delete ns istio-system --grace-period=0 --force