Demystifying Istio's Sidecar Injection Model
Demystify how Istio manages to plug its data-plane components into an existing deployment.
A simple overview of an Istio service-mesh architecture always starts with describing the control-plane and data-plane.
It is important to understand that sidecar injection into the application pods happens automatically, though manual injection is also possible. Traffic to and from the application services is directed through these sidecars without developers needing to worry about it. Once the applications are connected to the Istio service mesh, developers can start reaping the benefits of all that the service mesh has to offer. However, how does the data-plane plumbing happen and what is really required to make it work seamlessly? In this post, we will deep-dive into the specifics of the sidecar injection models to gain a clear understanding of how sidecar injection works.
Sidecar injection
In simple terms, sidecar injection is adding the configuration of additional containers to the pod template. The added containers needed for the Istio service mesh are:
istio-init
This init container is used to set up the iptables rules so that inbound/outbound traffic will go through the sidecar proxy. An init container differs from an app container in the following ways:
- It runs before an app container is started and it always runs to completion.
- If there are multiple init containers, each must complete successfully before the next one is started.
So, you can see how this type of container is perfect for a setup or initialization job that does not need to be part of the actual application container. In this case, istio-init does just that and sets up the iptables rules.
istio-proxy
This is the actual sidecar proxy (based on Envoy).
Manual injection
In the manual injection method, you can use istioctl to modify the pod template and add the configuration of the two containers previously mentioned. For both manual and automatic injection, Istio takes the configuration from the istio-sidecar-injector configuration map (configmap) and the mesh's istio configmap.
Let's look at the configuration of the istio-sidecar-injector configmap to get an idea of what is actually going on.
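One way to dump that configuration is sketched below; the configmap name and the config data key shown here match older Istio releases and may differ in newer versions:

```bash
# Print the sidecar injection template and values used by the injector
kubectl -n istio-system get configmap istio-sidecar-injector \
  -o jsonpath='{.data.config}'
```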
As you can see, the configmap contains the configuration for both the istio-init init container and the istio-proxy proxy container. The configuration includes the name of the container image and arguments like the interception mode, capabilities, etc.
From a security point of view, it is important to note that istio-init requires NET_ADMIN capabilities to modify iptables within the pod's namespace, and so does istio-proxy if configured in TPROXY mode. As this is restricted to a pod's namespace, there should be no problem. However, I have noticed that recent OpenShift versions may have some issues with it and a workaround is needed. One such option is mentioned at the end of this post.
To modify the current pod template for sidecar injection, you can:
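For example, a one-liner along the following lines injects the sidecar configuration into demo-red.yaml (the example deployment used below) and applies the result in one step:

```bash
# Inject the sidecar containers into the pod template and apply it directly
istioctl kube-inject -f demo-red.yaml | kubectl apply -f -
```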
OR
To use modified configmaps or local configmaps, follow these steps (the commands are sketched after the list):

- Create inject-config.yaml and mesh-config.yaml from the configmaps
- Modify the existing pod template, in my case, demo-red.yaml
- Apply the demo-red-injected.yaml
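A minimal sketch of these three steps follows. The configmap names, data keys, and the istioctl flags (--injectConfigFile, --meshConfigFile) match older Istio releases and may differ in newer versions:

```bash
# 1. Create inject-config.yaml and mesh-config.yaml from the configmaps
kubectl -n istio-system get configmap istio-sidecar-injector \
  -o jsonpath='{.data.config}' > inject-config.yaml
kubectl -n istio-system get configmap istio \
  -o jsonpath='{.data.mesh}' > mesh-config.yaml

# 2. Generate an injected version of the existing pod template (demo-red.yaml)
istioctl kube-inject \
  --injectConfigFile inject-config.yaml \
  --meshConfigFile mesh-config.yaml \
  --filename demo-red.yaml \
  --output demo-red-injected.yaml

# 3. Apply the injected template
kubectl apply -f demo-red-injected.yaml
```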
As seen above, we create a new template using the sidecar-injector and the mesh configuration, and then apply that new template using kubectl. If we look at the injected YAML file, it has the configuration of the Istio-specific containers we discussed above. Once we apply the injected YAML file, we see two containers running: one of them is the actual application container, and the other is the istio-proxy sidecar.
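One way to confirm this is to list the containers that ended up in the pod spec; the pod name below is a placeholder for whatever name your deployment generated:

```bash
# Both container names should be printed: the app container and istio-proxy
kubectl get pod <demo-red-pod-name> \
  -o jsonpath='{.spec.containers[*].name}'
```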
The count is not 3 because the istio-init container is an init-type container that exits after doing what it is supposed to do, which is setting up the iptables rules within the pod. To confirm that the init container exited, let's look at the output of kubectl describe:
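Again, the pod name is a placeholder for the pod created by your deployment:

```bash
# The Init Containers section of the output shows the state of istio-init
kubectl describe pod <demo-red-pod-name>
```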
As seen in the output, the State of the istio-init container is Terminated, with the Reason being Completed. The only two containers running are the main application demo-red container and the istio-proxy container.
Automatic injection
Most of the time, you don't want to manually inject a sidecar every time you deploy an application using the istioctl command, but would prefer that Istio automatically inject the sidecar into your pod. This is the recommended approach, and for it to work, all you need to do is label the namespace where you are deploying the app with istio-injection=enabled.
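For the istio-dev namespace used in the next example, that looks something like this:

```bash
# Label the namespace so the webhook injects sidecars into new pods
kubectl label namespace istio-dev istio-injection=enabled

# Verify which namespaces carry the label
kubectl get namespace -L istio-injection
```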
Once labeled, Istio injects the sidecar automatically for any pod you deploy in that namespace. In the following example, the sidecar gets automatically injected into the pods deployed in the istio-dev namespace.
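For instance, deploying the same demo-red.yaml, this time without any manual injection, into the labeled namespace should produce pods whose READY column reads 2/2:

```bash
kubectl -n istio-dev apply -f demo-red.yaml
kubectl -n istio-dev get pods
```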
But how does this work? To get to the bottom of this, we need to understand Kubernetes admission controllers.
From the Kubernetes documentation: an admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but after the request is authenticated and authorized.
For automatic sidecar injection, Istio relies on a mutating admission webhook. Let's look at the details of the istio-sidecar-injector mutating webhook configuration.
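You can dump it along these lines; the webhook configuration name shown here matches older Istio releases:

```bash
# Show the mutating webhook configuration used for sidecar injection
kubectl get mutatingwebhookconfiguration istio-sidecar-injector -o yaml
```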
This is where you can see the webhook namespaceSelector label that is matched for sidecar injection, istio-injection: enabled. In this case, you also see the operations and resources for which this is done when pods are created. When an apiserver receives a request that matches one of the rules, the apiserver sends an admission review request to the webhook service specified in the clientConfig: configuration with the name: istio-sidecar-injector key-value pair. We should be able to see that this service is running in the istio-system namespace.
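A quick check, assuming the service name used by older Istio releases:

```bash
# The injection webhook is served by this service in istio-system
kubectl -n istio-system get svc istio-sidecar-injector
```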
This configuration ultimately does pretty much the same as what we saw in manual injection, just that it happens automatically during pod creation, so you won't see the change in the deployment. You need to use kubectl describe to see the sidecar proxy and the init container.
The automatic sidecar injection not only depends on the namespaceSelector mechanism of the webhook, but also on the default injection policy and the per-pod override annotation.
If you look at the istio-sidecar-injector ConfigMap again, it has the default injection policy defined. In our case, it is enabled by default.
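One way to check just the policy value, assuming the same configmap layout as above:

```bash
# The injection policy (enabled/disabled) sits near the top of the injector config
kubectl -n istio-system get configmap istio-sidecar-injector \
  -o jsonpath='{.data.config}' | grep "policy:"
```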
You can also use the annotation sidecar.istio.io/inject in the pod template to override the default policy. The following example disables the automatic injection of the sidecar for the pods in a Deployment.
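The original manifest is not reproduced here, but a minimal sketch of such a Deployment (the name, labels, and image are illustrative) looks like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ignored-app                      # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ignored-app
  template:
    metadata:
      labels:
        app: ignored-app
      annotations:
        sidecar.istio.io/inject: "false" # opt these pods out of automatic injection
    spec:
      containers:
      - name: ignored-app
        image: nginx                     # placeholder image
```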
This example shows that there are many variables, based on whether the automatic sidecar injection is controlled in your namespace, ConfigMap, or pod, and they are:
- webhook namespaceSelector (istio-injection: enabled)
- default policy (configured in the istio-sidecar-injector ConfigMap)
- per-pod override annotation (sidecar.istio.io/inject)
The injection status table shows a clear picture of the final injection status based on the value of the above variables.
Traffic flow from application container to sidecar proxy
Now that we are clear about how a sidecar container and an init container are injected into an application manifest, how does the sidecar proxy grab the inbound and outbound traffic to and from the container? We did briefly mention that it is done by setting up the iptables rules within the pod namespace, which in turn is done by the istio-init container. Now, it is time to verify what actually gets updated within the namespace.
Let's get into the application pod namespace we deployed in the previous section and look at the configured iptables. I am going to show an example using nsenter. Alternatively, you can enter the container in privileged mode to see the same information. For folks without access to the nodes, using exec to get into the sidecar and running iptables is more practical.
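A sketch of the nsenter approach, run as root on the node hosting the pod, is shown below; the container ID and pod name are placeholders, and a Docker runtime is assumed:

```bash
# On the node: find the PID of a process in the pod (Docker runtime assumed)
PID=$(docker inspect --format '{{ .State.Pid }}' <container-id>)

# Enter the pod's network namespace and list the NAT rules istio-init installed
nsenter -t "$PID" -n iptables -t nat -S

# Alternative without node access: run iptables from inside the sidecar
# (this may require the proxy container to run with sufficient privileges)
kubectl exec <demo-red-pod-name> -c istio-proxy -- iptables -t nat -S
```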
The output above clearly shows that all the incoming traffic to port 80, which is the port our demo-red application is listening on, is now REDIRECTED to port 15001, which is the port the istio-proxy, an Envoy proxy, is listening on. The same holds true for the outgoing traffic.
This brings us to the end of this post. I hope it helped to demystify how Istio manages to inject the sidecar proxies into an existing deployment and how Istio routes the traffic to the proxy.