
Distributed Tracing FAQ

How does distributed tracing work with Istio?

Istio integrates with distributed tracing systems in two different ways: Envoy-based and Mixer-based tracing integrations. For both tracing integration approaches, applications are responsible for forwarding tracing headers for subsequent outgoing requests.

You can find additional information in the Istio Distributed Tracing (Jaeger, LightStep, Zipkin) Tasks and in the Envoy tracing docs.

What is required for distributed tracing with Istio?

Istio enables reporting of trace spans for workload-to-workload communications within a mesh. However, in order for various trace spans to be stitched together for a complete view of the traffic flow, applications must propagate the trace context between incoming and outgoing requests.

In particular, Istio relies on applications to propagate the B3 trace headers, as well as the Envoy-generated request ID. These headers include:

  • x-request-id
  • x-b3-traceid
  • x-b3-spanid
  • x-b3-parentspanid
  • x-b3-sampled
  • x-b3-flags
  • b3

If you are using LightStep, you will also need to forward the following header:

  • x-ot-span-context

Header propagation may be accomplished through client libraries, such as Zipkin or Jaeger. It may also be accomplished manually, as documented in the Distributed Tracing Task.
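In a language without a suitable client library, manual propagation amounts to copying the headers listed above from each incoming request onto any outgoing requests made while handling it. The following is a minimal sketch in Python, assuming a dict-like view of HTTP headers; the function name is ours, and only the header names come from the list above:

```python
# Trace headers that must survive from an inbound request to its
# outbound requests (names taken from the list above).
TRACE_HEADERS = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
    "b3",
    "x-ot-span-context",  # only needed for LightStep
]

def propagation_headers(incoming_headers):
    """Extract the tracing headers from an incoming request so they can
    be attached to any outgoing requests made while handling it."""
    lowered = {k.lower(): v for k, v in incoming_headers.items()}
    return {name: lowered[name] for name in TRACE_HEADERS if name in lowered}
```

A handler would then pass the result along with each outbound call, for example `requests.get(url, headers=propagation_headers(request.headers))`.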

How does Envoy-based tracing work?

For Envoy-based tracing integrations, Envoy (the sidecar proxy) sends tracing information directly to tracing backends on behalf of the applications being proxied.

Envoy:

  • generates request IDs and trace headers (e.g. X-B3-TraceId) for requests as they flow through the proxy
  • generates trace spans for each request based on request and response metadata (e.g. response time)
  • sends the generated trace spans to the tracing backends
  • forwards the trace headers to the proxied application

Istio supports the Envoy-based integrations of LightStep and Zipkin, as well as all Zipkin API-compatible backends, including Jaeger.

How does Mixer-based tracing work?

For Mixer-based tracing integrations, Mixer (addressed through the istio-telemetry service) provides the integration with tracing backends. The Mixer integration gives operators additional control over distributed tracing, including fine-grained selection of the data included in trace spans. It also provides the ability to send traces to backends not supported directly by Envoy.

For Mixer-based integrations, Envoy:

  • generates request IDs and trace headers (e.g. X-B3-TraceId) for requests as they flow through the proxy
  • calls Mixer for general asynchronous telemetry reporting
  • forwards the trace headers to the proxied application

Mixer:

  • generates trace spans for each request based on operator-supplied configuration
  • sends the generated trace spans to the operator-designated tracing backends

The Stackdriver tracing integration with Istio is one example of a tracing integration via Mixer.

What is the minimal Istio configuration required for distributed tracing?

The Istio minimal profile with tracing enabled is all that is required for Istio to integrate with Zipkin-compatible backends.

What generates the initial Zipkin (B3) HTTP headers?

The Istio sidecar proxy (Envoy) generates the initial headers, if they are not provided by the request.
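Conceptually, this generate-if-missing behavior looks like the sketch below. The function name is ours and Envoy's actual implementation differs in detail; the hex ID lengths follow the B3 convention of 64-bit span IDs and 64- or 128-bit trace IDs:

```python
import secrets

def ensure_trace_context(headers):
    """If a request arrives without B3 headers, mint a new trace
    context; otherwise reuse the incoming one (illustrative only)."""
    headers = {k.lower(): v for k, v in headers.items()}
    if "x-b3-traceid" not in headers:
        headers["x-b3-traceid"] = secrets.token_hex(8)  # new 64-bit trace ID
    if "x-b3-spanid" not in headers:
        headers["x-b3-spanid"] = secrets.token_hex(8)   # new root span ID
    return headers
```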

Why can't Istio propagate headers instead of the application?

Although an Istio sidecar processes both inbound and outbound requests for an associated application instance, it has no implicit way of correlating the outbound requests to the inbound request that caused them. The only way this correlation can be achieved is if the application propagates relevant information (e.g. headers) from the inbound request to the outbound requests. Header propagation may be accomplished through client libraries or manually. Further discussion is provided in What is required for distributed tracing with Istio?.

Why are my requests not being traced?

Since Istio 1.0.3, the sampling rate for tracing has been reduced to 1% in the default configuration profile. This means that only 1 out of 100 trace instances captured by Istio will be reported to the tracing backend. The sampling rate in the demo profile is still set to 100%. See this section for more information on how to set the sampling rate.

If you still do not see any trace data, please confirm that your ports conform to the Istio port naming conventions and that the appropriate container port is exposed (via pod spec, for example) to enable traffic capture by the sidecar proxy (Envoy).

If you only see trace data associated with the egress proxy, but not the ingress proxy, it may still be related to the Istio port naming conventions. Starting with Istio 1.3, the protocol for outbound traffic is automatically detected.
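The convention in question is that a Service port name must be the protocol, optionally followed by a hyphenated suffix (e.g. http-web or grpc). As a quick sanity check, this can be sketched as a small validator; the protocol list below is a representative subset, not the full set Istio recognizes:

```python
# Representative subset of protocols recognized in Istio port names.
KNOWN_PROTOCOLS = {"http", "http2", "https", "grpc", "tcp", "tls", "mongo", "mysql", "redis"}

def follows_port_naming_convention(port_name):
    """True if the name is of the form <protocol> or <protocol>-<suffix>."""
    protocol = port_name.split("-", 1)[0]
    return protocol in KNOWN_PROTOCOLS
```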

How can I control the volume of traces?

Istio, via Envoy, currently supports a percentage-based sampling strategy for trace generation. Please see this section for more information on how to set this sampling rate.
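The effect of a percentage-based strategy can be modeled in a few lines. The decision here is a simple random draw; Envoy's actual implementation differs in detail, so treat this purely as a sketch of the behavior:

```python
import random

def should_sample(sampling_rate_percent, rng=random):
    """Return True for roughly `sampling_rate_percent`% of requests."""
    return rng.uniform(0, 100) < sampling_rate_percent

# With the default profile's 1% rate, about 1 request in 100 is traced;
# the demo profile's 100% rate traces every request.
```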

How do I disable tracing?

If you already have installed Istio with tracing enabled, you can disable it as follows:

# Fill <istio namespace> with the namespace of your Istio mesh, e.g. istio-system.
$ TRACING_POD=$(kubectl get po -n <istio namespace> | grep istio-tracing | awk '{print $1}')
$ kubectl delete pod $TRACING_POD -n <istio namespace>
$ kubectl delete services tracing zipkin -n <istio namespace>
# Remove the reference to the zipkin url from the mixer deployment:
$ kubectl -n <istio namespace> edit deployment istio-telemetry
# Now, manually remove instances of trace_zipkin_url from the file and save it.

Then follow the steps of the cleanup section of the Distributed Tracing task.

If you don’t want tracing functionality at all, then disable tracing when installing Istio.

Can Istio send tracing information to an external Zipkin-compatible backend?

To do so, you must use the fully qualified domain name of the Zipkin-compatible instance. For example: zipkin.mynamespace.svc.cluster.local.
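For reference, an in-cluster fully qualified domain name follows the standard Kubernetes pattern of service, namespace, and cluster domain. A trivial sketch, with placeholder service and namespace names:

```python
def cluster_fqdn(service, namespace, cluster_domain="cluster.local"):
    """Build the in-cluster FQDN for a Service (standard Kubernetes form)."""
    return f"{service}.{namespace}.svc.{cluster_domain}"
```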

Does Istio support request tracing for vert.x event bus messages?

Istio does not currently provide support for pub/sub and event bus protocols. Any use of those technologies is best-effort and subject to breakage.

What role does Mixer play in the Istio tracing story?

By default, Mixer participates in tracing by generating its own spans for requests that are already selected for tracing by Envoy proxies. This enables operators to observe the participation of the mixer-based policy enforcement mechanisms within the mesh. If the istio-policy configuration is disabled mesh-wide, Mixer does not participate in tracing in this way.

Mixer, operating as the istio-telemetry service, can also be used to generate trace spans for data plane traffic. Mixer’s Stackdriver adapter is an example of an adapter that supports this capability.

For Mixer-generated traces, Istio still relies on Envoy to generate trace context and to forward it to the applications that must propagate the context. Instead of Envoy itself sending trace information directly to a tracing backend, Mixer distills client and server spans from the regular Envoy reporting for each request based on operator-supplied configuration. In this way, operators can precisely control when and how trace data is generated and perhaps remove certain services entirely from a trace or provide more detailed information for certain namespaces.

Why do I see `istio-mixer` spans in some of my distributed traces?

Mixer generates application-level traces for requests that reach Mixer with tracing headers. Mixer generates spans, labeled istio-mixer, for any critical work that it does, including dispatching to individual adapters.

Envoy caches calls to Mixer on the data path. As a result, calls out to Mixer made via the istio-policy service only happen for certain requests, for example after a cache entry expires or when request characteristics differ. For this reason, you only see Mixer participate in some of your traces.

To turn off the application-level trace spans for Mixer itself, you must edit the deployment configuration for istio-policy and remove the --trace_zipkin_url command-line parameter.