Collecting Metrics for TCP Services

This task shows how to configure Istio to automatically gather telemetry for TCP services in a mesh. At the end of this task, you can query default TCP metrics for your mesh.

The Bookinfo sample application is used as the example throughout this task.

Before you begin

  • Install Istio in your cluster and deploy an application.

  • This task assumes that the Bookinfo sample will be deployed in the default namespace. If you use a different namespace, update the example configuration and commands.

Collecting new telemetry data

  1. Set up Bookinfo to use MongoDB.

    1. Install v2 of the ratings service.

      If you are using a cluster with automatic sidecar injection enabled, deploy the services using kubectl:

      $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@
      

      If you are using manual sidecar injection, run the following command instead:

      $ kubectl apply -f <(istioctl kube-inject -f @samples/bookinfo/platform/kube/bookinfo-ratings-v2.yaml@)
      deployment "ratings-v2" configured
      
    2. Install the mongodb service:

      If you are using a cluster with automatic sidecar injection enabled, deploy the services using kubectl:

      $ kubectl apply -f @samples/bookinfo/platform/kube/bookinfo-db.yaml@
      

      If you are using manual sidecar injection, run the following command instead:

      $ kubectl apply -f <(istioctl kube-inject -f @samples/bookinfo/platform/kube/bookinfo-db.yaml@)
      service "mongodb" configured
      deployment "mongodb-v1" configured
      
    3. The Bookinfo sample deploys multiple versions of each microservice, so create destination rules that define the service subsets corresponding to each version, along with the load balancing policy for each subset.

      $ kubectl apply -f @samples/bookinfo/networking/destination-rule-all.yaml@
      

      If you enabled mutual TLS, run the following command instead:

      $ kubectl apply -f @samples/bookinfo/networking/destination-rule-all-mtls.yaml@
      

      To display the destination rules, run the following command:

      $ kubectl get destinationrules -o yaml
      

      Wait a few seconds for the destination rules to propagate before applying virtual services that refer to these subsets, since the virtual services depend on the subsets defined in the rules.
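
      Each rule in the applied file pairs a service host with one subset per deployed version. As a rough sketch (the field values here follow the Bookinfo samples; the actual file also covers the other Bookinfo services and may define additional subsets), the ratings rule contains entries similar to:

      host: ratings
      subsets:
      - name: v1
        labels:
          version: v1
      - name: v2
        labels:
          version: v2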

    4. Create ratings and reviews virtual services:

      $ kubectl apply -f @samples/bookinfo/networking/virtual-service-ratings-db.yaml@
      Created config virtual-service/default/reviews at revision 3003
      Created config virtual-service/default/ratings at revision 3004
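
      If you want to see what these virtual services do, you can inspect the ratings route. The applied file sends all ratings traffic to the v2 subset (the MongoDB-backed version) and routes reviews to a version that calls the ratings service, so the output should contain a route destination with host: ratings and subset: v2:

      $ kubectl get virtualservice ratings -o yaml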
      
  2. Send traffic to the sample application.

    For the Bookinfo sample, visit http://$GATEWAY_URL/productpage in your web browser or use the following command:

    $ curl http://$GATEWAY_URL/productpage
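
    A single request may open only a short-lived TCP connection to MongoDB, so it can help to generate sustained traffic. For example, assuming GATEWAY_URL is set as in the Bookinfo task, the following loop sends one request per second:

    $ for i in $(seq 1 100); do curl -s -o /dev/null "http://$GATEWAY_URL/productpage"; sleep 1; done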
    
  3. Verify that the TCP metric values are being generated and collected.

    In a Kubernetes environment, set up port forwarding for Prometheus using the following command:

    $ kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
    

    View the values for the TCP metrics in the Prometheus browser window. Select Graph, enter either the istio_tcp_connections_opened_total or the istio_tcp_connections_closed_total metric, and select Execute. The table displayed in the Console tab includes entries similar to:

    istio_tcp_connections_opened_total{
    destination_version="v1",
    instance="172.17.0.18:42422",
    job="istio-mesh",
    canonical_service_name="ratings-v2",
    canonical_service_revision="v2"}
    
    istio_tcp_connections_closed_total{
    destination_version="v1",
    instance="172.17.0.18:42422",
    job="istio-mesh",
    canonical_service_name="ratings-v2",
    canonical_service_revision="v2"}
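
    If you prefer to query from the command line, the same metrics are available through the Prometheus HTTP API over the port-forward established above; for example:

    $ curl -s 'http://localhost:9090/api/v1/query?query=istio_tcp_connections_opened_total'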
    

Understanding TCP telemetry collection

In this task, you used Istio configuration to automatically generate and report metrics for all traffic to a TCP service within the mesh. TCP metrics for all active connections are recorded every 15s by default; this interval is configurable via [tcpReportingDuration](/docs/reference/config/proxy_extensions/stats/#PluginConfig). Metrics for a connection are also recorded once the connection closes.
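
Because the sidecar proxies generate these metrics themselves, you can also read them directly from a workload's istio-proxy container before Prometheus scrapes them. For example, the following command (the ratings-v2 deployment name follows the Bookinfo sample) dumps the proxy's Prometheus endpoint and filters for the TCP series:

$ kubectl exec deploy/ratings-v2 -c istio-proxy -- pilot-agent request GET stats/prometheus | grep istio_tcp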

TCP attributes

Several TCP-specific attributes enable TCP policy and control within Istio. These attributes are generated by the Envoy proxies and obtained from Istio using Envoy's node metadata. Envoy forwards node metadata to peer Envoys using ALPN-based tunneling and a prefix-based protocol. We define a new protocol, istio-peer-exchange, that is advertised and prioritized by the client and the server sidecars in the mesh. ALPN negotiation resolves the protocol to istio-peer-exchange for connections between Istio-enabled proxies, but not between an Istio-enabled proxy and any other proxy. This protocol extends TCP as follows:

  1. The TCP client, as its first sequence of bytes, sends a magic byte string and a length-prefixed payload.
  2. The TCP server, as its first sequence of bytes, sends a magic byte sequence and a length-prefixed payload. These payloads are protobuf-encoded serialized metadata.
  3. The client and server can write simultaneously and out of order. The extension filter in Envoy then does further processing downstream and upstream until either the magic byte sequence is not matched or the entire payload is read.
Figure: Attribute Generation Flow for TCP Services in an Istio Mesh.
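
If you want to confirm that peer metadata exchange is in play for a given workload, you can search the proxy's configuration for the istio-peer-exchange ALPN value; whether it appears depends on your Istio version and on mutual TLS being enabled between the proxies:

$ kubectl exec deploy/ratings-v2 -c istio-proxy -- pilot-agent request GET config_dump | grep -m 1 istio-peer-exchange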

Cleanup

  • Remove the port-forward process:

    $ killall kubectl
    
  • If you are not planning to explore any follow-on tasks, refer to the Bookinfo cleanup instructions to shut down the application.
