Prometheus
Prometheus is an open source monitoring system and time series database. You can use Prometheus with Istio to record metrics that track the health of Istio and of applications within the service mesh. You can visualize metrics using tools like Grafana and Kiali.
Installation
Option 1: Quick start
Istio provides a basic sample installation to quickly get Prometheus up and running:
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/addons/prometheus.yaml
This will deploy Prometheus into your cluster. This is intended for demonstration only, and is not tuned for performance or security.
Option 2: Customizable install
Consult the Prometheus documentation to get started deploying Prometheus into your environment. See Configuration for more information on configuring Prometheus to scrape Istio deployments.
Configuration
In an Istio mesh, each component exposes an endpoint that emits metrics. Prometheus works by scraping these endpoints and collecting the results. This is configured through the Prometheus configuration file, which controls which endpoints to query, the port and path to query, TLS settings, and more.
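As a minimal illustration of such a configuration, a single scrape job might look like the following sketch (the job name and target address are placeholders, not values from your mesh):

```yaml
scrape_configs:
- job_name: 'example-app'              # arbitrary job name (placeholder)
  metrics_path: /metrics               # path to query (the Prometheus default)
  scheme: http                         # plaintext scraping
  static_configs:
  - targets: ['app.example.svc:8080']  # placeholder endpoint address and port
```

In practice, the jobs described below use Kubernetes service discovery rather than static targets.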
To gather metrics for the entire mesh, configure Prometheus to scrape:
- The control plane (istiod deployment)
- Ingress and Egress gateways
- The Envoy sidecar
- The user applications (if they expose Prometheus metrics)
To simplify the configuration of metrics, Istio offers two modes of operation.
Option 1: Metrics merging
To simplify configuration, Istio can control scraping entirely through prometheus.io annotations. This allows Istio scraping to work out of the box with standard configurations such as those provided by the Helm stable/prometheus charts.
This option is enabled by default, but can be disabled by passing --set meshConfig.enablePrometheusMerge=false during installation. When enabled, appropriate prometheus.io annotations are added to all data plane pods to set up scraping. If these annotations already exist, they will be overwritten. With this option, the Envoy sidecar merges Istio's metrics with the application metrics. The merged metrics are scraped from :15020/stats/prometheus.
This option exposes all the metrics in plain text.
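For instance, with merging enabled, the annotations injected onto a data plane pod typically look like the following sketch (shown here as an assumption about the standard merged-metrics defaults; verify against the pods in your own installation):

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"            # opt the pod into standard annotation-based scraping
    prometheus.io/port: "15020"             # the sidecar's metrics merging port
    prometheus.io/path: /stats/prometheus   # serves merged Istio + application metrics
```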
This feature may not suit your needs in the following situations:
- You need to scrape metrics using TLS.
- Your application exposes metrics with the same names as Istio metrics. For example, your application exposes an istio_requests_total metric. This might happen if the application is itself running Envoy.
- Your Prometheus deployment is not configured to scrape based on standard prometheus.io annotations.
If required, this feature can be disabled per workload by adding a prometheus.istio.io/merge-metrics: "false" annotation on a pod.
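As a sketch, opting a single workload out of merging looks like this (the pod name is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app  # placeholder pod name
  annotations:
    prometheus.istio.io/merge-metrics: "false"  # disable metrics merging for this pod only
```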
Option 2: Customized scraping configurations
To configure an existing Prometheus instance to scrape stats generated by Istio, several jobs need to be added.
- To scrape Istiod stats, the following example job can be added to scrape its http-monitoring port:
- job_name: 'istiod'
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - istio-system
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
    action: keep
    regex: istiod;http-monitoring
- To scrape Envoy stats, including sidecar proxies and gateway proxies, the following job can be added to scrape ports whose names end with -envoy-prom:
- job_name: 'envoy-stats'
  metrics_path: /stats/prometheus
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_container_port_name]
    action: keep
    regex: '.*-envoy-prom'
- For application stats, if strict mTLS is not enabled, your existing scraping configuration should still work. Otherwise, Prometheus needs to be configured to scrape with Istio certificates, as described below.
TLS settings
The control plane, gateway, and Envoy sidecar metrics will all be scraped over plaintext. However, the application metrics will follow whatever Istio configuration has been configured for the workload. In particular, if Strict mTLS is enabled, then Prometheus will need to be configured to scrape using Istio certificates.
One way to provision Istio certificates for Prometheus is by injecting a sidecar which will rotate SDS certificates and output them to a volume that can be shared with Prometheus. However, the sidecar should not intercept requests for Prometheus because Prometheus’s model of direct endpoint access is incompatible with Istio’s sidecar proxy model.
To achieve this, configure a cert volume mount on the Prometheus server container:
containers:
- name: prometheus-server
  ...
  volumeMounts:
  - mountPath: /etc/prom-certs/
    name: istio-certs
volumes:
- emptyDir:
    medium: Memory
  name: istio-certs
Then add the following annotations to the Prometheus deployment pod template, and deploy it with sidecar injection. This configures the sidecar to write a certificate to the shared volume, but without configuring traffic redirection:
spec:
  template:
    metadata:
      annotations:
        traffic.sidecar.istio.io/includeInboundPorts: ""    # do not intercept any inbound ports
        traffic.sidecar.istio.io/includeOutboundIPRanges: "" # do not intercept any outbound traffic
        proxy.istio.io/config: |  # configure an env variable `OUTPUT_CERTS` to write certificates to the given folder
          proxyMetadata:
            OUTPUT_CERTS: /etc/istio-output-certs
        sidecar.istio.io/userVolumeMount: '[{"name": "istio-certs", "mountPath": "/etc/istio-output-certs"}]' # mount the shared volume at sidecar proxy
Finally, set the scraping job TLS context as follows:
scheme: https
tls_config:
  ca_file: /etc/prom-certs/root-cert.pem
  cert_file: /etc/prom-certs/cert-chain.pem
  key_file: /etc/prom-certs/key.pem
  insecure_skip_verify: true  # Prometheus does not support Istio security naming, thus skip verifying target pod certificate
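Putting the pieces together, a complete application scraping job using these certificates might look like the following sketch (the job name is illustrative, and the keep rule assumes your applications carry the standard prometheus.io/scrape annotation):

```yaml
- job_name: 'app-metrics-mtls'  # illustrative job name
  scheme: https
  tls_config:
    ca_file: /etc/prom-certs/root-cert.pem    # root cert written by the sidecar
    cert_file: /etc/prom-certs/cert-chain.pem # client cert chain for mTLS
    key_file: /etc/prom-certs/key.pem
    insecure_skip_verify: true  # Istio certs do not match target pod hostnames
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"  # only scrape pods that opt in via prometheus.io/scrape
```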
Best practices
For larger meshes, advanced configuration might help Prometheus scale. See Using Prometheus for production-scale monitoring for more information.
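One common production technique from that guide is aggregating raw Istio metrics with recording rules before federating them to a long-term Prometheus, which drops high-cardinality labels such as pod name. A rule along those lines might look like this sketch (the label set kept is an example, not a prescribed list):

```yaml
groups:
- name: istio.workload.istio_requests_total  # illustrative group name
  interval: 10s
  rules:
  - record: workload:istio_requests_total    # pre-aggregated series for federation
    expr: |
      sum(istio_requests_total)
      by (source_workload, source_workload_namespace,
          destination_workload, destination_workload_namespace,
          response_code)
```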