Large Scale Security Policy Performance Tests

The effect of security policies on the latency of requests


Istio offers a wide range of security policies that can be easily configured into systems of services. As the number of applied policies grows, it is important to understand how they affect the latency, memory usage, and CPU usage of the system.

This blog post covers common security policy use cases and how the number of security policies, or the number of rules within a security policy, can affect the overall latency of requests.


There is a wide range of security policies, and many more possible combinations of those policies. We will go over six of the most commonly used test cases.

The following test cases are run in an environment consisting of a Fortio client sending requests to a Fortio server, with a baseline of no Envoy sidecars deployed. The data was gathered using the Istio performance benchmarking tool.

Environment setup

In these test cases, requests either match no rules or match only the very last rule in the security policies. This ensures that the RBAC filter evaluates every policy rule and never matches a rule before it has examined all of them. Although this is not necessarily what will happen in your own system, this setup measures the worst-case performance of each test case.
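As a hypothetical sketch of this worst-case setup, an AuthorizationPolicy can list paths rules that the test request never matches, except possibly the final one, forcing the RBAC filter to evaluate every rule (the policy name, namespace, and paths below are illustrative, not the exact values used in the tests):

```yaml
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: worst-case-paths   # hypothetical name
  namespace: test          # hypothetical namespace
spec:
  action: ALLOW
  rules:
  - to:
    - operation:
        paths: ["/invalid-path-0"]  # never matched by the test request
  - to:
    - operation:
        paths: ["/invalid-path-1"]  # never matched by the test request
  - to:
    - operation:
        paths: ["/echo"]            # only the last rule can match
```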

Test cases

  1. Mutual TLS STRICT vs plaintext.

  2. A single authorization policy with a variable number of principals rules, along with a PeerAuthentication policy. The principals rule depends on the PeerAuthentication policy being applied to the system.

  3. A single authorization policy with a variable number of requestPrincipals rules, along with a RequestAuthentication policy. The requestPrincipals rule depends on the RequestAuthentication policy being applied to the system.

  4. A single authorization policy with a variable number of paths vs sourceIP rules.

  5. A variable number of authorization policies consisting of a single path or sourceIP rule.

  6. A single RequestAuthentication policy with a variable number of jwtRules.
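To illustrate test case 3, a RequestAuthentication policy can be paired with an AuthorizationPolicy that lists requestPrincipals rules. The following is a minimal sketch; the names, namespace, issuer, and JWKS URI are hypothetical placeholders, not the values used in the benchmarks:

```yaml
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-example        # hypothetical name
  namespace: test          # hypothetical namespace
spec:
  jwtRules:
  - issuer: "example-issuer"                            # hypothetical issuer
    jwksUri: "https://example.com/.well-known/jwks.json" # hypothetical URI
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: request-principals-example  # hypothetical name
  namespace: test
spec:
  action: ALLOW
  rules:
  # Each additional rule below corresponds to one more requestPrincipals
  # rule in the variable-size policies used by the tests.
  - from:
    - source:
        requestPrincipals: ["example-issuer/subject-0"]
  - from:
    - source:
        requestPrincipals: ["example-issuer/subject-1"]
```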


The y-axis of each graph is latency in milliseconds, and the x-axis is the number of concurrent connections. Each graph consists of three data points representing a small load (qps=100, conn=8), a medium load (qps=500, conn=32), and a large load (qps=1000, conn=64).

Mutual TLS STRICT vs plaintext
The difference in latency between mutual TLS mode STRICT and plaintext is very small at lower loads. As qps and conn increase, the latency of requests with mutual TLS STRICT grows. However, this additional latency at larger loads is small compared to the latency added by going from no sidecars to sidecars in the plaintext case.
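For reference, mutual TLS mode STRICT is enabled with a PeerAuthentication policy like the following sketch (the namespace name is a hypothetical placeholder):

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: test  # hypothetical namespace
spec:
  mtls:
    mode: STRICT   # only mutual TLS traffic is accepted
```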


Summary

  • In general, adding security policies does not add significant overhead to the system. The policies that add the most latency are:

    1. RequestAuthentication policies with jwtRules.

    2. Authorization policies with requestPrincipals rules.

    3. Authorization policies with principals rules.

  • At lower loads (requests with lower qps and conn), the difference in latency for most policies is minimal.

  • Envoy proxy sidecars increase latency more than most policies, even if the policies are large.

  • Even extremely large policies increase latency by roughly the same amount as adding Envoy proxy sidecars does compared to running with no sidecars.

  • Two different tests determined that a sourceIP rule is marginally slower than a path rule.

If you are interested in creating your own large scale security policies and running performance tests with them, see the performance benchmarking tool README.

If you are interested in reading more about the security policies tests, see our design doc. If you don’t already have access, you can join the Istio team drive.
