Envoy performance

Ravi Gadde

Dec 22, 2016, 7:52:51 PM
to Envoy Users
IIUC, performance is not the primary goal of Envoy, but I want to make sure I am not totally off base with the numbers I am seeing. The numbers below were taken with Envoy limited to one core (Intel Xeon) using a Docker cpuset. The CPU maxes out at around 13K reqs/sec, which is less than 100 Mbps if I am doing my math right. Can anyone confirm whether this is in line with expectations or off base?
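For anyone who wants to check that math, here is a quick back-of-envelope calculation using the figures from the hey summary below (note this counts response payload bytes only, not headers or TCP/IP overhead):

```python
# Back-of-envelope throughput check using the figures hey reports below.
# Caveat: hey's "Size/request" is response body bytes only, so the true
# wire throughput (with headers) is somewhat higher than this estimate.
rps = 13163   # Requests/sec from the hey summary
size = 607    # Size/request in bytes from the hey summary
mbps = rps * size * 8 / 1e6
print(f"{mbps:.1f} Mbit/s")  # ≈ 63.9 Mbit/s, well under 100 Mbps
```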

Proxy to a simple http server running in another container on the same node:
docker run -d -p 9090:80 --cpuset-cpus="1" -v /root/envoy/front-envoy.json:/etc/front-envoy.json:ro aecf900887a8


Client on another node connected with 40G interface:

./hey -c 1000 -n 1000000 http://20.1.1.2:9090

Summary:
  Total: 75.9687 secs
  Slowest: 1.0501 secs
  Fastest: 0.0002 secs
  Average: 0.0568 secs
  Requests/sec: 13163.3233
  Total data: 607997340 bytes
  Size/request: 607 bytes

Status code distribution:
  [200] 992788 responses
  [503] 7212 responses
Thanks,
Ravi

Matt Klein

Dec 22, 2016, 8:08:32 PM
to Ravi Gadde, Envoy Users
It's very difficult to make any statements based on the data provided. Performance is obviously highly dependent on the system, the load generator, the backend, the payload, the request pattern, the configuration, etc.

Over the past several months we have done quite a bit of performance tuning on Envoy, and on very basic raw benchmarks it is in line with nginx (within about 10-15% for HTTP/1.1 -> HTTP/1.1, and equal or faster for HTTP/2 -> HTTP/1.1).

There are certain features enabled by default in Envoy that are expensive from a performance standpoint, such as request ID generation and dynamic stat generation. I have attached some configurations that we have been using to do the nginx comparisons. The Envoy configurations disable request ID generation as well as dynamic stat generation to make it more of an apples-to-apples comparison with nginx.
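The attached files aren't reproduced here, but for anyone following along, in the v1 JSON schema those two knobs are set on the HTTP connection manager (`generate_request_id`) and on the router filter (`dynamic_stats`). A rough sketch of the relevant fragment (not the exact attached config; the listener port and route config are placeholders):

```json
{
  "listeners": [{
    "address": "tcp://0.0.0.0:9211",
    "filters": [{
      "type": "read",
      "name": "http_connection_manager",
      "config": {
        "codec_type": "auto",
        "stat_prefix": "ingress_http",
        "generate_request_id": false,
        "route_config": { "virtual_hosts": [] },
        "filters": [{
          "type": "decoder",
          "name": "router",
          "config": { "dynamic_stats": false }
        }]
      }
    }]
  }]
}
```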

For load generation we have been using h2load from nghttp2, with a command line such as:
h2load http://127.0.0.1:9211/  --h1 -c 20 -n 2000000

Thanks,
Matt

--
You received this message because you are subscribed to the Google Groups "Envoy Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to envoy-users+unsubscribe@googlegroups.com.
To post to this group, send email to envoy...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/envoy-users/4ba03aa2-0fc3-4797-a607-bbed3e4ebab2%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.



--
Matt Klein
Software Engineer
mkl...@lyft.com / 206.327.4515
envoy-under-test.json
nginx-backend.conf
nginx-under-test.json

Ravi Gadde

Dec 22, 2016, 8:35:08 PM
to Matt Klein, Envoy Users
Thanks Matt, I will try this config and see how much of a difference it makes. FWIW, Envoy performed better than nginx in this test (by about 50% :)). But I was surprised by the low overall throughput achievable with one core.

Best,
Ravi

Ravi Gadde

Dec 23, 2016, 7:21:47 PM
to Matt Klein, Envoy Users
I saw about 20% higher throughput with request ID and dynamic stats disabled.

Thanks,
Ravi

Matt Klein

Dec 24, 2016, 11:36:00 AM
to Ravi Gadde, Envoy Users
Cool. If performance is an issue for you, and you can provide detailed repro instructions for your test setup, I can do some profiling to see if there is any other low-hanging fruit. But given that your benchmark already shows Envoy to be faster than nginx, I'm doubtful there will be anything obvious.

