Extremely high message round-trip latency on a VM compared to barebone servers (factor of 40)


Vitaly Aminev

Jan 20, 2016, 10:31:46 AM
to rabbitmq-users
https://github.com/rabbitmq/rabbitmq-server/issues/564
That issue contains the detailed description. I assume people didn't care to read it and marked it as closed, but to me this is a critical issue.

I've written a proper benchmark since then, and the results are below. We are comparing a VM against a bare-metal machine.


0. Install node.js 5.x.x
1. clone https://github.com/makeomatic/ms-amqp-transport
2. npm install
3. npm run bench - for local testing
4. RABBITMQ_PORT_5672_TCP_ADDR=192.168.99.100 RABBITMQ_PORT_5672_TCP_PORT=5672 npm run bench - for custom host/port

Results for me:

```
Vitalys-iMac:ms-amqp-transport vitaly$ npm run bench

> ms-amqp-transport@ bench /Users/vitaly/projects/ms-amqp-transport
> npm run compile && node ./bench/roundtrip.js

> ms-amqp-transport@ compile /Users/vitaly/projects/ms-amqp-transport
> babel -d ./lib ./src

src/amqp.js -> lib/amqp.js
src/index.js -> lib/index.js
src/serialization.js -> lib/serialization.js

Messages sent: 7010
Mean is 0.6884782780856182ms ~2.3287553082175685%
Total time is 6.199s 0.0006884782780856182s
```


For rabbitmq running on a virtual machine with similar settings and plenty of CPU and RAM available, the results are 40 times worse:


```
Vitalys-iMac:ms-amqp-transport vitaly$ RABBITMQ_PORT_5672_TCP_ADDR=192.168.99.100 npm run bench

> ms-amqp-transport@ bench /Users/vitaly/projects/ms-amqp-transport
> npm run compile && node ./bench/roundtrip.js

> ms-amqp-transport@ compile /Users/vitaly/projects/ms-amqp-transport
> babel -d ./lib ./src

src/amqp.js -> lib/amqp.js
src/index.js -> lib/index.js
src/serialization.js -> lib/serialization.js

Messages sent: 122
Mean is 40.89818468032786ms ~3.8874356921506252%
Total time is 5.976s 0.04089818468032786s
```

I've run this bench against cloud machines as well (log in via ssh, do steps 0-3); the results are always similar to any other VM out there, even with KVM in my colocation.
So either there is something very wrong with my setup, which I have tried to change and tune in many ways, or with rabbitmq on VMs.

Please help in resolving this issue

Vitaly Aminev

Jan 20, 2016, 10:32:38 AM
to rabbitmq-users

Bench itself https://github.com/makeomatic/ms-amqp-transport/blob/master/bench/roundtrip.js

Michael Klishin

Jan 20, 2016, 10:36:58 AM
to rabbitm...@googlegroups.com, Vitaly Aminev
On 20 January 2016 at 18:31:49, Vitaly Aminev (lath...@gmail.com) wrote:
> 0. Install node.js 5.x.x
> 1. clone https://github.com/makeomatic/ms-amqp-transport
> 2. npm install
> 3. npm run bench - for local testing
> 4. RABBITMQ_PORT_5672_TCP_ADDR=192.168.99.100 RABBITMQ_PORT_5672_TCP_PORT=5672
> npm run bench - for custom host/port

Please configure all queues to use lazy mode before doing any benchmarking: it will result
in significantly less variable throughput (and, likely, latency):
http://rabbitmq.com/lazy-queues.html
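For reference, in 3.6.x a queue can be put into lazy mode either at declaration time (the `x-queue-mode` argument) or, more conveniently for benchmarking, with a policy. A minimal sketch; the policy name `lazy-all` and the catch-all pattern are arbitrary examples:

```
rabbitmqctl set_policy lazy-all "^" '{"queue-mode":"lazy"}' --apply-to queues
```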

As for why latency can vary so much, there can be a lot of reasons. Doing a protocol capture
with Wireshark/libpcap will provide you a lot more data on what's going on at the TCP level:
http://www.rabbitmq.com/amqp-wireshark.html
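For example, a capture limited to AMQP traffic can be taken on either end and opened in Wireshark afterwards; the interface name and output path below are placeholders:

```
tcpdump -i eth0 -w amqp-bench.pcap 'tcp port 5672'
```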

For example, if your benchmark opens new connections all the time, the latency of opening
a new TCP connection can make a lot of difference, both in general and with 3.6.0 in particular [1]. This is just one of the examples.

1. https://github.com/rabbitmq/rabbitmq-server/issues/528 
--
MK

Staff Software Engineer, Pivotal/RabbitMQ


Michael Klishin

Jan 20, 2016, 10:38:51 AM
to rabbitm...@googlegroups.com, Vitaly Aminev
On 20 January 2016 at 18:32:40, Vitaly Aminev (lath...@gmail.com) wrote:
> Bench itself https://github.com/makeomatic/ms-amqp-transport/blob/master/bench/roundtrip.js

I'm not familiar with the benchmarking library used, or the client (is it amqplib?)

Can you truncate the RabbitMQ log file, do a run, and then post the log, so that we can see
how many connections it opens? As I mentioned earlier, it matters a lot.

Vitaly Aminev

Jan 20, 2016, 10:45:59 AM
to rabbitmq-users, lath...@gmail.com

It opens 2 connections: 1 consumer and 1 publisher. Library used is https://github.com/dropbox/amqp-coffee


```
rabbitmq     | =INFO REPORT==== 20-Jan-2016::15:39:25 ===
rabbitmq     | accepting AMQP connection <0.719.0> (192.168.99.1:54841 -> 172.17.0.3:5672)
rabbitmq     |
rabbitmq     | =INFO REPORT==== 20-Jan-2016::15:39:25 ===
rabbitmq     | accepting AMQP connection <0.722.0> (192.168.99.1:54842 -> 172.17.0.3:5672)
rabbitmq     |
rabbitmq     | =INFO REPORT==== 20-Jan-2016::15:39:31 ===
rabbitmq     | closing AMQP connection <0.719.0> (192.168.99.1:54841 -> 172.17.0.3:5672)
rabbitmq     |
rabbitmq     | =INFO REPORT==== 20-Jan-2016::15:39:31 ===
rabbitmq     | closing AMQP connection <0.722.0> (192.168.99.1:54842 -> 172.17.0.3:5672)
```


The situation is exactly the same on rabbitmq 3.5.7; lazy queues did not exist back then, therefore I don't see how this would affect the queue.
This is the benchmarking lib: https://github.com/bestiejs/benchmark.js

TCP sniffing is obviously something I haven't done and am not going to do, because other services like redis, with its simplistic and useless pubsub, perform well, so it's not a connection issue.
Time is lost inside RabbitMQ; the question is why.

Furthermore, I've tried tracing messages with the firehose, and rabbitmq reports that it gets a message and immediately dispatches it (at least the timestamps are exactly the same in the logs). So something is eating tons of ms after the message is published to rabbitmq and before it gets into the queue.
On top of that, if I enable confirm mode for messages, latency spikes by exactly 2 times.
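For context, the firehose mentioned above is toggled per vhost with rabbitmqctl; traced copies of published and delivered messages go to the `amq.rabbitmq.trace` topic exchange, where their timestamps can be compared. A rough sketch:

```
# copies of messages are published to the amq.rabbitmq.trace topic exchange
# with routing keys publish.<exchange name> and deliver.<queue name>
rabbitmqctl trace_on
# ... run the bench, consume from a queue bound to amq.rabbitmq.trace ...
rabbitmqctl trace_off
```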

On Wednesday, January 20, 2016 at 18:38:51 UTC+3, Michael Klishin wrote:

Alvaro Videla

Jan 20, 2016, 10:56:07 AM
to rabbitm...@googlegroups.com, lath...@gmail.com
On Wed, Jan 20, 2016 at 4:45 PM, Vitaly Aminev <lath...@gmail.com> wrote:
TCP sniffing is obviously something I haven't done and not going to, because other services like redis with it's simplistic and useless pubsub perform well, so that it's not a connection issue.
Time is lost inside RabbitMQ - question is why. 

How do you know this is not a connection issue? RabbitMQ runs inside the Erlang virtual machine which has to resolve hostnames and so on for RabbitMQ. If you are running Erlang inside a VM, perhaps localhost is not resolving properly for Erlang? What are your /etc/hosts settings? For example, I have these settings:

127.0.0.1     mymachine.local
127.0.0.1     mymachine

and so on.


Vitaly Aminev

Jan 20, 2016, 11:05:33 AM
to Alvaro Videla, rabbitm...@googlegroups.com
So you are saying that rabbitmq constantly resolves DNS during runtime (non-clustered setup on a single machine)?

1. client connects to rabbitmq via ip address
2. tcp connection is established and used during the benchmark
3. client sends a message to rabbitmq and
4. rabbitmq starts resolving DNS? I hope it doesn't, or if it does, the result is cached
5. etc

Anyway, here are the hosts for the local bench and other relevant configuration.

Vitalys-iMac:cappasity-deploy vitaly$ cat /etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1             localhost 

Hosts for VM bench:

/ # cat /etc/hosts
172.17.0.3 cappasity-dev
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Settings for local/VM rabbit

  204 rabbitmq   0:00 /usr/lib/erlang/erts-7.1/bin/epmd -daemon
  236 rabbitmq   0:03 /usr/lib/erlang/erts-7.1/bin/beam.smp -W w -A 64 -K true -A128 -P 1048576 -K true -B i -- -root /usr/lib/erlang -progname erl -- -home /var/lib/rabbitmq -epmd_port 4369 -- -pa /usr/lib/rabbitmq/bin/../ebin -noshell -noinput -s rabbit boot -sname rabbit@cappasity-dev -boot start_sasl -config /etc/rabbitmq -kernel inet_default_connect_options [{nodelay,true}] -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger tty -rabbit error_logger tty -rabbit sasl_error_logger tty -rabbit enabled_plugins_file "/usr/lib/rabbitmq/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/bin/../plugins" -rabbit plugins_expand_dir "/usr/lib/rabbitmq/bin/../var/lib/rabbitmq/mnesia/rabbit@cappasity-dev-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia" -kernel inet_dist_listen_min 25672 -kernel inet_dist_listen_max 25672

rabbitmq.config

[
  {kernel, [
    {inet_default_connect_options, [{nodelay, true}]},
    {inet_default_listen_options,  [{nodelay, true}]}
  ]},
  {rabbit, [{default_user, <<"guest">>},
           {default_pass, <<"guest">>},
           {loopback_users, []},
           {cluster_partition_handling, autoheal},
           {delegate_count, 64},
           {fhc_read_buffering, false},
           {fhc_write_buffering, false},
           {heartbeat, 60},
           {queue_index_embed_msgs_below, 0},
           {queue_index_max_journal_entries, 8192},
           {log_levels, [{connection, debug},
                         {channel, debug},
                         {federation, info},
                         {mirroring, info}]},
           {vm_memory_high_watermark, 0.8},
           {frame_max, 32768},
           {hipe_compile, true},
           {tcp_listen_options, [
             {backlog,   128},
             {nodelay,   true},
             {sndbuf,    196608},
             {recbuf,    196608}
           ]}
  ]},
  {rabbitmq_management, [{rates_mode, basic}]}
].

RABBITMQ_ENABLED_PLUGINS_FILE=/usr/lib/rabbitmq/etc/rabbitmq/enabled_plugins
RABBITMQ_LOGS=-
RABBITMQ_NODENAME=rabbit@cappasity-dev
RABBITMQ_CONFIG_FILE=/etc/rabbitmq
RABBITMQ_ERLANG_COOKIE=toughcookie
RABBITMQ_VERSION=3.5.7
RABBITMQ_SASL_LOGS=-
RABBITMQ_PID_FILE=/var/lib/rabbitmq/rabbitmq.pid
RABBITMQ_SERVER_ERL_ARGS=+K true +A128 +P 1048576 -kernel inet_default_connect_options [{nodelay,true}]
RABBITMQ_MNESIA_DIR=/var/lib/rabbitmq/mnesia
RABBITMQ_DIST_PORT=25672

Michael Klishin

Jan 20, 2016, 11:08:37 AM
to rabbitm...@googlegroups.com, Vitaly Aminev
On 20 January 2016 at 18:46:02, Vitaly Aminev (lath...@gmail.com) wrote:
> It opens 2 connections: 1 consumer and 1 publisher. Library used
> is https://github.com/dropbox/amqp-coffee

> Situation is exactly the same on rabbitmq 3.5.7, Lazy queues
> were not existent back then, therefore I don't see how this would
> affect the queue.

I can tell you: queues use a fairly different implementation in the "lazy" mode
and this results in a much more even (less variation) throughput.

Ignore variability when benchmarking at your own peril.

> TCP sniffing is obviously something I haven't done and not going
> to, because other services like redis with it's simplistic and
> useless pubsub perform well, so that it's not a connection issue.
> Time is lost inside RabbitMQ - question is why.

If there is one thing that I've learnt about benchmarking various systems, in particular those
accessible over the network, it is this:
you NEVER know what actually takes time. If you think you do, you are fooling yourself.

Of course TCP settings can have a dramatic impact
on your benchmark.

> Time is lost inside RabbitMQ - question is why. 

You just played down my advice about using lazy queues to have less throughput and latency
variability and now come to this conclusion. Do you really need our help? 

> Furthermore, I've tried tracing messages with firehose and rabbitmq reports that it gets a message and
> immediately dispatches it (at least stamps are exactly the same on the logs). So something is eating tons of ms
> after the message is published to rabbitmq and before it gets into queue.

Yeah, and guessing is a really poor way of determining what that "something" is. Profiling
is a much better strategy and Wireshark is one way of getting cold hard data instead of guesses.

> On top of it - if I enabled confirm mode for messages - latency spikes by exactly 2 times

Publisher confirms add a roundtrip to every message published. Furthermore, depending on the client
library and application code, clients sometimes sit there waiting for a confirmation after
a publish, although I doubt this is the case with Node.js.
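A minimal sketch of that per-message wait, using amqplib rather than the amqp-coffee client from the benchmark, purely to illustrate where the extra round trip comes from (queue name and URI are placeholders):

```js
const amqp = require('amqplib');

async function publishAndWaitForConfirm(uri, queue, body) {
  const conn = await amqp.connect(uri);
  // A confirm channel makes the broker send basic.ack for every publish.
  const ch = await conn.createConfirmChannel();
  await ch.assertQueue(queue, { durable: false });

  // Waiting for the ack before sending the next message adds one extra
  // broker round trip per message, roughly doubling per-message latency.
  await new Promise((resolve, reject) =>
    ch.sendToQueue(queue, Buffer.from(body), {}, err => (err ? reject(err) : resolve()))
  );

  await conn.close();
}

publishAndWaitForConfirm('amqp://localhost', 'bench.confirms', 'ping').catch(console.error);
```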

If I were you, I'd cut back on strong opinions ("useless pubsub", "obvious something I haven't done and not going to") and use tools
that provide data.
We are not going to waste our time convincing you that data is better than guesses,
sorry.

Michael Klishin

Jan 20, 2016, 11:17:00 AM
to rabbitm...@googlegroups.com
> How do you know this is not a connection issue? RabbitMQ runs 
> inside the Erlang virtual machine which has to resolve hostnames
> and so on for RabbitMQ.

We've seen this happen more than once, sure. However, hostname resolution timeouts
are usually at least 5 seconds, and commonly in 10s of seconds, so I'd investigate other
possible causes first. 

Vitaly Aminev

Jan 20, 2016, 11:37:10 AM
to Michael Klishin, rabbitm...@googlegroups.com

> On Jan 20, 2016, at 7:08 PM, Michael Klishin <mkli...@pivotal.io> wrote:
>
> On 20 January 2016 at 18:46:02, Vitaly Aminev (lath...@gmail.com) wrote:
>> It opens 2 connections: 1 consumer and 1 publisher. Library used
>> is https://github.com/dropbox/amqp-coffee
>
>> Situation is exactly the same on rabbitmq 3.5.7, Lazy queues
>> were not existent back then, therefore I don't see how this would
>> affect the queue.
>
> I can tell you: queues use a fairly different implementation in the "lazy" mode
> and this results in a much more even (less variation) throughput.
>
> Ignore variability when benchmarking at your own peril.

I get that lazy queues have less variability; what I'm saying is that this is a new feature, and before asking for help I've tested many systems and different setups, such as:

local consumer, local rabbitmq, non-clustered setup
local consumer, dockerized rabbitmq (ubuntu and alpine linux) with plugins and without, non-clustered and clustered setups
consumer located on a vm where rabbitmq is running. various virtualizations: virtualbox, KVM, QEMU, cloud providers (google and amazon) with a better CPU and a worse one.

Results are always the same: during the 6 seconds the bench runs, we publish 7k messages with a mean roundtrip of ~1ms (that is actually 2 messages: 1 from publisher to consumer, and 1 from consumer back to publisher via a direct queue) on bare metal (i.e. no virtualization except the Erlang VM).

When the same bench is run on a VM configured identically to the bare-metal machine, results spike to ~40ms per roundtrip.
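To make the measured path concrete, here is an approximate sketch of one such roundtrip written against amqplib; the actual benchmark uses amqp-coffee, its own serialization and benchmark.js, so the queue names and details here are illustrative only:

```js
const amqp = require('amqplib');

// One "roundtrip" = a request published to the consumer's queue, plus the
// consumer's reply published back to an exclusive reply queue.
async function roundtrip(uri) {
  const conn = await amqp.connect(uri);
  const ch = await conn.createChannel();

  const requests = await ch.assertQueue('bench.requests', { durable: false });
  const replies = await ch.assertQueue('', { exclusive: true });

  // "Consumer" side: echo each request to the queue named in replyTo.
  await ch.consume(requests.queue, msg => {
    ch.sendToQueue(msg.properties.replyTo, msg.content, {
      correlationId: msg.properties.correlationId,
    });
    ch.ack(msg);
  });

  // "Publisher" side: wait for the echoed reply and time the whole exchange.
  let resolveReply;
  const replyReceived = new Promise(resolve => { resolveReply = resolve; });
  await ch.consume(replies.queue, () => resolveReply(), { noAck: true });

  const started = process.hrtime();
  ch.sendToQueue(requests.queue, Buffer.from('ping'), {
    replyTo: replies.queue,
    correlationId: '1',
  });
  await replyReceived;

  const [s, ns] = process.hrtime(started);
  console.log(`roundtrip took ${(s * 1e3 + ns / 1e6).toFixed(3)} ms`);

  await conn.close();
}

roundtrip(process.env.AMQP_URI || 'amqp://localhost').catch(console.error);
```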

>
>> TCP sniffing is obviously something I haven't done and not going
>> to, because other services like redis with it's simplistic and
>> useless pubsub perform well, so that it's not a connection issue.
>> Time is lost inside RabbitMQ - question is why.
>
> If there is one thing that I've learnt about benchmarking various systems, in particular those
> accessible over the network, it is this:
> you NEVER know what actually takes time. If you think you do, you are fooling yourself.
>
> Of course TCP settings can have a dramatic impact
> on your benchmark.
>

Yes, TCP settings can have a dramatic impact, and I agree with that. Before coming here I tried tuning all the params I could find in the tutorials on the rabbitmq website: buffers, backlog, Nagle. If there is anything else I should try, I would be eager to do so and report the differences. I've also benchmarked the libs before pointing at rabbitmq. I don't want to do sniffing, because similar networking systems operate well and with low latency; I don't get how one thing in a VM can operate well (low latency) while another can't. If I'm wrong, I apologize and don't mean to insult anybody, but let's look at other possibilities first rather than something happening on the wire.

>> Time is lost inside RabbitMQ - question is why.
>
> You just played down my advice about using lazy queues to have less throughput and latency
> variability and now come to this conclusion. Do you really need our help?
>

It might give less variability, and I would do it if nothing else works. But as I've stated, this is a new feature, and there are plugins that still work only on 3.5.x, so if possible I would like to avoid it.

>> Furthermore, I've tried tracing messages with firehose and rabbitmq reports that it gets a message and
>> immediately dispatches it (at least stamps are exactly the same on the logs). So something is eating tons of ms
>> after the message is published to rabbitmq and before it gets into queue.
>
> Yeah, and guessing is a really poor way of determining what that "something" is. Profiling
> is a much better strategy and Wireshark is one way of getting cold hard data instead of guesses.
>

so there is absolutely no other way we can get this data?

>> On top of it - if I enabled confirm mode for messages - latency spikes by exactly 2 times
>
> Publisher confirms add a roundtrip to every message published. Furthermore, depending on the client
> library and application code, clients sometimes sit there waiting for a confirmation after
> a publish, although I doubt this is the case with Node.js.
>

In the case of confirmations, I know the client waits for basic.ack from the broker, and this is basically the source of the doubled latency. This is normal, just weird to see it at such a magnitude. (Channels used for publishing aren't transactional.)

> If I were you, I'd cut back on strong opinions ("useless pubsub", "obvious something I haven't done and not going to") and use tools
> that provide data.

"useless pubsub" - redis pubsub has it’s usecases, just not as strong in it’s functionality as AMQP in general. The word is too strong, but I just want to emphasize that it doesn’t do much inside compared to rabbit and the latency on similar roundtrip is close to zero

> We are not going to waste our time convincing you that data is better than guesses,
> sorry.

I don't want you to; I'm only asking for something obvious that I can tune to see if it changes things. Maybe some settings were overlooked, maybe something else. If we can't find anything, then the last resort would be to look at tcp dumps, but I really want to try something else first.

Michael Klishin

Jan 20, 2016, 11:54:13 AM
to Vitaly Aminev, rabbitm...@googlegroups.com
On 20 January 2016 at 19:37:07, Vitaly Aminev (lath...@gmail.com) wrote:
> so there is absolutely no other way we can get this data?

No way other than what?

We have recommended three things so far:

 * Using lazy queues — you want to stick to 3.5.7, so that doesn't apply.
 * Finding out the number of connections the benchmark opens — we now know it is 2 and
   therefore the time spent opening a new connection is spent just twice, so it's very likely irrelevant.
 * Doing a libpcap capture and inspecting it — you doubt it would reveal anything.

Debugging any issue is a basic decision tree: you form a hypothesis, try to prove or disprove it,
form a new one, narrowing things down step by step.

We could recommend enabling HiPE [1] but that's just a way to reduce latency in general,
not track down why it can be different between 2 or N environments.

We could recommend using strace/dtrace and comparing the results but it's time
consuming to interpret those.

We could recommend various ways to profile RabbitMQ using Erlang profiling tools but it's fairly involved
and there is no real evidence that the time is spent there (however likely you think that is).

Just like a doctor cannot help someone who doesn't want to be treated, we cannot
help someone who shoots down and ignores our advice.

> I don’t want you to, I only ask for something obvious that I can
> tune and see if it changes things

There is nothing obvious to me. Virtualisation in general doesn't result in this kind of
latency drop, so the difference must be something else. Which brings us to square one:
the debugging decision tree and forming hypotheses. 

If you are still determined to get to the root of the problem, either try things that folks
on this list recommend, or try something else that you haven't tried before, or seek help elsewhere.
In any case, playing down advice given by the very people you asked for help is not a very
bright idea.

1. https://groups.google.com/forum/#!searchin/rabbitmq-users/HiPE/rabbitmq-users/8lcZ9ArNHNw/521sHscjDwAJ

Vitaly Aminev

Jan 20, 2016, 6:42:20 PM
to rabbitmq-users, lath...@gmail.com
Here is the gzipped pcapng.

A typical round-trip sequence takes 38 ms: from packet 91 to 98.

Screenshot: https://cloud.githubusercontent.com/assets/1713617/12466851/6a6aa21c-bfe8-11e5-945d-34f3e376e14b.png


On Wednesday, January 20, 2016 at 19:54:13 UTC+3, Michael Klishin wrote:

Michael Klishin

Jan 20, 2016, 7:17:34 PM
to rabbitm...@googlegroups.com, Vitaly Aminev
On 21 January 2016 at 02:42:23, Vitaly Aminev (lath...@gmail.com) wrote:
> Typical round-trip sequence takes 38 ms: from packet 91 to 98

One fact that wasn't explicitly stated previously: benchmarking clients and RabbitMQ run on
different hosts. I'll assume it's no different in the other environment being compared.

In benchmarking results, there are no "typical" values; you can only derive sound conclusions
from aggregate values (percentiles, standard deviation, etc). But producing this data from
a capture will take a while, so let's take a shortcut and look at this one sample which
you find "typical".

Out of the ~38 ms, 36 ms pass between a basic.publish frame going from 99.1 to 99.100 and
the TCP ACK received. With a few other basic.publish frames, the delay is even higher.

This doesn't convince me that the previously mentioned hypothesis that

> Time is lost inside RabbitMQ - question is why.

necessarily has merit.

Reasoning about TCP stack throughput isn't easy: there are many algorithms involved.
Next thing I'd do is to dump all net.* settings in both environments and see what differences
there can be. On Linux that can be done with

sudo sysctl -a | grep ^net

Doing another capture using a similar benchmarking tool based on TCP (ping, for example, isn't), e.g. PerfTest [1],
can demonstrate whether the issue is client-specific or not. That'd be another data
point to work with.

1. http://www.rabbitmq.com/java-tools.html
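For what it's worth, a single-producer/single-consumer PerfTest run against the VM could look roughly like the line below; the class name and flags are taken from the standalone PerfTest distribution's documentation and are worth double-checking against the version you download (1 producer, 1 consumer, 100-byte messages, 30-second run):

```
./runjava com.rabbitmq.perf.PerfTest --uri amqp://192.168.99.100 -x 1 -y 1 -s 100 -z 30
```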

Vitaly Aminev

Jan 21, 2016, 9:06:09 AM
to rabbitmq-users, lath...@gmail.com
Included is a dump with the bench performed on VirtualBox (provisioned by vagrant, conf file attached, OS: ubuntu 14.04), with rabbitmq 3.6.0 installed and node.js 5.5.0 for running the bench. This was performed on the loopback interface; the dump was captured with `tshark -i lo -f "tcp port 5672" -w /vagrant_data/vm-dump.pcap`.

TCP settings dump:

sudo sysctl -a | grep ^net


net.bridge.bridge-nf-call-arptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-filter-pppoe-tagged = 0

net.bridge.bridge-nf-filter-vlan-tagged = 0

net.bridge.bridge-nf-pass-vlan-input-dev = 0

net.core.bpf_jit_enable = 0

net.core.busy_poll = 0

net.core.busy_read = 0

net.core.default_qdisc = pfifo_fast

net.core.dev_weight = 64

net.core.flow_limit_cpu_bitmap = 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000

net.core.flow_limit_table_len = 4096

net.core.message_burst = 10

net.core.message_cost = 5

net.core.netdev_budget = 300

net.core.netdev_max_backlog = 1000

net.core.netdev_tstamp_prequeue = 1

net.core.optmem_max = 20480

net.core.rmem_default = 212992

net.core.rmem_max = 212992

net.core.rps_sock_flow_entries = 0

net.core.somaxconn = 128

net.core.warnings = 1

net.core.wmem_default = 212992

net.core.wmem_max = 212992

net.core.xfrm_acq_expires = 30

net.core.xfrm_aevent_etime = 10

net.core.xfrm_aevent_rseqth = 2

net.core.xfrm_larval_drop = 1

net.ipv4.cipso_cache_bucket_size = 10

net.ipv4.cipso_cache_enable = 1

net.ipv4.cipso_rbm_optfmt = 0

net.ipv4.cipso_rbm_strictvalid = 1

net.ipv4.conf.all.accept_local = 0

net.ipv4.conf.all.accept_redirects = 0

net.ipv4.conf.all.accept_source_route = 0

net.ipv4.conf.all.arp_accept = 0

net.ipv4.conf.all.arp_announce = 0

net.ipv4.conf.all.arp_filter = 0

net.ipv4.conf.all.arp_ignore = 0

net.ipv4.conf.all.arp_notify = 0

net.ipv4.conf.all.bootp_relay = 0

net.ipv4.conf.all.disable_policy = 0

net.ipv4.conf.all.disable_xfrm = 0

net.ipv4.conf.all.force_igmp_version = 0

net.ipv4.conf.all.forwarding = 1

net.ipv4.conf.all.igmpv2_unsolicited_report_interval = 10000

net.ipv4.conf.all.igmpv3_unsolicited_report_interval = 1000

net.ipv4.conf.all.log_martians = 0

net.ipv4.conf.all.mc_forwarding = 0

net.ipv4.conf.all.medium_id = 0

net.ipv4.conf.all.promote_secondaries = 0

net.ipv4.conf.all.proxy_arp = 0

net.ipv4.conf.all.proxy_arp_pvlan = 0

net.ipv4.conf.all.route_localnet = 0

net.ipv4.conf.all.rp_filter = 1

net.ipv4.conf.all.secure_redirects = 1

net.ipv4.conf.all.send_redirects = 1

net.ipv4.conf.all.shared_media = 1

net.ipv4.conf.all.src_valid_mark = 0

net.ipv4.conf.all.tag = 0

net.ipv4.conf.default.accept_local = 0

net.ipv4.conf.default.accept_redirects = 1

net.ipv4.conf.default.accept_source_route = 1

net.ipv4.conf.default.arp_accept = 0

net.ipv4.conf.default.arp_announce = 0

net.ipv4.conf.default.arp_filter = 0

net.ipv4.conf.default.arp_ignore = 0

net.ipv4.conf.default.arp_notify = 0

net.ipv4.conf.default.bootp_relay = 0

net.ipv4.conf.default.disable_policy = 0

net.ipv4.conf.default.disable_xfrm = 0

net.ipv4.conf.default.force_igmp_version = 0

net.ipv4.conf.default.forwarding = 1

net.ipv4.conf.default.igmpv2_unsolicited_report_interval = 10000

net.ipv4.conf.default.igmpv3_unsolicited_report_interval = 1000

net.ipv4.conf.default.log_martians = 0

net.ipv4.conf.default.mc_forwarding = 0

net.ipv4.conf.default.medium_id = 0

net.ipv4.conf.default.promote_secondaries = 0

net.ipv4.conf.default.proxy_arp = 0

net.ipv4.conf.default.proxy_arp_pvlan = 0

net.ipv4.conf.default.route_localnet = 0

net.ipv4.conf.default.rp_filter = 1

net.ipv4.conf.default.secure_redirects = 1

net.ipv4.conf.default.send_redirects = 1

net.ipv4.conf.default.shared_media = 1

net.ipv4.conf.default.src_valid_mark = 0

net.ipv4.conf.default.tag = 0

net.ipv4.conf.docker0.accept_local = 0

net.ipv4.conf.docker0.accept_redirects = 1

net.ipv4.conf.docker0.accept_source_route = 1

net.ipv4.conf.docker0.arp_accept = 0

net.ipv4.conf.docker0.arp_announce = 0

net.ipv4.conf.docker0.arp_filter = 0

net.ipv4.conf.docker0.arp_ignore = 0

net.ipv4.conf.docker0.arp_notify = 0

net.ipv4.conf.docker0.bootp_relay = 0

net.ipv4.conf.docker0.disable_policy = 0

net.ipv4.conf.docker0.disable_xfrm = 0

net.ipv4.conf.docker0.force_igmp_version = 0

net.ipv4.conf.docker0.forwarding = 1

net.ipv4.conf.docker0.igmpv2_unsolicited_report_interval = 10000

net.ipv4.conf.docker0.igmpv3_unsolicited_report_interval = 1000

net.ipv4.conf.docker0.log_martians = 0

net.ipv4.conf.docker0.mc_forwarding = 0

net.ipv4.conf.docker0.medium_id = 0

net.ipv4.conf.docker0.promote_secondaries = 0

net.ipv4.conf.docker0.proxy_arp = 0

net.ipv4.conf.docker0.proxy_arp_pvlan = 0

net.ipv4.conf.docker0.route_localnet = 0

net.ipv4.conf.docker0.rp_filter = 1

net.ipv4.conf.docker0.secure_redirects = 1

net.ipv4.conf.docker0.send_redirects = 1

net.ipv4.conf.docker0.shared_media = 1

net.ipv4.conf.docker0.src_valid_mark = 0

net.ipv4.conf.docker0.tag = 0

net.ipv4.conf.eth0.accept_local = 0

net.ipv4.conf.eth0.accept_redirects = 1

net.ipv4.conf.eth0.accept_source_route = 1

net.ipv4.conf.eth0.arp_accept = 0

net.ipv4.conf.eth0.arp_announce = 0

net.ipv4.conf.eth0.arp_filter = 0

net.ipv4.conf.eth0.arp_ignore = 0

net.ipv4.conf.eth0.arp_notify = 0

net.ipv4.conf.eth0.bootp_relay = 0

net.ipv4.conf.eth0.disable_policy = 0

net.ipv4.conf.eth0.disable_xfrm = 0

net.ipv4.conf.eth0.force_igmp_version = 0

net.ipv4.conf.eth0.forwarding = 1

net.ipv4.conf.eth0.igmpv2_unsolicited_report_interval = 10000

net.ipv4.conf.eth0.igmpv3_unsolicited_report_interval = 1000

net.ipv4.conf.eth0.log_martians = 0

net.ipv4.conf.eth0.mc_forwarding = 0

net.ipv4.conf.eth0.medium_id = 0

net.ipv4.conf.eth0.promote_secondaries = 0

net.ipv4.conf.eth0.proxy_arp = 0

net.ipv4.conf.eth0.proxy_arp_pvlan = 0

net.ipv4.conf.eth0.route_localnet = 0

net.ipv4.conf.eth0.rp_filter = 1

net.ipv4.conf.eth0.secure_redirects = 1

net.ipv4.conf.eth0.send_redirects = 1

net.ipv4.conf.eth0.shared_media = 1

net.ipv4.conf.eth0.src_valid_mark = 0

net.ipv4.conf.eth0.tag = 0

net.ipv4.conf.eth1.accept_local = 0

net.ipv4.conf.eth1.accept_redirects = 1

net.ipv4.conf.eth1.accept_source_route = 1

net.ipv4.conf.eth1.arp_accept = 0

net.ipv4.conf.eth1.arp_announce = 0

net.ipv4.conf.eth1.arp_filter = 0

net.ipv4.conf.eth1.arp_ignore = 0

net.ipv4.conf.eth1.arp_notify = 0

net.ipv4.conf.eth1.bootp_relay = 0

net.ipv4.conf.eth1.disable_policy = 0

net.ipv4.conf.eth1.disable_xfrm = 0

net.ipv4.conf.eth1.force_igmp_version = 0

net.ipv4.conf.eth1.forwarding = 1

net.ipv4.conf.eth1.igmpv2_unsolicited_report_interval = 10000

net.ipv4.conf.eth1.igmpv3_unsolicited_report_interval = 1000

net.ipv4.conf.eth1.log_martians = 0

net.ipv4.conf.eth1.mc_forwarding = 0

net.ipv4.conf.eth1.medium_id = 0

net.ipv4.conf.eth1.promote_secondaries = 0

net.ipv4.conf.eth1.proxy_arp = 0

net.ipv4.conf.eth1.proxy_arp_pvlan = 0

net.ipv4.conf.eth1.route_localnet = 0

net.ipv4.conf.eth1.rp_filter = 1

net.ipv4.conf.eth1.secure_redirects = 1

net.ipv4.conf.eth1.send_redirects = 1

net.ipv4.conf.eth1.shared_media = 1

net.ipv4.conf.eth1.src_valid_mark = 0

net.ipv4.conf.eth1.tag = 0

net.ipv4.conf.lo.accept_local = 0

net.ipv4.conf.lo.accept_redirects = 1

net.ipv4.conf.lo.accept_source_route = 1

net.ipv4.conf.lo.arp_accept = 0

net.ipv4.conf.lo.arp_announce = 0

net.ipv4.conf.lo.arp_filter = 0

net.ipv4.conf.lo.arp_ignore = 0

net.ipv4.conf.lo.arp_notify = 0

net.ipv4.conf.lo.bootp_relay = 0

net.ipv4.conf.lo.disable_policy = 1

net.ipv4.conf.lo.disable_xfrm = 1

net.ipv4.conf.lo.force_igmp_version = 0

net.ipv4.conf.lo.forwarding = 1

net.ipv4.conf.lo.igmpv2_unsolicited_report_interval = 10000

net.ipv4.conf.lo.igmpv3_unsolicited_report_interval = 1000

net.ipv4.conf.lo.log_martians = 0

net.ipv4.conf.lo.mc_forwarding = 0

net.ipv4.conf.lo.medium_id = 0

net.ipv4.conf.lo.promote_secondaries = 0

net.ipv4.conf.lo.proxy_arp = 0

net.ipv4.conf.lo.proxy_arp_pvlan = 0

net.ipv4.conf.lo.route_localnet = 0

net.ipv4.conf.lo.rp_filter = 1

net.ipv4.conf.lo.secure_redirects = 1

net.ipv4.conf.lo.send_redirects = 1

net.ipv4.conf.lo.shared_media = 1

net.ipv4.conf.lo.src_valid_mark = 0

net.ipv4.conf.lo.tag = 0

net.ipv4.icmp_echo_ignore_all = 0

net.ipv4.icmp_echo_ignore_broadcasts = 1

net.ipv4.icmp_errors_use_inbound_ifaddr = 0

net.ipv4.icmp_ignore_bogus_error_responses = 1

net.ipv4.icmp_ratelimit = 1000

net.ipv4.icmp_ratemask = 6168

net.ipv4.igmp_max_memberships = 20

net.ipv4.igmp_max_msf = 10

net.ipv4.inet_peer_maxttl = 600

net.ipv4.inet_peer_minttl = 120

net.ipv4.inet_peer_threshold = 65664

net.ipv4.ip_default_ttl = 64

net.ipv4.ip_dynaddr = 0

net.ipv4.ip_early_demux = 1

net.ipv4.ip_forward = 1

net.ipv4.ip_local_port_range = 32768 61000

net.ipv4.ip_local_reserved_ports = 

net.ipv4.ip_no_pmtu_disc = 0

net.ipv4.ip_nonlocal_bind = 0

net.ipv4.ipfrag_high_thresh = 4194304

net.ipv4.ipfrag_low_thresh = 3145728

net.ipv4.ipfrag_max_dist = 64

net.ipv4.ipfrag_secret_interval = 600

net.ipv4.ipfrag_time = 30

net.ipv4.neigh.default.anycast_delay = 100

net.ipv4.neigh.default.app_solicit = 0

net.ipv4.neigh.default.base_reachable_time_ms = 30000

net.ipv4.neigh.default.delay_first_probe_time = 5

net.ipv4.neigh.default.gc_interval = 30

net.ipv4.neigh.default.gc_stale_time = 60

net.ipv4.neigh.default.gc_thresh1 = 128

net.ipv4.neigh.default.gc_thresh2 = 512

net.ipv4.neigh.default.gc_thresh3 = 1024

net.ipv4.neigh.default.locktime = 100

net.ipv4.neigh.default.mcast_solicit = 3

net.ipv4.neigh.default.proxy_delay = 80

net.ipv4.neigh.default.proxy_qlen = 64

net.ipv4.neigh.default.retrans_time_ms = 1000

net.ipv4.neigh.default.ucast_solicit = 3

net.ipv4.neigh.default.unres_qlen = 31

net.ipv4.neigh.default.unres_qlen_bytes = 65536

net.ipv4.neigh.docker0.anycast_delay = 100

net.ipv4.neigh.docker0.app_solicit = 0

net.ipv4.neigh.docker0.base_reachable_time_ms = 30000

net.ipv4.neigh.docker0.delay_first_probe_time = 5

net.ipv4.neigh.docker0.gc_stale_time = 60

net.ipv4.neigh.docker0.locktime = 100

net.ipv4.neigh.docker0.mcast_solicit = 3

net.ipv4.neigh.docker0.proxy_delay = 80

net.ipv4.neigh.docker0.proxy_qlen = 64

net.ipv4.neigh.docker0.retrans_time_ms = 1000

net.ipv4.neigh.docker0.ucast_solicit = 3

net.ipv4.neigh.docker0.unres_qlen = 31

net.ipv4.neigh.docker0.unres_qlen_bytes = 65536

net.ipv4.neigh.eth0.anycast_delay = 100

net.ipv4.neigh.eth0.app_solicit = 0

net.ipv4.neigh.eth0.base_reachable_time_ms = 30000

net.ipv4.neigh.eth0.delay_first_probe_time = 5

net.ipv4.neigh.eth0.gc_stale_time = 60

net.ipv4.neigh.eth0.locktime = 100

net.ipv4.neigh.eth0.mcast_solicit = 3

net.ipv4.neigh.eth0.proxy_delay = 80

net.ipv4.neigh.eth0.proxy_qlen = 64

net.ipv4.neigh.eth0.retrans_time_ms = 1000

net.ipv4.neigh.eth0.ucast_solicit = 3

net.ipv4.neigh.eth0.unres_qlen = 31

net.ipv4.neigh.eth0.unres_qlen_bytes = 65536

net.ipv4.neigh.eth1.anycast_delay = 100

net.ipv4.neigh.eth1.app_solicit = 0

net.ipv4.neigh.eth1.base_reachable_time_ms = 30000

net.ipv4.neigh.eth1.delay_first_probe_time = 5

net.ipv4.neigh.eth1.gc_stale_time = 60

net.ipv4.neigh.eth1.locktime = 100

net.ipv4.neigh.eth1.mcast_solicit = 3

net.ipv4.neigh.eth1.proxy_delay = 80

net.ipv4.neigh.eth1.proxy_qlen = 64

net.ipv4.neigh.eth1.retrans_time_ms = 1000

net.ipv4.neigh.eth1.ucast_solicit = 3

net.ipv4.neigh.eth1.unres_qlen = 31

net.ipv4.neigh.eth1.unres_qlen_bytes = 65536

net.ipv4.neigh.lo.anycast_delay = 100

net.ipv4.neigh.lo.app_solicit = 0

net.ipv4.neigh.lo.base_reachable_time_ms = 30000

net.ipv4.neigh.lo.delay_first_probe_time = 5

net.ipv4.neigh.lo.gc_stale_time = 60

net.ipv4.neigh.lo.locktime = 100

net.ipv4.neigh.lo.mcast_solicit = 3

net.ipv4.neigh.lo.proxy_delay = 80

net.ipv4.neigh.lo.proxy_qlen = 64

net.ipv4.neigh.lo.retrans_time_ms = 1000

net.ipv4.neigh.lo.ucast_solicit = 3

net.ipv4.neigh.lo.unres_qlen = 31

net.ipv4.neigh.lo.unres_qlen_bytes = 65536

net.ipv4.ping_group_range = 1 0

net.ipv4.route.error_burst = 1250

net.ipv4.route.error_cost = 250

net.ipv4.route.gc_elasticity = 8

net.ipv4.route.gc_interval = 60

net.ipv4.route.gc_min_interval = 0

net.ipv4.route.gc_min_interval_ms = 500

net.ipv4.route.gc_thresh = -1

net.ipv4.route.gc_timeout = 300

net.ipv4.route.max_size = 2147483647

net.ipv4.route.min_adv_mss = 256

net.ipv4.route.min_pmtu = 552

net.ipv4.route.mtu_expires = 600

net.ipv4.route.redirect_load = 5

net.ipv4.route.redirect_number = 9

net.ipv4.route.redirect_silence = 5120

net.ipv4.tcp_abort_on_overflow = 0

net.ipv4.tcp_adv_win_scale = 1

net.ipv4.tcp_allowed_congestion_control = cubic reno

net.ipv4.tcp_app_win = 31

net.ipv4.tcp_available_congestion_control = cubic reno

net.ipv4.tcp_base_mss = 512

net.ipv4.tcp_challenge_ack_limit = 100

net.ipv4.tcp_congestion_control = cubic

net.ipv4.tcp_dsack = 1

net.ipv4.tcp_early_retrans = 3

net.ipv4.tcp_ecn = 2

net.ipv4.tcp_fack = 1

net.ipv4.tcp_fastopen = 1

net.ipv4.tcp_fastopen_key = 00000000-00000000-00000000-00000000

net.ipv4.tcp_fin_timeout = 60

net.ipv4.tcp_frto = 2

net.ipv4.tcp_keepalive_intvl = 75

net.ipv4.tcp_keepalive_probes = 9

net.ipv4.tcp_keepalive_time = 7200

net.ipv4.tcp_limit_output_bytes = 131072

net.ipv4.tcp_low_latency = 0

net.ipv4.tcp_max_orphans = 8192

net.ipv4.tcp_max_syn_backlog = 128

net.ipv4.tcp_max_tw_buckets = 8192

net.ipv4.tcp_mem = 47412 63217 94824

net.ipv4.tcp_min_tso_segs = 2

net.ipv4.tcp_moderate_rcvbuf = 1

net.ipv4.tcp_mtu_probing = 0

net.ipv4.tcp_no_metrics_save = 0

net.ipv4.tcp_notsent_lowat = -1

net.ipv4.tcp_orphan_retries = 0

net.ipv4.tcp_reordering = 3

net.ipv4.tcp_retrans_collapse = 1

net.ipv4.tcp_retries1 = 3

net.ipv4.tcp_retries2 = 15

net.ipv4.tcp_rfc1337 = 0

net.ipv4.tcp_rmem = 4096 87380 6291456

net.ipv4.tcp_sack = 1

net.ipv4.tcp_slow_start_after_idle = 1

net.ipv4.tcp_stdurg = 0

net.ipv4.tcp_syn_retries = 6

net.ipv4.tcp_synack_retries = 5

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_thin_dupack = 0

net.ipv4.tcp_thin_linear_timeouts = 0

net.ipv4.tcp_timestamps = 1

net.ipv4.tcp_tso_win_divisor = 3

net.ipv4.tcp_tw_recycle = 0

net.ipv4.tcp_tw_reuse = 0

net.ipv4.tcp_window_scaling = 1

net.ipv4.tcp_wmem = 4096 16384 4194304

net.ipv4.tcp_workaround_signed_windows = 0

net.ipv4.udp_mem = 47412 63217 94824

net.ipv4.udp_rmem_min = 4096

net.ipv4.udp_wmem_min = 4096

net.ipv4.xfrm4_gc_thresh = 32768

net.ipv6.bindv6only = 0

net.ipv6.conf.all.accept_dad = 1

net.ipv6.conf.all.accept_ra = 1

net.ipv6.conf.all.accept_ra_defrtr = 1

net.ipv6.conf.all.accept_ra_pinfo = 1

net.ipv6.conf.all.accept_ra_rt_info_max_plen = 0

net.ipv6.conf.all.accept_ra_rtr_pref = 1

net.ipv6.conf.all.accept_redirects = 1

net.ipv6.conf.all.accept_source_route = 0

net.ipv6.conf.all.autoconf = 1

net.ipv6.conf.all.dad_transmits = 1

net.ipv6.conf.all.disable_ipv6 = 0

net.ipv6.conf.all.force_mld_version = 0

net.ipv6.conf.all.force_tllao = 0

net.ipv6.conf.all.forwarding = 0

net.ipv6.conf.all.hop_limit = 64

net.ipv6.conf.all.max_addresses = 16

net.ipv6.conf.all.max_desync_factor = 600

net.ipv6.conf.all.mc_forwarding = 0

net.ipv6.conf.all.mldv1_unsolicited_report_interval = 10000

net.ipv6.conf.all.mldv2_unsolicited_report_interval = 1000

net.ipv6.conf.all.mtu = 1280

net.ipv6.conf.all.ndisc_notify = 0

net.ipv6.conf.all.proxy_ndp = 0

net.ipv6.conf.all.regen_max_retry = 3

net.ipv6.conf.all.router_probe_interval = 60

net.ipv6.conf.all.router_solicitation_delay = 1

net.ipv6.conf.all.router_solicitation_interval = 4

net.ipv6.conf.all.router_solicitations = 3

net.ipv6.conf.all.suppress_frag_ndisc = 1

net.ipv6.conf.all.temp_prefered_lft = 86400

net.ipv6.conf.all.temp_valid_lft = 604800

net.ipv6.conf.all.use_tempaddr = 2

net.ipv6.conf.default.accept_dad = 1

net.ipv6.conf.default.accept_ra = 1

net.ipv6.conf.default.accept_ra_defrtr = 1

net.ipv6.conf.default.accept_ra_pinfo = 1

net.ipv6.conf.default.accept_ra_rt_info_max_plen = 0

net.ipv6.conf.default.accept_ra_rtr_pref = 1

net.ipv6.conf.default.accept_redirects = 1

net.ipv6.conf.default.accept_source_route = 0

net.ipv6.conf.default.autoconf = 1

net.ipv6.conf.default.dad_transmits = 1

net.ipv6.conf.default.disable_ipv6 = 0

net.ipv6.conf.default.force_mld_version = 0

net.ipv6.conf.default.force_tllao = 0

net.ipv6.conf.default.forwarding = 0

net.ipv6.conf.default.hop_limit = 64

net.ipv6.conf.default.max_addresses = 16

net.ipv6.conf.default.max_desync_factor = 600

net.ipv6.conf.default.mc_forwarding = 0

net.ipv6.conf.default.mldv1_unsolicited_report_interval = 10000

net.ipv6.conf.default.mldv2_unsolicited_report_interval = 1000

net.ipv6.conf.default.mtu = 1280

net.ipv6.conf.default.ndisc_notify = 0

net.ipv6.conf.default.proxy_ndp = 0

net.ipv6.conf.default.regen_max_retry = 3

net.ipv6.conf.default.router_probe_interval = 60

net.ipv6.conf.default.router_solicitation_delay = 1

net.ipv6.conf.default.router_solicitation_interval = 4

net.ipv6.conf.default.router_solicitations = 3

net.ipv6.conf.default.suppress_frag_ndisc = 1

net.ipv6.conf.default.temp_prefered_lft = 86400

net.ipv6.conf.default.temp_valid_lft = 604800

net.ipv6.conf.default.use_tempaddr = 2

net.ipv6.conf.docker0.accept_dad = 1

net.ipv6.conf.docker0.accept_ra = 1

net.ipv6.conf.docker0.accept_ra_defrtr = 1

net.ipv6.conf.docker0.accept_ra_pinfo = 1

net.ipv6.conf.docker0.accept_ra_rt_info_max_plen = 0

net.ipv6.conf.docker0.accept_ra_rtr_pref = 1

net.ipv6.conf.docker0.accept_redirects = 1

net.ipv6.conf.docker0.accept_source_route = 0

net.ipv6.conf.docker0.autoconf = 1

net.ipv6.conf.docker0.dad_transmits = 1

net.ipv6.conf.docker0.disable_ipv6 = 0

net.ipv6.conf.docker0.force_mld_version = 0

net.ipv6.conf.docker0.force_tllao = 0

net.ipv6.conf.docker0.forwarding = 0

net.ipv6.conf.docker0.hop_limit = 64

net.ipv6.conf.docker0.max_addresses = 16

net.ipv6.conf.docker0.max_desync_factor = 600

net.ipv6.conf.docker0.mc_forwarding = 0

net.ipv6.conf.docker0.mldv1_unsolicited_report_interval = 10000

net.ipv6.conf.docker0.mldv2_unsolicited_report_interval = 1000

net.ipv6.conf.docker0.mtu = 1500

net.ipv6.conf.docker0.ndisc_notify = 0

net.ipv6.conf.docker0.proxy_ndp = 0

net.ipv6.conf.docker0.regen_max_retry = 3

net.ipv6.conf.docker0.router_probe_interval = 60

net.ipv6.conf.docker0.router_solicitation_delay = 1

net.ipv6.conf.docker0.router_solicitation_interval = 4

net.ipv6.conf.docker0.router_solicitations = 3

net.ipv6.conf.docker0.suppress_frag_ndisc = 1

net.ipv6.conf.docker0.temp_prefered_lft = 86400

net.ipv6.conf.docker0.temp_valid_lft = 604800

net.ipv6.conf.docker0.use_tempaddr = 2

net.ipv6.conf.eth0.accept_dad = 1

net.ipv6.conf.eth0.accept_ra = 1

net.ipv6.conf.eth0.accept_ra_defrtr = 1

net.ipv6.conf.eth0.accept_ra_pinfo = 1

net.ipv6.conf.eth0.accept_ra_rt_info_max_plen = 0

net.ipv6.conf.eth0.accept_ra_rtr_pref = 1

net.ipv6.conf.eth0.accept_redirects = 1

net.ipv6.conf.eth0.accept_source_route = 0

net.ipv6.conf.eth0.autoconf = 1

net.ipv6.conf.eth0.dad_transmits = 1

net.ipv6.conf.eth0.disable_ipv6 = 0

net.ipv6.conf.eth0.force_mld_version = 0

net.ipv6.conf.eth0.force_tllao = 0

net.ipv6.conf.eth0.forwarding = 0

net.ipv6.conf.eth0.hop_limit = 64

net.ipv6.conf.eth0.max_addresses = 16

net.ipv6.conf.eth0.max_desync_factor = 600

net.ipv6.conf.eth0.mc_forwarding = 0

net.ipv6.conf.eth0.mldv1_unsolicited_report_interval = 10000

net.ipv6.conf.eth0.mldv2_unsolicited_report_interval = 1000

net.ipv6.conf.eth0.mtu = 1500

net.ipv6.conf.eth0.ndisc_notify = 0

net.ipv6.conf.eth0.proxy_ndp = 0

net.ipv6.conf.eth0.regen_max_retry = 3

net.ipv6.conf.eth0.router_probe_interval = 60

net.ipv6.conf.eth0.router_solicitation_delay = 1

net.ipv6.conf.eth0.router_solicitation_interval = 4

net.ipv6.conf.eth0.router_solicitations = 3

net.ipv6.conf.eth0.suppress_frag_ndisc = 1

net.ipv6.conf.eth0.temp_prefered_lft = 86400

net.ipv6.conf.eth0.temp_valid_lft = 604800

net.ipv6.conf.eth0.use_tempaddr = 2

net.ipv6.conf.eth1.accept_dad = 1

net.ipv6.conf.eth1.accept_ra = 1

net.ipv6.conf.eth1.accept_ra_defrtr = 1

net.ipv6.conf.eth1.accept_ra_pinfo = 1

net.ipv6.conf.eth1.accept_ra_rt_info_max_plen = 0

net.ipv6.conf.eth1.accept_ra_rtr_pref = 1

net.ipv6.conf.eth1.accept_redirects = 1

net.ipv6.conf.eth1.accept_source_route = 0

net.ipv6.conf.eth1.autoconf = 1

net.ipv6.conf.eth1.dad_transmits = 1

net.ipv6.conf.eth1.disable_ipv6 = 0

net.ipv6.conf.eth1.force_mld_version = 0

net.ipv6.conf.eth1.force_tllao = 0

net.ipv6.conf.eth1.forwarding = 0

net.ipv6.conf.eth1.hop_limit = 64

net.ipv6.conf.eth1.max_addresses = 16

net.ipv6.conf.eth1.max_desync_factor = 600

net.ipv6.conf.eth1.mc_forwarding = 0

net.ipv6.conf.eth1.mldv1_unsolicited_report_interval = 10000

net.ipv6.conf.eth1.mldv2_unsolicited_report_interval = 1000

net.ipv6.conf.eth1.mtu = 1500

net.ipv6.conf.eth1.ndisc_notify = 0

net.ipv6.conf.eth1.proxy_ndp = 0

net.ipv6.conf.eth1.regen_max_retry = 3

net.ipv6.conf.eth1.router_probe_interval = 60

net.ipv6.conf.eth1.router_solicitation_delay = 1

net.ipv6.conf.eth1.router_solicitation_interval = 4

net.ipv6.conf.eth1.router_solicitations = 3

net.ipv6.conf.eth1.suppress_frag_ndisc = 1

net.ipv6.conf.eth1.temp_prefered_lft = 86400

net.ipv6.conf.eth1.temp_valid_lft = 604800

net.ipv6.conf.eth1.use_tempaddr = 2

net.ipv6.conf.lo.accept_dad = -1

net.ipv6.conf.lo.accept_ra = 1

net.ipv6.conf.lo.accept_ra_defrtr = 1

net.ipv6.conf.lo.accept_ra_pinfo = 1

net.ipv6.conf.lo.accept_ra_rt_info_max_plen = 0

net.ipv6.conf.lo.accept_ra_rtr_pref = 1

net.ipv6.conf.lo.accept_redirects = 1

net.ipv6.conf.lo.accept_source_route = 0

net.ipv6.conf.lo.autoconf = 1

net.ipv6.conf.lo.dad_transmits = 1

net.ipv6.conf.lo.disable_ipv6 = 0

net.ipv6.conf.lo.force_mld_version = 0

net.ipv6.conf.lo.force_tllao = 0

net.ipv6.conf.lo.forwarding = 0

net.ipv6.conf.lo.hop_limit = 64

net.ipv6.conf.lo.max_addresses = 16

net.ipv6.conf.lo.max_desync_factor = 600

net.ipv6.conf.lo.mc_forwarding = 0

net.ipv6.conf.lo.mldv1_unsolicited_report_interval = 10000

net.ipv6.conf.lo.mldv2_unsolicited_report_interval = 1000

net.ipv6.conf.lo.mtu = 65536

net.ipv6.conf.lo.ndisc_notify = 0

net.ipv6.conf.lo.proxy_ndp = 0

net.ipv6.conf.lo.regen_max_retry = 3

net.ipv6.conf.lo.router_probe_interval = 60

net.ipv6.conf.lo.router_solicitation_delay = 1

net.ipv6.conf.lo.router_solicitation_interval = 4

net.ipv6.conf.lo.router_solicitations = 3

net.ipv6.conf.lo.suppress_frag_ndisc = 1

net.ipv6.conf.lo.temp_prefered_lft = 86400

net.ipv6.conf.lo.temp_valid_lft = 604800

net.ipv6.conf.lo.use_tempaddr = 2

net.ipv6.icmp.ratelimit = 1000

net.ipv6.ip6frag_high_thresh = 4194304

net.ipv6.ip6frag_low_thresh = 3145728

net.ipv6.ip6frag_secret_interval = 600

net.ipv6.ip6frag_time = 60

net.ipv6.mld_max_msf = 64

net.ipv6.neigh.default.anycast_delay = 100

net.ipv6.neigh.default.app_solicit = 0

net.ipv6.neigh.default.base_reachable_time_ms = 30000

net.ipv6.neigh.default.delay_first_probe_time = 5

net.ipv6.neigh.default.gc_interval = 30

net.ipv6.neigh.default.gc_stale_time = 60

net.ipv6.neigh.default.gc_thresh1 = 128

net.ipv6.neigh.default.gc_thresh2 = 512

net.ipv6.neigh.default.gc_thresh3 = 1024

net.ipv6.neigh.default.locktime = 0

net.ipv6.neigh.default.mcast_solicit = 3

net.ipv6.neigh.default.proxy_delay = 80

net.ipv6.neigh.default.proxy_qlen = 64

net.ipv6.neigh.default.retrans_time_ms = 1000

net.ipv6.neigh.default.ucast_solicit = 3

net.ipv6.neigh.default.unres_qlen = 31

net.ipv6.neigh.default.unres_qlen_bytes = 65536

net.ipv6.neigh.docker0.anycast_delay = 100

net.ipv6.neigh.docker0.app_solicit = 0

net.ipv6.neigh.docker0.base_reachable_time_ms = 30000

net.ipv6.neigh.docker0.delay_first_probe_time = 5

net.ipv6.neigh.docker0.gc_stale_time = 60

net.ipv6.neigh.docker0.locktime = 0

net.ipv6.neigh.docker0.mcast_solicit = 3

net.ipv6.neigh.docker0.proxy_delay = 80

net.ipv6.neigh.docker0.proxy_qlen = 64

net.ipv6.neigh.docker0.retrans_time_ms = 1000

net.ipv6.neigh.docker0.ucast_solicit = 3

net.ipv6.neigh.docker0.unres_qlen = 31

net.ipv6.neigh.docker0.unres_qlen_bytes = 65536

net.ipv6.neigh.eth0.anycast_delay = 100

net.ipv6.neigh.eth0.app_solicit = 0

net.ipv6.neigh.eth0.base_reachable_time_ms = 30000

net.ipv6.neigh.eth0.delay_first_probe_time = 5

net.ipv6.neigh.eth0.gc_stale_time = 60

net.ipv6.neigh.eth0.locktime = 0

net.ipv6.neigh.eth0.mcast_solicit = 3

net.ipv6.neigh.eth0.proxy_delay = 80

net.ipv6.neigh.eth0.proxy_qlen = 64

net.ipv6.neigh.eth0.retrans_time_ms = 1000

net.ipv6.neigh.eth0.ucast_solicit = 3

net.ipv6.neigh.eth0.unres_qlen = 31

net.ipv6.neigh.eth0.unres_qlen_bytes = 65536

net.ipv6.neigh.eth1.anycast_delay = 100

net.ipv6.neigh.eth1.app_solicit = 0

net.ipv6.neigh.eth1.base_reachable_time_ms = 30000

net.ipv6.neigh.eth1.delay_first_probe_time = 5

net.ipv6.neigh.eth1.gc_stale_time = 60

net.ipv6.neigh.eth1.locktime = 0

net.ipv6.neigh.eth1.mcast_solicit = 3

net.ipv6.neigh.eth1.proxy_delay = 80

net.ipv6.neigh.eth1.proxy_qlen = 64

net.ipv6.neigh.eth1.retrans_time_ms = 1000

net.ipv6.neigh.eth1.ucast_solicit = 3

net.ipv6.neigh.eth1.unres_qlen = 31

net.ipv6.neigh.eth1.unres_qlen_bytes = 65536

net.ipv6.neigh.lo.anycast_delay = 100

net.ipv6.neigh.lo.app_solicit = 0

net.ipv6.neigh.lo.base_reachable_time_ms = 30000

net.ipv6.neigh.lo.delay_first_probe_time = 5

net.ipv6.neigh.lo.gc_stale_time = 60

net.ipv6.neigh.lo.locktime = 0

net.ipv6.neigh.lo.mcast_solicit = 3

net.ipv6.neigh.lo.proxy_delay = 80

net.ipv6.neigh.lo.proxy_qlen = 64

net.ipv6.neigh.lo.retrans_time_ms = 1000

net.ipv6.neigh.lo.ucast_solicit = 3

net.ipv6.neigh.lo.unres_qlen = 31

net.ipv6.neigh.lo.unres_qlen_bytes = 65536

net.ipv6.route.gc_elasticity = 9

net.ipv6.route.gc_interval = 30

net.ipv6.route.gc_min_interval = 0

net.ipv6.route.gc_min_interval_ms = 500

net.ipv6.route.gc_thresh = 1024

net.ipv6.route.gc_timeout = 60

net.ipv6.route.max_size = 4096

net.ipv6.route.min_adv_mss = 1220

net.ipv6.route.mtu_expires = 600

net.ipv6.xfrm6_gc_thresh = 32768

net.netfilter.nf_conntrack_acct = 0

net.netfilter.nf_conntrack_buckets = 16384

net.netfilter.nf_conntrack_checksum = 1

net.netfilter.nf_conntrack_count = 7

net.netfilter.nf_conntrack_events = 1

net.netfilter.nf_conntrack_events_retry_timeout = 15

net.netfilter.nf_conntrack_expect_max = 256

net.netfilter.nf_conntrack_generic_timeout = 600

net.netfilter.nf_conntrack_helper = 1

net.netfilter.nf_conntrack_icmp_timeout = 30

net.netfilter.nf_conntrack_log_invalid = 0

net.netfilter.nf_conntrack_max = 65536

net.netfilter.nf_conntrack_tcp_be_liberal = 0

net.netfilter.nf_conntrack_tcp_loose = 1

net.netfilter.nf_conntrack_tcp_max_retrans = 3

net.netfilter.nf_conntrack_tcp_timeout_close = 10

net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60

net.netfilter.nf_conntrack_tcp_timeout_established = 432000

net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120

net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30

net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300

net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60

net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120

net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120

net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300

net.netfilter.nf_conntrack_timestamp = 0

net.netfilter.nf_conntrack_udp_timeout = 30

net.netfilter.nf_conntrack_udp_timeout_stream = 180

net.netfilter.nf_log.0 = NONE

net.netfilter.nf_log.1 = NONE

net.netfilter.nf_log.10 = NONE

net.netfilter.nf_log.11 = NONE

net.netfilter.nf_log.12 = NONE

net.netfilter.nf_log.2 = nfnetlink_log

net.netfilter.nf_log.3 = NONE

net.netfilter.nf_log.4 = NONE

net.netfilter.nf_log.5 = NONE

net.netfilter.nf_log.6 = NONE

net.netfilter.nf_log.7 = NONE

net.netfilter.nf_log.8 = NONE

net.netfilter.nf_log.9 = NONE

net.nf_conntrack_max = 65536

net.unix.max_dgram_qlen = 10




On Thursday, January 21, 2016 at 3:17:34 UTC+3, Michael Klishin wrote:
vm-dump.pcap
Vagrantfile
rabbitmq.conf

Michael Klishin

Jan 21, 2016, 9:48:56 AM
to rabbitm...@googlegroups.com, Vitaly Aminev
On 21 January 2016 at 17:06:12, Vitaly Aminev (lath...@gmail.com) wrote:
> Included dump with the bench performed on VirtualBox (provisioned
> by vagrant, conf file attached, OS - ubuntu 14.04) with rabbitmq
> 3.6.0 installed and node.js 5.5.0 for running the bench

In this dump, the latency between basic.publish frames sent by the client and a TCP ACK sent
back fluctuates between < 1 ms and 39 ms, with surprisingly few values in between.
This can be due to client throttling by what we call internal flow control, which exists to prevent
the unbounded buffers problem.

Internal flow control has been discussed
on this list in the past but here's a brief version: when a single process in the chain

[socket -> ] connection/parser -> channel -> queue -> consumer channel [-> consumer socket]

cannot keep up with its upstream, it asks it to not send anything for a moment. That leads to RabbitMQ
not reading from the publisher's socket for some time, leading to variability in message rates (primarily
throughput but also latency).
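One way to check whether that throttling is actually kicking in during a run is to list connection and channel states while the bench is running; anything currently being throttled is reported in the `flow` state:

```
rabbitmqctl list_connections name state
rabbitmqctl list_channels connection state
```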

With "lazy queues" (the name isn't particularly descriptive), internal flow control kicks in less frequently
and the variability is lower as a result.

However, all of this should be just as true for bare metal.
 So, what does a bare metal environment capture look like?

Vitaly Aminev

Jan 21, 2016, 10:57:33 AM
to rabbitmq-users, lath...@gmail.com
Localhost dump attached and sysctl along with it



On Thursday, January 21, 2016 at 17:48:56 UTC+3, Michael Klishin wrote:
barebone-dump.pcap.gz

Vitaly Aminev

Jan 21, 2016, 1:56:20 PM
to rabbitmq-users
Also tried with lazy queues; it makes no difference.

On Thursday, January 21, 2016 at 18:57:33 UTC+3, Vitaly Aminev wrote:

Michael Klishin

Jan 21, 2016, 2:01:19 PM
to rabbitm...@googlegroups.com, Vitaly Aminev
On 21 January 2016 at 18:57:40, Vitaly Aminev (lath...@gmail.com) wrote:
> Localhost dump attached and sysctl along with it

By localhost do you mean a physical machine?

Vitaly Aminev

Jan 21, 2016, 2:29:36 PM
to Michael Klishin, rabbitm...@googlegroups.com
Yes, I mean this is my physical machine.