Understanding metrics


Daniel Hobson

Sep 21, 2017, 10:08:42 AM
to etcd-dev
Hi all,

We're implementing an etcd cluster and we're seeing an unhealthy state more often than we'd like, so after some reading we moved the data dir to a RAM disk (after taking the risk into account). Things were fine for 5 days, but now we've had another couple of unhealthy states.

Previously (before the RAM disk) I was reading up on fsync durations and their knock-on effects on heartbeats and cluster state. I couldn't find any good information to explain it though, so I'm putting it to you good people here.

Here is the fsync duration histogram from one of our etcd nodes:

etcd_disk_wal_fsync_duration_seconds_bucket{le="0.001"} 438339
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.002"} 438340
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.004"} 438341
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.008"} 438341
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.016"} 438341
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.032"} 438341
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.064"} 438341
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.128"} 438341
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.256"} 438341
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.512"} 438341
etcd_disk_wal_fsync_duration_seconds_bucket{le="1.024"} 438341
etcd_disk_wal_fsync_duration_seconds_bucket{le="2.048"} 438341
etcd_disk_wal_fsync_duration_seconds_bucket{le="4.096"} 438341
etcd_disk_wal_fsync_duration_seconds_bucket{le="8.192"} 438341
etcd_disk_wal_fsync_duration_seconds_bucket{le="+Inf"} 438341
etcd_disk_wal_fsync_duration_seconds_sum 1.3414911559999816
etcd_disk_wal_fsync_duration_seconds_count 438341

Is there something here we should be looking at specifically? All of the values are much higher than they were at the start; is this to be expected? What else should I be looking at?

Are there any links to good documentation on what each stat means?

Thanks, Daniel.

Gyu-Ho Lee

Sep 21, 2017, 8:02:17 PM
to etcd-dev

Daniel Hobson

Sep 22, 2017, 7:57:36 AM
to etcd-dev
Hi,

Thank you very much, that was really helpful.


I installed Prometheus and Grafana and noticed I'd missed the RAM disk on one of the nodes, which was causing some spikes in disk usage. I corrected that and the results (according to the Grafana legend) look much better:



This is for etcd_disk_wal_fsync_duration_seconds_bucket and etcd_disk_backend_commit_duration_seconds_bucket.


If I get the metrics directly from each etcd instance, though, it doesn't look quite the same. Am I misreading something?

These are from the *ps01 node as a reference for what I mean:

etcd_disk_wal_fsync_duration_seconds_bucket{le="0.001"} 599311
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.002"} 599313
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.004"} 599314
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.008"} 599315
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.016"} 599315
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.032"} 599315
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.064"} 599315
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.128"} 599315
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.256"} 599315
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.512"} 599315
etcd_disk_wal_fsync_duration_seconds_bucket{le="1.024"} 599315
etcd_disk_wal_fsync_duration_seconds_bucket{le="2.048"} 599315
etcd_disk_wal_fsync_duration_seconds_bucket{le="4.096"} 599315
etcd_disk_wal_fsync_duration_seconds_bucket{le="8.192"} 599315
etcd_disk_wal_fsync_duration_seconds_bucket{le="+Inf"} 599315
etcd_disk_wal_fsync_duration_seconds_sum 1.8174891619999545
etcd_disk_wal_fsync_duration_seconds_count 599315

etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.01"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.02"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.04"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.08"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.16"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.32"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.64"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="1.28"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="2.56"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="5.12"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="10.24"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="20.48"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="40.96"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="81.92"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="163.84"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="327.68"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="655.36"} 1
etcd_disk_backend_snapshot_duration_seconds_bucket{le="+Inf"} 1
etcd_disk_backend_snapshot_duration_seconds_sum 0.018794481
etcd_disk_backend_snapshot_duration_seconds_count 1

Is etcd_disk_wal_fsync_duration_seconds_sum definitely showing 1.8 seconds and not using a different time scale (e.g. milliseconds)? I know it says seconds in the metric name, but it doesn't make sense to me.

Thanks, Daniel.

Gyu-Ho Lee

Sep 22, 2017, 8:35:12 PM
to etcd-dev
It's in seconds, as far as I know. The metrics show fsyncs taking >8 secs,
which is unusual. Are you using an HDD?

Daniel Hobson

Sep 22, 2017, 9:55:41 PM
to etcd-dev
No, I've got a RAM disk set up on each node which --data-dir is pointing at (no separate --wal-dir), so I would expect everything to be very quick. The 990 microsecond write times in the screenshot make sense to me based on this.

Is there another metric I can check?

Where do you get the 8 seconds from? If the fsyncs were taking longer than 8 seconds I would definitely expect warnings in the logs, and I don't see any.

Just for info, the other non-default flags I'm using are: --snapshot-count 5000, --max-snapshots 3, --max-wals 3.

Thanks, Daniel.

Gyu-Ho Lee

Sep 25, 2017, 11:56:05 AM
to etcd-dev
If etcd becomes slow, you should see warnings from the etcdserver apply routine.

The metrics above show that most are taking >8 seconds.
See https://prometheus.io/docs/practices/histograms for how to interpret Prometheus metrics.
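
For example (a rough sketch, assuming Prometheus is scraping the members with the default labels), a quantile query over the fsync histogram looks something like:

# approximate 99th-percentile WAL fsync latency per instance, over the last 5 minutes
histogram_quantile(0.99, rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m]))

If that stays in the sub-millisecond range you'd expect from a RAM disk, the disk is probably not the cause of the unhealthy states.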

Can you share the full output of /metrics?

Daniel Hobson

Sep 25, 2017, 12:11:08 PM
to etcd-dev

I restarted the nodes over the weekend whilst I was doing something else, so the metrics aren't the same as last week. However, here's the current output from the same node; it's been up for 44 hours:


Whilst I remember: the etcd nodes are running in Docker containers. I know that shouldn't make a difference, but maybe there's a side effect.


# HELP etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds Bucketed histogram of db compaction pause duration.
# TYPE etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds histogram
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="1"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="2"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="4"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="8"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="16"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="32"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="64"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="128"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="256"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="512"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="1024"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="2048"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="4096"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_bucket{le="+Inf"} 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_sum 0
etcd_debugging_mvcc_db_compaction_pause_duration_milliseconds_count 0
# HELP etcd_debugging_mvcc_db_compaction_total_duration_milliseconds Bucketed histogram of db compaction total duration.
# TYPE etcd_debugging_mvcc_db_compaction_total_duration_milliseconds histogram
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="100"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="200"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="400"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="800"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="1600"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="3200"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="6400"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="12800"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="25600"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="51200"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="102400"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="204800"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="409600"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="819200"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_bucket{le="+Inf"} 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_sum 0
etcd_debugging_mvcc_db_compaction_total_duration_milliseconds_count 0
# HELP etcd_debugging_mvcc_db_total_size_in_bytes Total size of the underlying database in bytes.
# TYPE etcd_debugging_mvcc_db_total_size_in_bytes gauge
etcd_debugging_mvcc_db_total_size_in_bytes 24576
# HELP etcd_debugging_mvcc_delete_total Total number of deletes seen by this member.
# TYPE etcd_debugging_mvcc_delete_total counter
etcd_debugging_mvcc_delete_total 0
# HELP etcd_debugging_mvcc_events_total Total number of events sent by this member.
# TYPE etcd_debugging_mvcc_events_total counter
etcd_debugging_mvcc_events_total 0
# HELP etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds Bucketed histogram of index compaction pause duration.
# TYPE etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds histogram
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="0.5"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="1"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="2"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="4"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="8"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="16"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="32"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="64"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="128"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="256"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="512"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="1024"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_bucket{le="+Inf"} 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_sum 0
etcd_debugging_mvcc_index_compaction_pause_duration_milliseconds_count 0
# HELP etcd_debugging_mvcc_keys_total Total number of keys.
# TYPE etcd_debugging_mvcc_keys_total gauge
etcd_debugging_mvcc_keys_total 0
# HELP etcd_debugging_mvcc_pending_events_total Total number of pending events to be sent.
# TYPE etcd_debugging_mvcc_pending_events_total gauge
etcd_debugging_mvcc_pending_events_total 0
# HELP etcd_debugging_mvcc_put_total Total number of puts seen by this member.
# TYPE etcd_debugging_mvcc_put_total counter
etcd_debugging_mvcc_put_total 0
# HELP etcd_debugging_mvcc_range_total Total number of ranges seen by this member.
# TYPE etcd_debugging_mvcc_range_total counter
etcd_debugging_mvcc_range_total 0
# HELP etcd_debugging_mvcc_slow_watcher_total Total number of unsynced slow watchers.
# TYPE etcd_debugging_mvcc_slow_watcher_total gauge
etcd_debugging_mvcc_slow_watcher_total 0
# HELP etcd_debugging_mvcc_txn_total Total number of txns seen by this member.
# TYPE etcd_debugging_mvcc_txn_total counter
etcd_debugging_mvcc_txn_total 0
# HELP etcd_debugging_mvcc_watch_stream_total Total number of watch streams.
# TYPE etcd_debugging_mvcc_watch_stream_total gauge
etcd_debugging_mvcc_watch_stream_total 0
# HELP etcd_debugging_mvcc_watcher_total Total number of watchers.
# TYPE etcd_debugging_mvcc_watcher_total gauge
etcd_debugging_mvcc_watcher_total 0
# HELP etcd_debugging_server_lease_expired_total The total number of expired leases.
# TYPE etcd_debugging_server_lease_expired_total counter
etcd_debugging_server_lease_expired_total 0
# HELP etcd_debugging_snap_save_marshalling_duration_seconds The marshalling cost distributions of save called by snapshot.
# TYPE etcd_debugging_snap_save_marshalling_duration_seconds histogram
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.001"} 14
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.002"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.004"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.008"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.016"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.032"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.064"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.128"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.256"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="0.512"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="1.024"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="2.048"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="4.096"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="8.192"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_bucket{le="+Inf"} 23
etcd_debugging_snap_save_marshalling_duration_seconds_sum 0.021160555
etcd_debugging_snap_save_marshalling_duration_seconds_count 23
# HELP etcd_debugging_snap_save_total_duration_seconds The total latency distributions of save called by snapshot.
# TYPE etcd_debugging_snap_save_total_duration_seconds histogram
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.001"} 2
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.002"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.004"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.008"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.016"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.032"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.064"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.128"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.256"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="0.512"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="1.024"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="2.048"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="4.096"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="8.192"} 23
etcd_debugging_snap_save_total_duration_seconds_bucket{le="+Inf"} 23
etcd_debugging_snap_save_total_duration_seconds_sum 0.028607003
etcd_debugging_snap_save_total_duration_seconds_count 23
# HELP etcd_debugging_store_expires_total Total number of expired keys.
# TYPE etcd_debugging_store_expires_total counter
etcd_debugging_store_expires_total 0
# HELP etcd_debugging_store_reads_total Total number of reads action by (get/getRecursive), local to this member.
# TYPE etcd_debugging_store_reads_total counter
etcd_debugging_store_reads_total{action="get"} 166070
etcd_debugging_store_reads_total{action="getRecursive"} 2
# HELP etcd_debugging_store_watch_requests_total Total number of incoming watch requests (new or reestablished).
# TYPE etcd_debugging_store_watch_requests_total counter
etcd_debugging_store_watch_requests_total 0
# HELP etcd_debugging_store_watchers Count of currently active watchers.
# TYPE etcd_debugging_store_watchers gauge
etcd_debugging_store_watchers 0
# HELP etcd_debugging_store_writes_total Total number of writes (e.g. set/compareAndDelete) seen by this member.
# TYPE etcd_debugging_store_writes_total counter
etcd_debugging_store_writes_total{action="set"} 56676
# HELP etcd_disk_backend_commit_duration_seconds The latency distributions of commit called by backend.
# TYPE etcd_disk_backend_commit_duration_seconds histogram
etcd_disk_backend_commit_duration_seconds_bucket{le="0.001"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="0.002"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="0.004"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="0.008"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="0.016"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="0.032"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="0.064"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="0.128"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="0.256"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="0.512"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="1.024"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="2.048"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="4.096"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="8.192"} 31
etcd_disk_backend_commit_duration_seconds_bucket{le="+Inf"} 31
etcd_disk_backend_commit_duration_seconds_sum 0.00376729
etcd_disk_backend_commit_duration_seconds_count 31
# HELP etcd_disk_backend_snapshot_duration_seconds The latency distribution of backend snapshots.
# TYPE etcd_disk_backend_snapshot_duration_seconds histogram
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.01"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.02"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.04"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.08"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.16"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.32"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="0.64"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="1.28"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="2.56"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="5.12"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="10.24"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="20.48"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="40.96"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="81.92"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="163.84"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="327.68"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="655.36"} 0
etcd_disk_backend_snapshot_duration_seconds_bucket{le="+Inf"} 0
etcd_disk_backend_snapshot_duration_seconds_sum 0
etcd_disk_backend_snapshot_duration_seconds_count 0
# HELP etcd_disk_wal_fsync_duration_seconds The latency distributions of fsync called by wal.
# TYPE etcd_disk_wal_fsync_duration_seconds histogram
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.001"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.002"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.004"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.008"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.016"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.032"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.064"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.128"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.256"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="0.512"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="1.024"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="2.048"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="4.096"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="8.192"} 110807
etcd_disk_wal_fsync_duration_seconds_bucket{le="+Inf"} 110807
etcd_disk_wal_fsync_duration_seconds_sum 0.35926996800000155
etcd_disk_wal_fsync_duration_seconds_count 110807
# HELP etcd_grpc_proxy_cache_hits_total Total number of cache hits
# TYPE etcd_grpc_proxy_cache_hits_total gauge
etcd_grpc_proxy_cache_hits_total 0
# HELP etcd_grpc_proxy_cache_keys_total Total number of keys/ranges cached
# TYPE etcd_grpc_proxy_cache_keys_total gauge
etcd_grpc_proxy_cache_keys_total 0
# HELP etcd_grpc_proxy_cache_misses_total Total number of cache misses
# TYPE etcd_grpc_proxy_cache_misses_total gauge
etcd_grpc_proxy_cache_misses_total 0
# HELP etcd_grpc_proxy_events_coalescing_total Total number of events coalescing
# TYPE etcd_grpc_proxy_events_coalescing_total counter
etcd_grpc_proxy_events_coalescing_total 0
# HELP etcd_grpc_proxy_watchers_coalescing_total Total number of current watchers coalescing
# TYPE etcd_grpc_proxy_watchers_coalescing_total gauge
etcd_grpc_proxy_watchers_coalescing_total 0
# HELP etcd_http_received_total Counter of requests received into the system (successfully parsed and authd).
# TYPE etcd_http_received_total counter
etcd_http_received_total{method="GET"} 46856
etcd_http_received_total{method="PUT"} 13349
# HELP etcd_http_successful_duration_seconds Bucketed histogram of processing time (s) of successfully handled requests (non-watches), by method (GET/PUT etc.).
# TYPE etcd_http_successful_duration_seconds histogram
etcd_http_successful_duration_seconds_bucket{method="GET",le="0.0005"} 46838
etcd_http_successful_duration_seconds_bucket{method="GET",le="0.001"} 46853
etcd_http_successful_duration_seconds_bucket{method="GET",le="0.002"} 46855
etcd_http_successful_duration_seconds_bucket{method="GET",le="0.004"} 46856
etcd_http_successful_duration_seconds_bucket{method="GET",le="0.008"} 46856
etcd_http_successful_duration_seconds_bucket{method="GET",le="0.016"} 46856
etcd_http_successful_duration_seconds_bucket{method="GET",le="0.032"} 46856
etcd_http_successful_duration_seconds_bucket{method="GET",le="0.064"} 46856
etcd_http_successful_duration_seconds_bucket{method="GET",le="0.128"} 46856
etcd_http_successful_duration_seconds_bucket{method="GET",le="0.256"} 46856
etcd_http_successful_duration_seconds_bucket{method="GET",le="0.512"} 46856
etcd_http_successful_duration_seconds_bucket{method="GET",le="1.024"} 46856
etcd_http_successful_duration_seconds_bucket{method="GET",le="2.048"} 46856
etcd_http_successful_duration_seconds_bucket{method="GET",le="+Inf"} 46856
etcd_http_successful_duration_seconds_sum{method="GET"} 6.582268083999954
etcd_http_successful_duration_seconds_count{method="GET"} 46856
etcd_http_successful_duration_seconds_bucket{method="PUT",le="0.0005"} 0
etcd_http_successful_duration_seconds_bucket{method="PUT",le="0.001"} 0
etcd_http_successful_duration_seconds_bucket{method="PUT",le="0.002"} 0
etcd_http_successful_duration_seconds_bucket{method="PUT",le="0.004"} 11520
etcd_http_successful_duration_seconds_bucket{method="PUT",le="0.008"} 13285
etcd_http_successful_duration_seconds_bucket{method="PUT",le="0.016"} 13342
etcd_http_successful_duration_seconds_bucket{method="PUT",le="0.032"} 13346
etcd_http_successful_duration_seconds_bucket{method="PUT",le="0.064"} 13348
etcd_http_successful_duration_seconds_bucket{method="PUT",le="0.128"} 13349
etcd_http_successful_duration_seconds_bucket{method="PUT",le="0.256"} 13349
etcd_http_successful_duration_seconds_bucket{method="PUT",le="0.512"} 13349
etcd_http_successful_duration_seconds_bucket{method="PUT",le="1.024"} 13349
etcd_http_successful_duration_seconds_bucket{method="PUT",le="2.048"} 13349
etcd_http_successful_duration_seconds_bucket{method="PUT",le="+Inf"} 13349
etcd_http_successful_duration_seconds_sum{method="PUT"} 46.16130998500009
etcd_http_successful_duration_seconds_count{method="PUT"} 13349
# HELP etcd_network_client_grpc_received_bytes_total The total number of bytes received from grpc clients.
# TYPE etcd_network_client_grpc_received_bytes_total counter
etcd_network_client_grpc_received_bytes_total 0
# HELP etcd_network_client_grpc_sent_bytes_total The total number of bytes sent to grpc clients.
# TYPE etcd_network_client_grpc_sent_bytes_total counter
etcd_network_client_grpc_sent_bytes_total 0
# HELP etcd_network_peer_received_bytes_total The total number of bytes received from peers.
# TYPE etcd_network_peer_received_bytes_total counter
etcd_network_peer_received_bytes_total{From="0"} 2.0681248e+07
etcd_network_peer_received_bytes_total{From="a0d8d423aeef2a42"} 9.8604068e+07
# HELP etcd_network_peer_round_trip_time_seconds Round-Trip-Time histogram between peers.
# TYPE etcd_network_peer_round_trip_time_seconds histogram
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.0001"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.0002"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.0004"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.0008"} 1
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.0016"} 1
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.0032"} 5146
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.0064"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.0128"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.0256"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.0512"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.1024"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.2048"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.4096"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="0.8192"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="2e8891b6735817f0",le="+Inf"} 5286
etcd_network_peer_round_trip_time_seconds_sum{To="2e8891b6735817f0"} 13.386258313000003
etcd_network_peer_round_trip_time_seconds_count{To="2e8891b6735817f0"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.0001"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.0002"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.0004"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.0008"} 1
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.0016"} 1
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.0032"} 6671
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.0064"} 6840
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.0128"} 6844
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.0256"} 6846
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.0512"} 6847
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.1024"} 6849
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.2048"} 6849
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.4096"} 6850
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="0.8192"} 6850
etcd_network_peer_round_trip_time_seconds_bucket{To="3403afc2347ac500",le="+Inf"} 6850
etcd_network_peer_round_trip_time_seconds_sum{To="3403afc2347ac500"} 19.83450646899939
etcd_network_peer_round_trip_time_seconds_count{To="3403afc2347ac500"} 6850
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.0001"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.0002"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.0004"} 1
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.0008"} 1
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.0016"} 5816
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.0032"} 6845
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.0064"} 6846
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.0128"} 6847
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.0256"} 6847
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.0512"} 6848
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.1024"} 6849
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.2048"} 6849
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.4096"} 6849
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="0.8192"} 6849
etcd_network_peer_round_trip_time_seconds_bucket{To="9f754b5f4c3c3d3",le="+Inf"} 6849
etcd_network_peer_round_trip_time_seconds_sum{To="9f754b5f4c3c3d3"} 10.590761979000366
etcd_network_peer_round_trip_time_seconds_count{To="9f754b5f4c3c3d3"} 6849
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.0001"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.0002"} 0
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.0004"} 1
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.0008"} 1
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.0016"} 5176
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.0032"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.0064"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.0128"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.0256"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.0512"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.1024"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.2048"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.4096"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="0.8192"} 5286
etcd_network_peer_round_trip_time_seconds_bucket{To="a0d8d423aeef2a42",le="+Inf"} 5286
etcd_network_peer_round_trip_time_seconds_sum{To="a0d8d423aeef2a42"} 6.994645712999991
etcd_network_peer_round_trip_time_seconds_count{To="a0d8d423aeef2a42"} 5286
# HELP etcd_network_peer_sent_bytes_total The total number of bytes sent to peers.
# TYPE etcd_network_peer_sent_bytes_total counter
etcd_network_peer_sent_bytes_total{To="2e8891b6735817f0"} 5.328344e+06
etcd_network_peer_sent_bytes_total{To="3403afc2347ac500"} 5.012252e+06
etcd_network_peer_sent_bytes_total{To="9f754b5f4c3c3d3"} 5.044256e+06
etcd_network_peer_sent_bytes_total{To="a0d8d423aeef2a42"} 9.4030861e+07
# HELP etcd_network_peer_sent_failures_total The total number of send failures from peers.
# TYPE etcd_network_peer_sent_failures_total counter
etcd_network_peer_sent_failures_total{To="3403afc2347ac500"} 2
etcd_network_peer_sent_failures_total{To="9f754b5f4c3c3d3"} 2
# HELP etcd_server_has_leader Whether or not a leader exists. 1 is existence, 0 is not.
# TYPE etcd_server_has_leader gauge
etcd_server_has_leader 1
# HELP etcd_server_leader_changes_seen_total The number of leader changes seen.
# TYPE etcd_server_leader_changes_seen_total counter
etcd_server_leader_changes_seen_total 1
# HELP etcd_server_proposals_applied_total The total number of consensus proposals applied.
# TYPE etcd_server_proposals_applied_total gauge
etcd_server_proposals_applied_total 875838
# HELP etcd_server_proposals_committed_total The total number of consensus proposals committed.
# TYPE etcd_server_proposals_committed_total gauge
etcd_server_proposals_committed_total 875838
# HELP etcd_server_proposals_failed_total The total number of failed proposals seen.
# TYPE etcd_server_proposals_failed_total counter
etcd_server_proposals_failed_total 0
# HELP etcd_server_proposals_pending The current number of pending proposals to commit.
# TYPE etcd_server_proposals_pending gauge
etcd_server_proposals_pending 0
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.00019151
go_gc_duration_seconds{quantile="0.25"} 0.000343593
go_gc_duration_seconds{quantile="0.5"} 0.000422385
go_gc_duration_seconds{quantile="0.75"} 0.0005438
go_gc_duration_seconds{quantile="1"} 0.013083844
go_gc_duration_seconds_sum 0.875796263
go_gc_duration_seconds_count 1403
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 190
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
go_memstats_alloc_bytes 4.8361288e+07
# HELP go_memstats_alloc_bytes_total Total number of bytes allocated, even if freed.
# TYPE go_memstats_alloc_bytes_total counter
go_memstats_alloc_bytes_total 2.3885115832e+10
# HELP go_memstats_buck_hash_sys_bytes Number of bytes used by the profiling bucket hash table.
# TYPE go_memstats_buck_hash_sys_bytes gauge
go_memstats_buck_hash_sys_bytes 1.615508e+06
# HELP go_memstats_frees_total Total number of frees.
# TYPE go_memstats_frees_total counter
go_memstats_frees_total 1.55181906e+08
# HELP go_memstats_gc_sys_bytes Number of bytes used for garbage collection system metadata.
# TYPE go_memstats_gc_sys_bytes gauge
go_memstats_gc_sys_bytes 2.94912e+06
# HELP go_memstats_heap_alloc_bytes Number of heap bytes allocated and still in use.
# TYPE go_memstats_heap_alloc_bytes gauge
go_memstats_heap_alloc_bytes 4.8361288e+07
# HELP go_memstats_heap_idle_bytes Number of heap bytes waiting to be used.
# TYPE go_memstats_heap_idle_bytes gauge
go_memstats_heap_idle_bytes 2.6001408e+07
# HELP go_memstats_heap_inuse_bytes Number of heap bytes that are in use.
# TYPE go_memstats_heap_inuse_bytes gauge
go_memstats_heap_inuse_bytes 5.0184192e+07
# HELP go_memstats_heap_objects Number of allocated objects.
# TYPE go_memstats_heap_objects gauge
go_memstats_heap_objects 109718
# HELP go_memstats_heap_released_bytes_total Total number of heap bytes released to OS.
# TYPE go_memstats_heap_released_bytes_total counter
go_memstats_heap_released_bytes_total 2.3371776e+07
# HELP go_memstats_heap_sys_bytes Number of heap bytes obtained from system.
# TYPE go_memstats_heap_sys_bytes gauge
go_memstats_heap_sys_bytes 7.61856e+07
# HELP go_memstats_last_gc_time_seconds Number of seconds since 1970 of last garbage collection.
# TYPE go_memstats_last_gc_time_seconds gauge
go_memstats_last_gc_time_seconds 1.5063553154344215e+09
# HELP go_memstats_lookups_total Total number of pointer lookups.
# TYPE go_memstats_lookups_total counter
go_memstats_lookups_total 764189
# HELP go_memstats_mallocs_total Total number of mallocs.
# TYPE go_memstats_mallocs_total counter
go_memstats_mallocs_total 1.55291624e+08
# HELP go_memstats_mcache_inuse_bytes Number of bytes in use by mcache structures.
# TYPE go_memstats_mcache_inuse_bytes gauge
go_memstats_mcache_inuse_bytes 6000
# HELP go_memstats_mcache_sys_bytes Number of bytes used for mcache structures obtained from system.
# TYPE go_memstats_mcache_sys_bytes gauge
go_memstats_mcache_sys_bytes 16384
# HELP go_memstats_mspan_inuse_bytes Number of bytes in use by mspan structures.
# TYPE go_memstats_mspan_inuse_bytes gauge
go_memstats_mspan_inuse_bytes 245328
# HELP go_memstats_mspan_sys_bytes Number of bytes used for mspan structures obtained from system.
# TYPE go_memstats_mspan_sys_bytes gauge
go_memstats_mspan_sys_bytes 507904
# HELP go_memstats_next_gc_bytes Number of heap bytes when next garbage collection will take place.
# TYPE go_memstats_next_gc_bytes gauge
go_memstats_next_gc_bytes 6.5270368e+07
# HELP go_memstats_other_sys_bytes Number of bytes used for other system allocations.
# TYPE go_memstats_other_sys_bytes gauge
go_memstats_other_sys_bytes 1.172068e+06
# HELP go_memstats_stack_inuse_bytes Number of bytes in use by the stack allocator.
# TYPE go_memstats_stack_inuse_bytes gauge
go_memstats_stack_inuse_bytes 2.064384e+06
# HELP go_memstats_stack_sys_bytes Number of bytes obtained from system for stack allocator.
# TYPE go_memstats_stack_sys_bytes gauge
go_memstats_stack_sys_bytes 2.064384e+06
# HELP go_memstats_sys_bytes Number of bytes obtained by system. Sum of all system allocations.
# TYPE go_memstats_sys_bytes gauge
go_memstats_sys_bytes 8.4510968e+07
# HELP http_request_duration_microseconds The HTTP request latencies in microseconds.
# TYPE http_request_duration_microseconds summary
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} 2968.81
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} 3371.863
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} 3983.802
http_request_duration_microseconds_sum{handler="prometheus"} 4.887048029799991e+07
http_request_duration_microseconds_count{handler="prometheus"} 15860
# HELP http_request_size_bytes The HTTP request sizes in bytes.
# TYPE http_request_size_bytes summary
http_request_size_bytes{handler="prometheus",quantile="0.5"} 238
http_request_size_bytes{handler="prometheus",quantile="0.9"} 238
http_request_size_bytes{handler="prometheus",quantile="0.99"} 238
http_request_size_bytes_sum{handler="prometheus"} 3.77433e+06
http_request_size_bytes_count{handler="prometheus"} 15860
# HELP http_requests_total Total number of HTTP requests made.
# TYPE http_requests_total counter
http_requests_total{code="200",handler="prometheus",method="get"} 15860
# HELP http_response_size_bytes The HTTP response sizes in bytes.
# TYPE http_response_size_bytes summary
http_response_size_bytes{handler="prometheus",quantile="0.5"} 3670
http_response_size_bytes{handler="prometheus",quantile="0.9"} 3676
http_response_size_bytes{handler="prometheus",quantile="0.99"} 33104
http_response_size_bytes_sum{handler="prometheus"} 5.6657581e+07
http_response_size_bytes_count{handler="prometheus"} 15860
# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
# TYPE process_cpu_seconds_total counter
process_cpu_seconds_total 3833.2
# HELP process_max_fds Maximum number of open file descriptors.
# TYPE process_max_fds gauge
process_max_fds 1.048576e+06
# HELP process_open_fds Number of open file descriptors.
# TYPE process_open_fds gauge
process_open_fds 53
# HELP process_resident_memory_bytes Resident memory size in bytes.
# TYPE process_resident_memory_bytes gauge
process_resident_memory_bytes 7.3777152e+07
# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
# TYPE process_start_time_seconds gauge
process_start_time_seconds 1.50619684223e+09
# HELP process_virtual_memory_bytes Virtual memory size in bytes.
# TYPE process_virtual_memory_bytes gauge
process_virtual_memory_bytes 1.0839404544e+10


Thanks again, Daniel.




Xiang Li

Sep 25, 2017, 12:31:38 PM
to Gyu-Ho Lee, dpho...@gmail.com, etcd-dev
On Fri, Sep 22, 2017 at 5:35 PM, Gyu-Ho Lee <gyu...@gmail.com> wrote:
It's in seconds, as far as I know. The metrics show fsyncs taking >8 secs,

Actually not; most of the fsyncs took less than 0.001 seconds (etcd_disk_wal_fsync_duration_seconds_bucket{le="0.001"} 599311).

The number is a cumulative count: it indicates the total number of observed values that are less than or equal to the bucket's upper bound.

So the number of fsyncs taking more than 8.192 seconds is `bucket(+Inf) - bucket(8.192)` = 0.
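
If you want that number straight from Prometheus (a hypothetical query, assuming your Prometheus is already scraping these endpoints), you can subtract the two cumulative buckets:

# count of fsyncs observed to take longer than 8.192 seconds (0 in your output)
etcd_disk_wal_fsync_duration_seconds_bucket{le="+Inf"}
  - ignoring(le) etcd_disk_wal_fsync_duration_seconds_bucket{le="8.192"}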
 


Daniel Hobson

Sep 25, 2017, 3:44:29 PM
to etcd-dev
Ah, so in that case then I've just been misreading/misunderstanding the metrics.

I've just re-read the Prometheus histogram documentation after your `bucket(+Inf) - bucket(8.192)` = 0 explanation, and I think it's starting to make sense now. It also explains why I've not been seeing any errors since fixing the RAM disk on the node I'd missed.
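
Doing the arithmetic on the raw counters above as a sanity check: the mean fsync comes out as sum/count = 0.359 s / 110807 ≈ 3.2 microseconds, which fits a RAM disk. In Grafana I can get the same thing over a window with something like (assuming the default scrape labels):

# average WAL fsync latency per instance over the last 5 minutes
rate(etcd_disk_wal_fsync_duration_seconds_sum[5m])
  / rate(etcd_disk_wal_fsync_duration_seconds_count[5m])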

Thank you both for the information. I'll keep monitoring the cluster and see how it holds up for now.

Thanks, Daniel.