Prometheus 2.0 - Memory spikes


hari.b...@gmail.com

Nov 30, 2017, 5:20:49 PM11/30/17
to Prometheus Users
A couple of days ago we upgraded our production environment to Prometheus 2.0, and functionally everything seemed to be working fine. Today, memory usage spiked to almost 100% of total memory and then returned to normal (< 5%) over a period of 20-30 minutes. During that time, the following error messages appeared in the Prometheus logs:

Nov 30 16:26:17 monitoring-1001 prometheus[3383]: level=error ts=2017-11-30T16:26:17.113795274Z caller=engine.go:532 component="query engine" msg="error expanding series set" err="error sending request: context canceled"

Nov 30 16:26:17 monitoring-1001 prometheus[3383]: level=error ts=2017-11-30T16:26:17.135144879Z caller=engine.go:532 component="query engine" msg="error expanding series set" err="error sending request: context canceled"

Nov 30 16:26:17 monitoring-1001 prometheus[3383]: level=error ts=2017-11-30T16:26:17.159175726Z caller=engine.go:532 component="query engine" msg="error expanding series set" err="error sending request: context canceled"

Nov 30 16:27:29 monitoring-1001 prometheus[3383]: level=error ts=2017-11-30T16:27:29.169658566Z caller=engine.go:532 component="query engine" msg="error expanding series set" err="error reading response: context deadline exceeded"

Nov 30 16:27:29 monitoring-1001 prometheus[3383]: level=error ts=2017-11-30T16:27:29.170752087Z caller=engine.go:532 component="query engine" msg="error expanding series set" err="error reading response: context canceled"

Nov 30 16:27:58 monitoring-1001 prometheus[3383]: level=error ts=2017-11-30T16:27:58.808035136Z caller=engine.go:532 component="query engine" msg="error expanding series set" err="error reading response: context canceled"

Nov 30 16:28:02 monitoring-1001 prometheus[3383]: level=error ts=2017-11-30T16:28:02.208598646Z caller=engine.go:532 component="query engine" msg="error expanding series set" err="error sending request: context canceled"

Nov 30 16:28:02 monitoring-1001 prometheus[3383]: level=error ts=2017-11-30T16:28:02.459845184Z caller=engine.go:532 component="query engine" msg="error expanding series set" err="error sending request: context canceled"

Nov 30 16:55:12 monitoring-1001 prometheus[3383]: level=error ts=2017-11-30T16:55:12.77588527Z caller=engine.go:532 component="query engine" msg="error expanding series set" err="error sending request: context canceled"

Nov 30 16:55:12 monitoring-1001 prometheus[3383]: level=error ts=2017-11-30T16:55:12.775973951Z caller=engine.go:532 component="query engine" msg="error expanding series set" err="error sending request: context canceled"


Any help in this regard is appreciated.


Brian Brazil

Nov 30, 2017, 6:33:54 PM11/30/17
to hari.b...@gmail.com, Prometheus Users
This was most likely a very expensive query that hit the timeout, probably just before it would have caused an OOM.
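Since Brian's diagnosis is an expensive query hitting the timeout, the usual mitigation is to tighten the query limits at startup. A sketch of the relevant launch flags for Prometheus 2.x follows; the values shown are illustrative assumptions, not recommendations from the thread, so check `prometheus --help` for the defaults in your version:

```shell
# Illustrative Prometheus launch flags that bound query cost:
#   --query.timeout          aborts any query running longer than this
#   --query.max-concurrency  caps how many queries evaluate at once
prometheus \
  --config.file=prometheus.yml \
  --query.timeout=2m \
  --query.max-concurrency=20
```

Lowering the timeout limits how long a runaway query can accumulate memory before it is cancelled with the "context canceled" / "context deadline exceeded" errors seen in the logs above.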
 

hari Bodagala

Nov 30, 2017, 7:20:13 PM11/30/17
to Brian Brazil, Prometheus Users
Thanks for your quick response, Brian. How can we track down which query caused this issue? Does Prometheus log that query info anywhere? Do we need to enable any debug flags to log such info?

Brian Brazil

Nov 30, 2017, 7:32:09 PM11/30/17
to hari Bodagala, Prometheus Users
On 1 December 2017 at 00:20, hari Bodagala <hari.b...@gmail.com> wrote:
Thanks for your quick response, Brian. How can we track down which query caused this issue? Does Prometheus log that query info anywhere? Do we need to enable any debug flags to log such info?

There are no such logs yet; however, the next release will hopefully include some things to help track down expensive recording rules.
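Even without a per-query log, Prometheus exports timing of its own query engine as a metric, which can hint at when slow queries are running. A hedged sketch, assuming Prometheus is reachable at the default `localhost:9090` (an assumption, not from the thread):

```shell
# Illustrative only: ask Prometheus about its own query engine timings.
# prometheus_engine_query_duration_seconds is a summary; the 0.99 quantile
# shows the slowest ~1% of each query phase.
curl -s 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=prometheus_engine_query_duration_seconds{quantile="0.99"}'
```

(For what it's worth, much later Prometheus releases did add an explicit query log, configurable via `query_log_file` in the configuration file, which answers this question directly.)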

Brian


