level=error ts=2021-05-28T09:23:52.326Z caller=main.go:894 err="opening storage failed: lock DB directory: resource temporarily unavailable"


nina guo

May 28, 2021, 5:27:11 AM
to Prometheus Users
Hi,

I got this error when trying to start the Prometheus Pod.

The backend storage is NFS.

nina guo

May 28, 2021, 6:40:17 AM
to Prometheus Users
Can anyone help with this issue?
I tried to deploy Prometheus in a k8s cluster with multiple replicas, and then hit this issue.

Julien Pivotto

May 28, 2021, 8:19:54 AM
to nina guo, Prometheus Users
As stated in the documentation linked below, and in the logs of your
Prometheus instance, NFS is unfortunately not a supported storage
backend, because users run into exactly this kind of situation.

https://prometheus.io/docs/prometheus/latest/storage/


Please also note that each Prometheus must have its own directory;
you can't have multiple Prometheus instances writing to the same
directory.
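
For illustration, a minimal sketch (the flag is real, but the paths
are hypothetical): each instance gets a distinct data directory via
its container args:

  # instance 0
  args:
    - --config.file=/etc/prometheus/prometheus.yml
    - --storage.tsdb.path=/data/prometheus-0

  # instance 1
  args:
    - --config.file=/etc/prometheus/prometheus.yml
    - --storage.tsdb.path=/data/prometheus-1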



--
Julien Pivotto
@roidelapluie

nina guo

May 30, 2021, 11:31:25 PM
to Prometheus Users
May I ask why multiple Prometheus instances cannot write to the same directory?

Stuart Clark

Jun 1, 2021, 6:14:45 AM
to nina guo, Prometheus Users
On 31/05/2021 04:31, nina guo wrote:
> May I ask why multiple Prometheus instances cannot write to the same directory?

Different instances of Prometheus are not aware of each other and so
would overwrite & corrupt the files.

Prometheus is also not supported on shared file systems such as NFS
(due to issues with which POSIX semantics they support), so even with
a different directory for each instance it would not be advisable.

--
Stuart Clark

nina guo

Jun 2, 2021, 1:28:31 AM
to Prometheus Users
Thank you very much Stuart.

nina guo

Jun 2, 2021, 4:22:04 AM
to Prometheus Users
If Prometheus is deployed in k8s with multiple Pods, the Prometheus Pods run independently, am I right?

Stuart Clark

Jun 2, 2021, 4:39:16 AM
to nina guo, Prometheus Users
On 02/06/2021 09:22, nina guo wrote:
> If Prometheus is deployed in k8s with multiple Pods, the Prometheus
> Pods run independently, am I right?
That is correct.

--
Stuart Clark

nina guo

Jun 2, 2021, 6:01:38 AM
to Prometheus Users
So the better solution would be to mount storage other than NFS
separately to each Pod.
For example, 2 Prometheus Pods are running with 2 separate volumes. If one of the Pods goes down (while its data is still in memory), Kubernetes will automatically start another Pod. The data that was in memory will be lost, which can cause data inconsistency, because the other running Pod has probably already written that data to its persistent volume.
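
In Kubernetes terms, a minimal sketch of that per-Pod-volume setup
(assuming a StatefulSet; all names, the image tag, and sizes are
placeholders) might look like:

  apiVersion: apps/v1
  kind: StatefulSet
  metadata:
    name: prometheus                # hypothetical name
  spec:
    serviceName: prometheus
    replicas: 2
    selector:
      matchLabels:
        app: prometheus
    template:
      metadata:
        labels:
          app: prometheus
      spec:
        containers:
          - name: prometheus
            image: prom/prometheus:v2.27.1    # example version
            args:
              - --config.file=/etc/prometheus/prometheus.yml
              - --storage.tsdb.path=/prometheus
            volumeMounts:
              - name: data
                mountPath: /prometheus
    volumeClaimTemplates:           # one PersistentVolumeClaim per Pod
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]      # block storage, not NFS
          resources:
            requests:
              storage: 50Gi                   # placeholder size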

nina guo

Jun 2, 2021, 6:05:41 AM
to Prometheus Users
Can we solve this issue with a load balancer?

Stuart Clark

Jun 2, 2021, 6:24:41 AM
to nina guo, Prometheus Users
On 02/06/2021 11:01, nina guo wrote:
> So the better solution would be to mount storage other than NFS
> separately to each Pod.
Yes.
> For example, 2 Prometheus Pods are running with 2 separate volumes.
> If one of the Pods goes down (while its data is still in memory),
> Kubernetes will automatically start another Pod. The data that was in
> memory will be lost, which can cause data inconsistency, because the
> other running Pod has probably already written that data to its
> persistent volume.
>
The two instances will record slightly different data from each other,
as they will be scraping the common set of targets at slightly
different times. The usual way to handle this (as well as to deal with
gaps due to restarts/errors) is to use a "sticky" load balancer or
something like Thanos or promxy.
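
For example (a sketch of the usual convention, not something
Prometheus enforces), both replicas run the same scrape config but
distinct external labels, which deduplicating layers like Thanos or
promxy key on:

  # prometheus.yml for replica A (label names/values are illustrative)
  global:
    external_labels:
      cluster: mycluster
      replica: A
  # replica B runs an identical config with "replica: B"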

Anything that is purely in memory and not yet written to disk when
Prometheus crashes (or is forcibly destroyed) will be lost, but
Prometheus writes to the WAL regularly to reduce that risk. When
restarted (assuming the same storage is reattached), the WAL is read
back into memory.
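
(For reference, the "lock DB directory" error at the top of this
thread is about the lock file Prometheus keeps at the root of its data
directory, next to the WAL; the layout is roughly:

  data/
    lock      <- per-instance lock; only one Prometheus may hold it
    wal/      <- write-ahead log, replayed into memory on restart
    01.../    <- persisted TSDB blocks (ULID names, truncated here)

so two instances sharing one directory would fight over that lock.)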

--
Stuart Clark
