Setting wal_level


Tim Bo

Dec 2, 2021, 8:10:46 AM
to Postgres Operator
Hi,

I am using pgo v5 for managing a database. Currently there is only a single master without any replicas.

When importing a fairly large backup (a 55 GB SQL file), my WAL directory grows very large as well (72 GB). With the Zalando postgres operator, the WAL size never exceeds 5 GB. I compared the postgres configurations and noticed that the 'wal_level' setting of the PGO-managed database is set to 'logical', while the Zalando one uses 'replica'. After reading the documentation for that setting, I would like to check whether it is responsible for the large WAL size.
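As a side note, the actual on-disk WAL footprint can be measured from inside Postgres itself (pg_ls_waldir is a standard function available from PostgreSQL 10 onward; it requires superuser or pg_monitor privileges):

```sql
-- total size of the files currently in pg_wal, pretty-printed
SELECT pg_size_pretty(sum(size)) AS wal_size
FROM pg_ls_waldir();
```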

So, I tried changing the setting from 'logical' to 'replica', but it doesn't get applied (as opposed to settings like max_wal_size). Every time I restart the database, wal_level is still set to 'logical'.

postgres=# show wal_level;
 wal_level
-----------
 logical
(1 row)

This is my patroni configuration:
patroni:
  dynamicConfiguration:
    postgresql:
      parameters:
        wal_level: replica
        max_wal_size: 512

I even initialized a new database with those settings, but it didn't work either.
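To see which parameters Patroni has actually accepted into its dynamic configuration, the effective config can be dumped from inside the database pod. A sketch (the namespace and pod name below are placeholders for your own cluster's):

```shell
# print Patroni's effective dynamic configuration for the cluster
kubectl exec -it -n postgres-operator hippo-instance1-abcd-0 -- patronictl show-config
```

If wal_level does not appear as 'replica' in that output, Patroni never applied the change; if it does appear but 'show wal_level' still says 'logical', something else is overriding it.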

So, how do I change the wal_level setting?

Thanks,
Tim

Jonathan S. Katz

Dec 2, 2021, 3:09:11 PM
to Tim Bo, Postgres Operator
The "wal_level" setting is not the cause here. This is a matter of WAL retention.

By default, PGO enables backups + archiving as part of its "production ready" architecture, and in part to be safe when dealing with storage in and around Kubernetes. As part of this, each WAL file is shipped to at least one backup repository. If the repository is unavailable, or many WAL files are generated in a short amount of time and the archive pushing cannot keep up, WAL files are retained locally. (There are other reasons why WAL can be retained as well.)
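Whether archiving is keeping up can be checked with PostgreSQL's built-in archiver statistics view:

```sql
-- standard statistics view for the WAL archiver process
SELECT archived_count,
       last_archived_wal,
       last_archived_time,
       failed_count,
       last_failed_wal
FROM pg_stat_archiver;
```

A growing failed_count, or a last_archived_wal far behind the current WAL position, indicates archiving is falling behind and WAL will accumulate in pg_wal.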

In this case, so long as the system is healthy, the WAL bloat is temporary. Once Postgres knows that no external consumer still depends on a WAL file, it will remove it.

What's likely happening here is that your archive pushes are not keeping up with the volume of data being imported. You can always tune the archive-push settings if need be.
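If archive-push is the bottleneck during a bulk import, pgBackRest's asynchronous archiving can help. A sketch of what that might look like in the PostgresCluster spec (the option names are pgBackRest's; the exact placement and the spool path below are assumptions to verify against the PGO and pgBackRest documentation):

```yaml
spec:
  backups:
    pgbackrest:
      global:
        # assumed settings: enable asynchronous archive-push, which batches
        # WAL transfers instead of shipping one file per archive_command call
        archive-async: "y"
        spool-path: /pgdata/pgbackrest-spool
```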

We may evaluate, for a future feature release, making the automatic backup/archiving functionality opt-out.

Jonathan

Jonathan S. Katz
VP Platform Engineering

Crunchy Data
Enterprise PostgreSQL 

