A Postgres cluster created using the PostgresCluster CR becomes unhealthy after the worker nodes were drained and restarted. We are seeing the following logs in the database instance Pod:
2023-07-27 17:01:58,052 INFO: No PostgreSQL configuration items changed, nothing to reload.
2023-07-27 17:01:58,072 WARNING: Postgresql is not running.
2023-07-27 17:01:58,073 INFO: Lock owner: None; I am instrumentationdb-instance1-zttf-0
2023-07-27 17:01:58,077 INFO: pg_controldata:
  pg_control version number: 1300
  Catalog version number: 202007201
  Database system identifier: 7209530190925107282
  Database cluster state: shut down in recovery
  pg_control last modified: Thu Jul 27 16:31:44 2023
  Latest checkpoint location: 15/2A000028
  Latest checkpoint's REDO location: 15/2A000028
  Latest checkpoint's REDO WAL file: 0000007D000000150000002A
  Latest checkpoint's TimeLineID: 125
  Latest checkpoint's PrevTimeLineID: 125
  Latest checkpoint's full_page_writes: on
  Latest checkpoint's NextXID: 0:45422
  Latest checkpoint's NextOID: 188416
  Latest checkpoint's NextMultiXactId: 1
  Latest checkpoint's NextMultiOffset: 0
  Latest checkpoint's oldestXID: 478
  Latest checkpoint's oldestXID's DB: 1
  Latest checkpoint's oldestActiveXID: 0
  Latest checkpoint's oldestMultiXid: 1
  Latest checkpoint's oldestMulti's DB: 1
  Latest checkpoint's oldestCommitTsXid: 0
  Latest checkpoint's newestCommitTsXid: 0
  Time of latest checkpoint: Thu Jul 13 19:18:39 2023
  Fake LSN counter for unlogged rels: 0/3E8
  Minimum recovery ending location: 15/2A0000A0
  Min recovery ending loc's timeline: 125
  Backup start location: 0/0
  Backup end location: 0/0
  End-of-backup record required: no
  wal_level setting: logical
  wal_log_hints setting: on
  max_connections setting: 100
  max_worker_processes setting: 8
  max_wal_senders setting: 10
  max_prepared_xacts setting: 0
  max_locks_per_xact setting: 64
  track_commit_timestamp setting: off
  Maximum data alignment: 8
  Database block size: 8192
  Blocks per segment of large relation: 131072
  WAL block size: 8192
  Bytes per WAL segment: 16777216
  Maximum length of identifiers: 64
  Maximum columns in an index: 32
  Maximum size of a TOAST chunk: 1996
  Size of a large-object chunk: 2048
  Date/time type storage: 64-bit integers
  Float8 argument passing: by value
  Data page checksum version: 1
  Mock authentication nonce: afec4ac2d2d78c649caa0234cf9eaa6be0c85273f288b81884d878cbf295f8d8
2023-07-27 17:01:58,092 INFO: Lock owner: None; I am instrumentationdb-instance1-zttf-0
2023-07-27 17:01:58,252 INFO: starting as a secondary
2023-07-27 17:01:58,481 INFO: postmaster pid=102
/tmp/postgres:5432 - no response
2023-07-27 17:01:58.495 UTC [102] LOG: pgaudit extension initialized
2023-07-27 17:01:58.516 UTC [102] LOG: redirecting log output to logging collector process
2023-07-27 17:01:58.516 UTC [102] HINT: Future log output will appear in directory "log".
/tmp/postgres:5432 - accepting connections
/tmp/postgres:5432 - accepting connections
2023-07-27 17:01:59,589 INFO: establishing a new patroni connection to the postgres cluster
2023-07-27 17:01:59,667 INFO: My wal position exceeds maximum replication lag
2023-07-27 17:01:59,788 INFO: following a different leader because i am not the healthiest node
2023-07-27 17:02:10,090 INFO: My wal position exceeds maximum replication lag
2023-07-27 17:02:10,099 INFO: following a different leader because i am not the healthiest node
2023-07-27 17:02:20,090 INFO: My wal position exceeds maximum replication lag
2023-07-27 17:02:20,100 INFO: following a different leader because i am not the healthiest node
2023-07-27 17:02:30,090 INFO: My wal position exceeds maximum replication lag
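
For reference, the cluster was created with a PostgresCluster manifest along these lines. This is a minimal sketch, assuming Crunchy Data PGO v5: the cluster and instance set names are taken from the pod name in the logs above, while the Postgres version, replica count, and storage sizes are illustrative placeholders rather than our exact values.

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: instrumentationdb          # matches the pod name instrumentationdb-instance1-zttf-0
spec:
  postgresVersion: 13              # assumed; illustrative only
  instances:
    - name: instance1              # instance set seen in the logs above
      replicas: 2                  # illustrative; the cluster runs multiple members
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi          # illustrative size
  backups:
    pgbackrest:
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 10Gi    # illustrative size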