The database container in the Postgres cluster is stuck in a NotReady state. The log is:
2021-04-06 06:07:51,268 INFO: Lock owner: None; I am pgo-qa-7f64888857-brttb
2021-04-06 06:07:51,288 INFO: Reaped pid=6934, exit status=0
2021-04-06 06:07:51,289 INFO: Lock owner: None; I am pgo-qa-7f64888857-brttb
2021-04-06 06:07:51,290 INFO: starting as a secondary
2021-04-06 06:07:51.600 UTC [6944] FATAL: could not write lock file "postmaster.pid": No space left on device
2021-04-06 06:07:51,622 INFO: postmaster pid=6944
/tmp:5432 - no response
2021-04-06 06:07:51,647 WARNING: Postgresql is not running.
2021-04-06 06:07:51,647 INFO: Lock owner: None; I am pgo-qa-7f64888857-brttb
2021-04-06 06:07:51,663 INFO: Reaped pid=6946, exit status=0
2021-04-06 06:07:51,665 INFO: pg_controldata:
pg_control version number: 1201
Catalog version number: 201909212
Database system identifier: 6926871385558286504
Database cluster state: in archive recovery
pg_control last modified: Mon Apr 5 23:59:08 2021
Latest checkpoint location: 41/55000060
Latest checkpoint's REDO location: 41/5404B618
Latest checkpoint's REDO WAL file: 000000090000004100000054
Latest checkpoint's TimeLineID: 9
Latest checkpoint's PrevTimeLineID: 9
Latest checkpoint's full_page_writes: on
Latest checkpoint's NextXID: 0:441724
Latest checkpoint's NextOID: 524294
Latest checkpoint's NextMultiXactId: 1
Latest checkpoint's NextMultiOffset: 0
Latest checkpoint's oldestXID: 480
Latest checkpoint's oldestXID's DB: 1
Latest checkpoint's oldestActiveXID: 441724
Latest checkpoint's oldestMultiXid: 1
Latest checkpoint's oldestMulti's DB: 1
Latest checkpoint's oldestCommitTsXid: 0
Latest checkpoint's newestCommitTsXid: 0
Time of latest checkpoint: Mon Apr 5 19:13:32 2021
Fake LSN counter for unlogged rels: 0/3E8
Minimum recovery ending location: 41/5A000000
Min recovery ending loc's timeline: 9
Backup start location: 0/0
Backup end location: 0/0
End-of-backup record required: no
wal_level setting: logical
wal_log_hints setting: on
max_connections setting: 100
max_worker_processes setting: 8
max_wal_senders setting: 6
max_prepared_xacts setting: 0
max_locks_per_xact setting: 64
track_commit_timestamp setting: off
Maximum data alignment: 8
Database block size: 8192
Blocks per segment of large relation: 131072
WAL block size: 8192
Bytes per WAL segment: 16777216
Maximum length of identifiers: 64
Maximum columns in an index: 32
Maximum size of a TOAST chunk: 1996
Size of a large-object chunk: 2048
Date/time type storage: 64-bit integers
Float4 argument passing: by value
Float8 argument passing: by value
Data page checksum version: 1
Mock authentication nonce: 98414342bea502fcf14ce270b2c95c85c768f89151061fac1aa0fdea19a03c9c
The persistent volume had 1GiB, so I increased it to 2GiB. I also resized the PVC to 2GiB, but nothing seems to resolve the issue.
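For reference, a minimal sketch of how the free space and the resize can be checked from outside the pod; the namespace pgo, the container name database, and the mount path /pgdata are assumptions based on a typical Crunchy PGO setup, so adjust them for your cluster:

# Check free space on the data volume inside the database container
# (the /pgdata mount path is an assumption)
kubectl -n pgo exec -it pgo-qa-7f64888857-brttb -c database -- df -h /pgdata

# Confirm the PVC reports the new 2Gi capacity and has no pending
# FileSystemResizePending condition (replace <pvc-name> with the actual claim)
kubectl -n pgo get pvc
kubectl -n pgo describe pvc <pvc-name>

If describe still shows a FileSystemResizePending condition, the volume was expanded at the storage layer but the filesystem inside the container has not grown yet, which usually requires the pod to be restarted.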