With the partitioning that was added for the reports and resource_events tables, if a user sets their report_ttl or resource_events_ttl to less than 1 day, GC will try to drop the partition for the same day that incoming commands are writing to. This leads to churn and deadlocks in PostgreSQL. We should update the documentation for the report_ttl and resource_events_ttl settings to reflect this limitation.
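As a minimal sketch of the safe configuration, assuming the TTLs live in the `[database]` section of the config file and accept day-based duration values (the exact file location and key spelling may differ by version):

```ini
[database]
# Partitions are per-day, so keep both TTLs at 1 day or more.
# A sub-day value (e.g. "12h") would make GC try to drop the
# partition that incoming commands are still writing to.
report_ttl = 14d
resource_events_ttl = 14d
```

The key point for the docs is the floor of one day, not the specific values shown here.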
Hi! I just saw this on the mailing list. Do I understand correctly that partitioning for reports was introduced recently? I scrolled through the changelog and couldn't spot a hint. In which version was it introduced, and if I have a huge number of reports, might an upgrade cause long downtime due to table migrations? (I'm not sure whether existing data is split into partitions as well.) Maybe answers to those questions could be added to the docs too.