This is not necessarily a jOOQ issue, but I wonder how you would best solve it when using jOOQ. If it's too off-topic, feel free to direct me somewhere else.
We have been using jOOQ happily in production for a while; every day we run a couple of big bulk insert queries (>200k records at once), each taking ~2 minutes. Some days ago, we enabled a parameter on our Postgres server (`idle_in_transaction_session_timeout`) that kills any session that is inside a transaction but hasn't executed a query for 30s. After this change, these inserts started to fail.
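For reference, this is the setting as it would appear in `postgresql.conf` (it can also be applied via `ALTER SYSTEM` or per-session with `SET`):

```sql
-- postgresql.conf: terminate any session that has been idle inside
-- an open transaction for more than 30 seconds
idle_in_transaction_session_timeout = '30s'
```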
On closer inspection of a past trace, from before the timeout was enabled, these are the logs from our services:
- 23:31:53: Application starts the insert query (with returning clause)
- 23:31:56: SQL is rendered by jOOQ and logged (3s)
- 23:33:53: Postgres logs that the query took 8s (~2 minutes after the SQL was rendered)
- 23:33:54: Application fetched all results, and transaction is committed in the same second
So Postgres says the query itself took ~8s, but the application waited ~2 minutes. We suspect this is because the rendered SQL is very large (a single statement of the form `insert into table (column1, column2) values (1, 2), (3, 4), …`) and Postgres spends a long time parsing it, although honestly ~2 minutes seems like a lot of time for parsing alone.
This is not a jOOQ issue per se, but we speculate that splitting the query into smaller batches would solve the issue. I'm curious what options we have for doing such an insert while still getting back the IDs of the generated records.
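To make the batching idea concrete, here is a minimal sketch of what I have in mind: a plain-Java helper that partitions the records into fixed-size chunks, where each chunk would then be rendered as one (much smaller) multi-row insert. The chunk size of 1000 and the `MY_TABLE` names in the jOOQ comment are hypothetical, just for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchInsert {

    // Hypothetical tuning knob: rows per rendered INSERT statement.
    static final int RECORDS_PER_STATEMENT = 1000;

    // Partition a list into consecutive chunks of at most `size` elements.
    // Note: subList returns views into the original list, which is fine
    // here since each chunk is consumed immediately.
    static <T> List<List<T>> chunks(List<T> items, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < items.size(); i += size) {
            out.add(items.subList(i, Math.min(i + size, items.size())));
        }
        return out;
    }

    /*
     * With jOOQ, each chunk would then become roughly (MY_TABLE and its
     * columns are placeholders for the real generated table):
     *
     * List<Long> allIds = new ArrayList<>();
     * for (List<MyRecord> chunk : chunks(records, RECORDS_PER_STATEMENT)) {
     *     var step = dsl.insertInto(MY_TABLE, MY_TABLE.COLUMN1, MY_TABLE.COLUMN2);
     *     for (MyRecord r : chunk) {
     *         step = step.values(r.col1(), r.col2());
     *     }
     *     allIds.addAll(step.returning(MY_TABLE.ID)
     *                       .fetch()
     *                       .getValues(MY_TABLE.ID));
     * }
     */
}
```

One open question with this approach is whether the chunks should run in one transaction (keeping atomicity, but risking the idle timeout between chunks) or in separate transactions.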