Thank you for your reply.
My wait_timeout was already set to a fairly high value on the server side (initially 8 hours, later increased to 16 hours, i.e. 57600 seconds).
I nevertheless tried setting it at session level in the connector config, as you suggested:
"database.initial.statements": "SET SESSION wait_timeout=172800",
But it still fails about 1 hour into the initial import, when the snapshot of the first "big table" finishes and the second table starts.
FYI, on a test connector where I can reproduce the problem:
- table1 has about 140 million rows
- table2 has about 100 rows
=> it always fails at the end of table1, when it starts to snapshot table2
[2024-02-29 09:45:26,549] INFO Exported 126994845 of 137457302 records for table 'ftest.table1' after 01:15:49.762 (io.debezium.relational.RelationalSnapshotChangeEventSource)
...
[2024-02-29 09:45:35,279] INFO Exporting data from table 'ftest.table2' (2 of 3 tables) (io.debezium.relational.RelationalSnapshotChangeEventSource)
[2024-02-29 09:45:35,279] ERROR Error during snapshot (io.debezium.relational.RelationalSnapshotChangeEventSource)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.errors.ConnectException: Snapshotting of table ftest.table1 failed
...
Caused by: org.apache.kafka.connect.errors.ConnectException: Snapshotting of table ftest.table1 failed
...
Caused by: java.sql.SQLException: Error retrieving record: Unexpected Exception: java.net.SocketException message given: Connection timed out (Write failed)
I also tried raising some other timeout values:
"database.initial.statements": "SET SESSION wait_timeout=172800;SET SESSION interactive_timeout=172800;SET SESSION net_write_timeout=7200;SET SESSION net_read_timeout=7200;SET SESSION lock_wait_timeout=7200;SET SESSION innodb_lock_wait_timeout=7200",
but it did not help either.
Note that I can confirm in the MySQL logs that the settings from these initial.statements are applied as expected.
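For what it's worth, the session values can also be double-checked from the server side rather than the logs. A query like the one below should show the effective timeouts of the connector's own connection, as seen from any other client session. This is just a sketch: it assumes MySQL 5.7+ with performance_schema enabled, and the user name 'debezium' is a placeholder for whatever account the connector actually uses.

```sql
-- Inspect the effective session timeouts of the Debezium connection
-- from a separate client session (MySQL 5.7+, performance_schema enabled).
-- 'debezium' below is an assumed connector user name; adjust as needed.
SELECT t.processlist_id,
       v.variable_name,
       v.variable_value
FROM performance_schema.variables_by_thread AS v
JOIN performance_schema.threads AS t
  ON t.thread_id = v.thread_id
WHERE t.processlist_user = 'debezium'
  AND v.variable_name IN ('wait_timeout',
                          'net_write_timeout',
                          'net_read_timeout');
```

In my case this matches what the MySQL logs already show, i.e. the initial.statements do take effect on the session, which is why I suspect the timeout is happening somewhere other than MySQL itself.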
Other ideas are welcome, thanks.