Best way to export a Cloud SQL database


chen levi

Aug 27, 2019, 7:56:17 PM8/27/19
to Google Cloud SQL discuss
Hi

I'm trying to find the best way to export a Cloud SQL database from one project and import it into another.
I have 2 Cloud SQL instances in 2 different projects (prod/staging).

To sync the prod database to staging, I'm using the mysqldump command to export each needed table from the prod DB and then import the output into the staging DB.
I'm doing so with the following command:
mysqldump db table --skip-lock-tables --single-transaction | gzip > /tmp/mytable.gz

Once all tables are exported, I use the mysql client to import the gz files into the staging DB.
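For example, the import side looks roughly like this (host, user and staging DB name are placeholders):
gunzip < /tmp/mytable.gz | mysql -h STAGING_IP -u USER -p staging_db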

The problem is that sometimes (too often) I hit one of two issues:

- I get the error "mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table mytable at row: xxxxxx"
- The export just hangs and takes way too much time.

I see it happens on big tables, so at first I thought it was probably related to net_read_timeout and net_write_timeout, which had their default values (30/60), so I tried increasing them to 900, and even 3600.
The second issue still happens.
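(For reference, on Cloud SQL these can be set as database flags, with something like the following; the instance name is a placeholder, and note that patching flags replaces the instance's whole flag set and may restart it:
gcloud sql instances patch prod-instance --database-flags=net_read_timeout=3600,net_write_timeout=3600)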

I also tried adding the --quick option, which is supposed to be better for large tables since it doesn't load the entire result set into memory but retrieves it row by row. It still hangs.

After investigating this issue online, I've started to think that I'm not using mysqldump the right way.
- It seems like --skip-lock-tables is useless here (InnoDB), since --single-transaction will make sure the dump happens in one transaction. Right?
- Even with --single-transaction, does it mean that during the dump, which as I saw can take a lot of time, I'm blocking my production database from operations such as CREATE/ALTER TABLE etc.?
- What is the difference between using mysqldump and gcloud sql export sql? Is the latter better suited for this kind of operation?

Thanks a lot,
Chen

George (Cloud Platform Support)

Aug 28, 2019, 5:22:58 PM8/28/19
to Google Cloud SQL discuss
Hello Chen, 

You should refer to the "Best practices for importing and exporting data" documentation page, valuable in itself but also for its link to "Exporting data for import into Cloud SQL", which has a highly relevant section, "Exporting without views using mysqldump". You must use the --databases option to specify an explicit list of databases to export, and this list must not contain the mysql system database.
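For instance, a dump command along the lines of that documentation would look roughly like this (connection details are placeholders; take the exact recommended flag set from the documentation page):
mysqldump --databases mydb -h INSTANCE_IP -u root -p --hex-blob --single-transaction --set-gtid-purged=OFF | gzip > mydb.sql.gz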

chen levi

Aug 29, 2019, 6:48:47 PM8/29/19
to Google Cloud SQL discuss
Hi George,

Thanks for your response.
I did read the mentioned documentation pretty carefully, but nothing is said there about the actual locking behavior.

As mentioned, the final command I used was: mysqldump db table --quick --skip-lock-tables --single-transaction | gzip > /tmp/mytable.gz
As you can see, we do specify the DB and the specific table we want to export.
I also used the other flags to make the export faster, but it wasn't clear that the table would be locked after all (I checked it with SHOW OPEN TABLES).

Anyway, when I switched to gcloud sql export, the export operation took a few minutes and completed successfully for the entire DB, so this approach seems better for us.
(Probably because the export runs inside Google Cloud, while the mysqldump approach runs from a remote server.)
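For anyone interested, the flow was roughly this (bucket, instance and DB names are placeholders, and the instance's service account needs write access to the bucket):
gcloud sql export sql prod-instance gs://my-bucket/mydb.sql.gz --database=mydb
gcloud sql import sql staging-instance gs://my-bucket/mydb.sql.gz --database=mydb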


10x

Diogo Almeida

Aug 29, 2019, 10:11:57 PM8/29/19
to Google Cloud SQL discuss
Note that the flag "--skip-lock-tables" is not currently supported by Cloud SQL. A feature request was filed and you can follow up on the progress of this request here. The supported flags are listed here.