table copy, bulk loader and replication | altering large table column from INT to BIGINT


ThePerformics

Oct 14, 2015, 1:17:09 PM
to tokudb-user
Hi all,

Does anyone know a way to use the bulk loader when performing a LOAD DATA INFILE on a replication master so that the slaves also use the bulk loader? I have about 1 billion rows in some tables, and the load took almost 24 hours to replicate, causing 30K+ seconds of replication delay.

I'm on MariaDB 10.0.17 with TokuDB 7.5.6, with binlog_format = ROW.

I know I could probably load the data on the master with binary logging turned off and do the same on each slave, but I want to be 100% sure I won't end up with any discrepancies.
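For reference, the skip-the-binlog approach mentioned above might look something like this (the table name and file path are placeholders, not from the thread):

```sql
-- Run this on the master AND repeat it on each slave, so the load
-- happens locally everywhere and never enters the binlog/replication
-- stream.  SET sql_log_bin requires the SUPER privilege and only
-- affects the current session.
SET SESSION sql_log_bin = 0;

LOAD DATA INFILE '/data/t1.csv'   -- hypothetical path
INTO TABLE t1                      -- hypothetical table name
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';

SET SESSION sql_log_bin = 1;
```

The risk, as noted, is divergence if the file or load options differ between servers, which is why a checksum afterwards is a good idea.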


Another question:

I am doing this because some tables have a primary auto-increment column of type INT, and it is growing very fast because of many "INSERT ... ON DUPLICATE KEY UPDATE" statements. I'm about to hit the INT limit, so I want to alter the column to BIGINT, but the ALTER performs a full table copy, which is also bad for replication. Does anyone have a better idea than altering the table in place, or loading the data into a fresh table that has already been altered?
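The "fresh table" alternative mentioned above could be sketched roughly like this (all names hypothetical):

```sql
-- Create an empty copy with the widened column, load into it,
-- then swap the tables atomically.
CREATE TABLE t1_new LIKE t1;
ALTER TABLE t1_new MODIFY COLUMN id BIGINT NOT NULL AUTO_INCREMENT;

-- ... bulk-load the data into t1_new here ...

-- Atomic swap; keep the old table around until the new one is verified.
RENAME TABLE t1 TO t1_old, t1_new TO t1;
```

This trades the in-place table copy for a separate load, but writes arriving between the load and the RENAME still have to be handled somehow.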

Many thanks,
Dan

Phil

Oct 15, 2015, 10:07:42 PM
to tokudb-user
Hi Dan. I'm no expert, but I'll try to assist.

You could do what you suggested and load on each slave. You could confirm the data is identical afterwards using pt-table-checksum, and repair any differences with pt-table-sync. They work with TokuDB tables (I've used them), but it's best when each table has a unique index.
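A sketch of how that verification might look with the Percona Toolkit (host names, user, and database are placeholders):

```sh
# Run on the master: checksums each table chunk-by-chunk and writes
# results to percona.checksums, which replicates to the slaves so
# master and slave checksums can be compared.
pt-table-checksum h=master-host,u=checksum_user --ask-pass --databases=mydb

# Preview any differences found, then (if they look right) sync them
# from the master to the slaves.
pt-table-sync --print --replicate percona.checksums h=master-host
pt-table-sync --execute --replicate percona.checksums h=master-host
```

Running --print before --execute is the safe order, since pt-table-sync will otherwise start changing slave data immediately.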

Regarding your ALTER TABLE question: I thought ALTER operations (DDL) are not row-replicated; they are replicated as statements even with binlog_format = ROW. Therefore you shouldn't be that worried about replication. With TokuDB, your ALTER TABLE for column expansion can be a hot operation as well, I think.
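If TokuDB's hot column expansion does cover widening an integer column on this version (worth confirming on a copy of the table first; the table and column names below are hypothetical), the statement itself would be the ordinary ALTER:

```sql
-- On TokuDB, expanding an integer column may run as a hot
-- (in-place) operation rather than a full table copy, assuming
-- hot alter is not disabled (tokudb_disable_hot_alter = OFF).
ALTER TABLE t1 MODIFY COLUMN id BIGINT NOT NULL AUTO_INCREMENT;
```

Either way, as DDL it should replicate as a statement, so each slave would perform its own (hot or copying) alter locally rather than replaying row images.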

Phil