mysqldump 128 GB limit ?


Kai Zimmer

Jun 11, 2020, 8:29:30 AM6/11/20
to bareos-users
Hi,

in former times I used Bareos with a MySQL database backend. However, it
became too slow and I switched to a secondary PostgreSQL catalogue. I need
to keep the MySQL database as a history, though.

Now I'm switching from Ubuntu 16.04 (MySQL 5.7) to Ubuntu 20.04 (MySQL
8.0) and I'm unable to start the mysqld server because of incompatible
data structures. I tried dumping the database on another Ubuntu 16.04
machine, but the SQL dump file is only about 128 GB in size, although
the binary index files are > 200 GB in size.

Is there a known limit in mysqldump? I'm using the ext4 file system,
which supports single files of up to 16 TB.

Best,

Kai

Andreas Haase

Jun 11, 2020, 8:41:48 AM6/11/20
to Kai Zimmer, bareos-users
Hello,


On 11.06.2020 at 14:29, Kai Zimmer <zim...@bbaw.de> wrote:

Now I'm switching from Ubuntu 16.04 (MySQL 5.7) to Ubuntu 20.04 (MySQL 8.0) and I'm unable to start the mysqld server because of incompatible data structures. I tried dumping the database on another Ubuntu 16.04 machine, but the SQL dump file is only about 128 GB in size, although the binary index files are > 200 GB in size.

Did you get any error message or return code indicating the dump wasn't successful? The size of files and indices on disk is only a rough indication of the size the dump will have. If you did not get any error, try loading the dump into the new DBMS and compare the resulting database with the old one.

Regards,
Andreas

Spadajspadaj

Jun 11, 2020, 10:02:07 AM6/11/20
to bareos...@googlegroups.com
On 11.06.2020 14:29, Kai Zimmer wrote:
> Hi,
>
> in former times I used Bareos with a MySQL database backend. However,
> it became too slow and I switched to a secondary PostgreSQL catalogue. I
> need to keep the MySQL database as a history, though.
>
> Now I'm switching from Ubuntu 16.04 (MySQL 5.7) to Ubuntu 20.04 (MySQL
> 8.0) and I'm unable to start the mysqld server because of incompatible
> data structures. I tried dumping the database on another Ubuntu 16.04
> machine, but the SQL dump file is only about 128 GB in size, although
> the binary index files are > 200 GB in size.
>
The size of the database files is not directly related to the dump size.
Firstly, the database files can contain space from which data has
already been deleted but which has not been reused yet. Secondly,
remember that database files contain not only the raw data (which is
what gets written to the dump file) but also index structures. The more
indices you have created in the database, the more extra space is used.
So I wouldn't be surprised if the dump was indeed performed properly.

If you have doubts, however, I'd advise you to restore the database
onto another server and check whether SELECT COUNT(*) from each table
returns the same result as on your source server.
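That per-table comparison can be scripted. A sketch, assuming both servers are reachable, the catalog schema is named `bareos`, and `old-server`/`new-server` are placeholder host names:

```shell
#!/bin/bash
# Compare exact row counts per table between the source and the
# restored server. Host names and the "bareos" schema are assumptions.
for host in old-server new-server; do
    for t in $(mysql -h "$host" -N -e "SHOW TABLES IN bareos"); do
        n=$(mysql -h "$host" -N -e "SELECT COUNT(*) FROM bareos.\`$t\`")
        printf '%s %s\n' "$t" "$n"
    done > "/tmp/counts.$host"
done
diff "/tmp/counts.old-server" "/tmp/counts.new-server" \
    && echo "row counts match"
```

Be aware that COUNT(*) on a very large File table can take a while; information_schema's TABLE_ROWS is faster but only an estimate for InnoDB, so it is unsuitable for an exact comparison.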

Jörg Steffens

Jun 12, 2020, 8:16:02 AM6/12/20
to bareos...@googlegroups.com
On 11.06.20 at 14:29, Kai Zimmer wrote:
While this does not answer your question, you may want to take a look at
bareos-dbcopy (available since Bareos >= 19). It is normally used to
migrate a MySQL Bareos catalog to PostgreSQL:

https://docs.bareos.org/master/Appendix/Howtos.html#section-migrationmysqltopostgresql

Regards,
Jörg

--
Jörg Steffens joerg.s...@bareos.com
Bareos GmbH & Co. KG Phone: +49 221 630693-91
http://www.bareos.com Fax: +49 221 630693-10

Sitz der Gesellschaft: Köln | Amtsgericht Köln: HRA 29646
Komplementär: Bareos Verwaltungs-GmbH
Geschäftsführer:
S. Dühr, M. Außendorf, Jörg Steffens, P. Storz

Oleg Volkov

Jun 13, 2020, 4:49:21 AM6/13/20
to bareos-users
While I am using Bareos with a PostgreSQL database, my hint should suit MySQL too.
I've slightly modified the /usr/lib/bareos/scripts/make_catalog_backup.pl script to pipe the dump to gzip, like:
exec("HOME='$wd' pg_dump -c | gzip > '$wd/$args{db_name}.sql.gz'");
The corresponding Catalog.conf was modified to back up the *.gz version.
Catalog backup time and the required disk space were dramatically reduced.

If you are concerned about your index size, it is healthy for any database to rebuild indexes periodically.
You can find the index CREATE statements in the SQL scripts under /usr/lib/bareos/scripts/ddl/creates/.