xbstream -x leaves files compressed with .qp


unit0x03

May 11, 2012, 3:24:03 PM5/11/12
to percona-d...@googlegroups.com
Hi guys,

I'm just finally getting around to testing percona-xtrabackup 2.0, and I've
run into a weird problem. The dump using --stream=xbstream works fine, but
when I extract it with "xbstream -x < backup.xbstream", it unpacks all the
files but leaves them .qp compressed. Attempts to prepare the backup then
fail:


# innobackupex --apply-log ./
[...]
xtrabackup: cd to /srv/mysql/data
xtrabackup: Error: cannot open ./xtrabackup_checkpoints
xtrabackup: error: xtrabackup_read_metadata()
xtrabackup: This target seems not to have correct metadata...
120511 19:21:38 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: Warning: cannot open ./xtrabackup_logfile. will try to find.
120511 19:21:38 InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
xtrabackup: Fatal error: cannot find ./xtrabackup_logfile.
xtrabackup: Error: xtrabackup_init_temp_log() failed.
innobackupex: Error:
innobackupex: ibbackup failed at /usr/bin/innobackupex line 371.

# ls
xtrabackup_binary
xtrabackup_binlog_info
xtrabackup_checkpoints.qp
xtrabackup_logfile.qp

A look at the data files shows that all .ibd files are similarly still
compressed as .qp. Is there an additional step needed to run through and
decompress everything? I can't see anything about it in the innobackupex
docs:
http://www.percona.com/doc/percona-xtrabackup/innobackupex/streaming_backups_innobackupex.html

I'm using percona-xtrabackup-2.0.0-417.rhel5.

Thanks,
Graeme Humphries

Simon Kuhn

May 11, 2012, 4:37:01 PM5/11/12
to percona-d...@googlegroups.com
It would be lovely if xbstream would decompress as it extracts (à la tar), but the current version doesn't seem to support it.

I think the current best alternative is to spawn a separate process on the receiving host that waits for the qpress-compressed files (using inotify) and decompresses them as they finish being written. I wrote something that does this, which is attached. It just naively forks off qpress processes as files are finished; for my use case this is fine, though it might benefit from limiting the number of concurrent decompressions.
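
Roughly, the idea looks like this (a minimal sketch, not necessarily identical to the attached script; it assumes inotify-tools and qpress are installed):

#!/bin/bash
# Sketch of an inotify-based waiter: watch a directory tree and
# decompress each *.qp file as soon as it has been fully written.
# Note: inotifywait -r only watches directories that exist at startup,
# so subdirectories created later may be missed.
WATCH_DIR="${1:?usage: $0 /path/to/backup-dir}"

inotifywait -m -r -e close_write --format '%w%f' "$WATCH_DIR" |
while read -r file; do
    case "$file" in
        *.qp)
            # Decompress into the file's own directory, then drop the .qp copy.
            ( qpress -d "$file" "$(dirname "$file")" && rm -f "$file" ) &
            ;;
    esac
done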

I use it like so:

Receiver:
qpress-waiter.sh /mnt/data

Sender:

innobackupex --user=xx --password=xx --compress --compress-threads=8 --parallel=8 \
--slave-info --safe-slave-backup --stream=xbstream --tmpdir=/mnt/data/tmp ./ \
| ssh -c arcfour blah@host xbstream -x -C /mnt/data

(For my particular case I didn't find much of a speedup using netcat, but you could easily incorporate that; a rough sketch is below.) I also had to apply the diff from [1] to get xbstream to work reliably.
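
A hypothetical netcat variant (untested here; port 9999 is arbitrary, and the listen syntax differs between netcat flavours) would be something like:

Receiver:
nc -l -p 9999 | xbstream -x -C /mnt/data    # BSD netcat: nc -l 9999

Sender:
innobackupex --user=xx --password=xx --compress --compress-threads=8 --parallel=8 \
--slave-info --safe-slave-backup --stream=xbstream --tmpdir=/mnt/data/tmp ./ \
| nc receiving-host 9999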

Simon

[1] https://bugs.launchpad.net/percona-xtrabackup/+bug/977995

Attachment: qpress-waiter.sh

unit0x03

May 11, 2012, 4:57:00 PM5/11/12
to percona-d...@googlegroups.com
On Fri, May 11, 2012 at 1:37 PM, Simon Kuhn <si...@zombe.es> wrote:
> It would be lovely if xbstream would decompress as it extracts (à la tar), but the current version doesn't seem to support it.

Hmm, that is unfortunate, since it's not obvious from the docs how to
decompress this non-standard compression format once you've got it on
disk. Does anyone have a simple guide that takes a compressed xbstream
backup and turns it into a running DB, complete with decompressing the
.qp data files after extraction?

Graeme

Tim Chadwick

Jul 11, 2012, 4:46:57 PM7/11/12
to percona-d...@googlegroups.com

I have come across this problem as well; any news? Many thanks in advance.

~tim




Raghavendra D Prabhu

Jul 12, 2012, 12:27:41 PM7/12/12
to percona-d...@googlegroups.com

Hi,
xbstream is an archival format like tar; it does no compression
of its own, which is why qpress is used.

I have used this to decompress after an xbstream extract before
(it also decompresses in parallel):

find /path/to/backup-dir -type f -name '*.qp' -printf '%p\n%h\n' | xargs -P $(grep -c processor /proc/cpuinfo) -n 2 qpress -d
innobackupex --copy-back /path/to/backup-dir

The copy-back won't copy the *.qp files back to the MySQL datadir.
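
Putting the whole restore together, it would go roughly like this (directory
names are placeholders; the last step assumes the usual /var/lib/mysql datadir):

# 1. Extract the stream into the backup directory
mkdir -p /path/to/backup-dir
xbstream -x -C /path/to/backup-dir < backup.xbstream

# 2. Decompress all .qp files in parallel
find /path/to/backup-dir -type f -name '*.qp' -printf '%p\n%h\n' \
  | xargs -P $(grep -c processor /proc/cpuinfo) -n 2 qpress -d

# 3. Prepare the backup
innobackupex --apply-log /path/to/backup-dir

# 4. Copy back into the (empty) datadir and fix ownership
innobackupex --copy-back /path/to/backup-dir
chown -R mysql:mysql /var/lib/mysql

(The leftover *.qp files can be removed after step 2 if you like, but as
noted above --copy-back skips them anyway.)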





Regards,
--
Raghavendra D Prabhu (TZ: GMT + 530)
Call: +91 96118 00062
mailto:raghavend...@percona.com
Percona, Inc. - http://www.percona.com / Blog: http://www.mysqlperformanceblog.com/
Skype: percona.raghavendrap
GPG: 0xD72BE977

Stewart Smith

Aug 6, 2012, 10:36:10 PM8/6/12
to Vojtech Kurka, percona-d...@googlegroups.com
Vojtech Kurka <vojtec...@gmail.com> writes:
>> xbstream is an archival format like tar; it does no compression
>> of its own, which is why qpress is used.
>
>
> Raghavendra <raghavend...@percona.com>,
>
> tar can decompress the whole archive if you tell it to
> (--use-compress-program=PROG). It would be nice if xbstream could
> decompress the individual *.qp files.
>
> Currently, when you need a full restore (e.g. to load a new slave server),
> you have to do too much I/O.
> For example: I have 1TB of data in many tablespaces, and the compressed
> backup is about 500GB. The recovery:
> 1.) unpack the xbstream archive => 500GB of write I/O, 500GB of read I/O
> 2.) decompress the individual .qp files => 1TB of write I/O, 500GB of read I/O
> 3.) apply log
> 4.) --copy-back => 1TB of write I/O, 1TB of read I/O
>
> Steps 1.) and 4.) should be avoidable, because they make the restore twice
> as long as it needs to be. The --move-back option is implemented in
> Launchpad, but I think the patch is still not approved.
>
> Yes, everyone can put together workarounds like the ones mentioned earlier
> in this thread, but a native xtrabackup implementation would be nice. Or is
> it already possible and I'm missing something?

We're planning to make things much easier in future versions (but, as
always, patches are welcome!). What's good to see is that people are using
compressed backups, and that the biggest remaining problem is finding nicer
and faster ways of restoring :)

--
Stewart Smith