Very slow restore on 22.0.2


Daniele Teneggi

Jan 26, 2023, 7:22:39 AM
to bareos...@googlegroups.com
Hello,
 
following an upgrade of the OS on our Bareos server (Debian 8 -> 11), we upgraded Bareos from version 17.x to 22.0.2 using the community packages from the site; the upgrade went well and the system is running correctly with the old configuration, needing only minor changes to disable TLS support.
 
Two days ago I had to restore a file from the last full backup of our fileserver. The restore went well but took around 7 hours to complete (for a single file); with the old version the time spent was always around 2-3 minutes on average. Our Bareos installation saves data on file volumes, not tapes, and doing some tests we verified that the restore took 7 hours because it started reading the volume from the beginning of the job until it found the requested file; the full job is around 5.5 TB in a single volume and the file was near the end, so Bareos had to read almost the whole volume before finding it (5.5 TB in roughly 7 hours works out to about 220 MB/s, consistent with a sequential read of the entire volume).
 
Digging a little further, we found that the old version, during a backup, wrote a sort of "index" in the jobmedia table, so the restore can seek close to the position of the requested file and complete a lot faster; a backup made with version 22 instead has only one row, indicating the starting and ending position of the whole job in the volume:
 
[backup done with the old version of Bareos, full job on a single volume, around 19 GB]:
 
+--------+-------+----------+----------------+---------------------+---------------+
| jobid  | level | jobfiles | jobbytes       | starttime           | volumename    |
+--------+-------+----------+----------------+---------------------+---------------+
| 148366 | F     |   53,780 | 19,403,934,091 | 2023-01-02 04:35:38 | win-infopub-1 |
+--------+-------+----------+----------------+---------------------+---------------+
bareos=# select * from jobmedia where jobid=148366;
 jobmediaid | jobid  | mediaid | firstindex | lastindex | startfile | endfile | startblock |  endblock  | volindex | jobbytes 
------------+--------+---------+------------+-----------+-----------+---------+------------+------------+----------+----------
    8079653 | 148366 |     452 |      46067 |     53780 |         5 |       5 | 2845841927 | 3277427288 |       20 |        0
    8079652 | 148366 |     452 |      30131 |     46067 |         5 |       5 | 1845905960 | 2845841926 |       19 |        0
    8079645 | 148366 |     452 |      14749 |     30131 |         5 |       5 |  845970039 | 1845905959 |       18 |        0
    8079640 | 148366 |     452 |       7721 |     14749 |         4 |       5 | 4141001410 |  845970038 |       17 |        0
    8079636 | 148366 |     452 |       7512 |      7721 |         4 |       4 | 3141065414 | 4141001409 |       16 |        0
    8079634 | 148366 |     452 |       7512 |      7512 |         4 |       4 | 2141129415 | 3141065413 |       15 |        0
    8079633 | 148366 |     452 |       7473 |      7512 |         4 |       4 | 1141193420 | 2141129414 |       14 |        0
    8079631 | 148366 |     452 |       7407 |      7473 |         4 |       4 |  141257426 | 1141193419 |       13 |        0
    8079630 | 148366 |     452 |       7369 |      7407 |         3 |       4 | 3436288733 |  141257425 |       12 |        0
    8079629 | 148366 |     452 |       7328 |      7369 |         3 |       3 | 2436352734 | 3436288732 |       11 |        0
    8079627 | 148366 |     452 |       7298 |      7328 |         3 |       3 | 1436416734 | 2436352733 |       10 |        0
    8079626 | 148366 |     452 |       7274 |      7298 |         3 |       3 |  436480745 | 1436416733 |        9 |        0
    8079624 | 148366 |     452 |       6226 |      7274 |         2 |       3 | 3731512054 |  436480744 |        8 |        0
    8079620 | 148366 |     452 |       6058 |      6226 |         2 |       2 | 2731576066 | 3731512053 |        7 |        0
    8079617 | 148366 |     452 |       3325 |      6058 |         2 |       2 | 1731640078 | 2731576065 |        6 |        0
    8079613 | 148366 |     452 |       3317 |      3325 |         2 |       2 |  731704078 | 1731640077 |        5 |        0
    8079607 | 148366 |     452 |       3317 |      3317 |         1 |       2 | 4026735374 |  731704077 |        4 |        0
    8079598 | 148366 |     452 |       1789 |      3317 |         1 |       1 | 3026799387 | 4026735373 |        3 |        0
    8079590 | 148366 |     452 |        244 |      1789 |         1 |       1 | 2026863387 | 3026799386 |        2 |        0
    8079582 | 148366 |     452 |          1 |       244 |         1 |       1 | 1026927388 | 2026863386 |        1 |        0
 
 
[backup done with version 22, full job on a single volume, around 5.5 TB]:
 
+--------+-------+-----------+-------------------+---------------------+---------------+
| jobid  | level | jobfiles  | jobbytes          | starttime           | volumename    |
+--------+-------+-----------+-------------------+---------------------+---------------+
| 149513 | F     | 2,886,927 | 5,637,588,281,412 | 2023-01-21 00:30:02 | fileserver-12 |
+--------+-------+-----------+-------------------+---------------------+---------------+
 
bareos=# select * from jobmedia where jobid=149513;
 jobmediaid | jobid  | mediaid | firstindex | lastindex | startfile | endfile | startblock | endblock  | volindex | jobbytes 
------------+--------+---------+------------+-----------+-----------+---------+------------+-----------+----------+----------
    8086618 | 149513 |    2146 |          1 |   2886927 |         6 |    1320 | 1489561298 | 465483062 |        1 |        0
 
(the two jobs are from different clients because the first one is among the few jobs done with the old version that are still in the catalog)
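
As a quick way to spot other jobs affected by this, a catalog query along these lines (only a sketch, using the same job and jobmedia columns shown above) counts how many index rows each recent full job received:

-- Sketch: count the jobmedia rows written for each recent full job;
-- a multi-terabyte job with a single row means the restore cannot seek
-- inside the volume and has to read it sequentially.
SELECT j.jobid, j.jobbytes, COUNT(jm.jobmediaid) AS jobmedia_rows
FROM job j
JOIN jobmedia jm ON jm.jobid = j.jobid
WHERE j.level = 'F'
GROUP BY j.jobid, j.jobbytes
ORDER BY j.jobid DESC
LIMIT 20;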
 
Is there something that has to be changed in the configuration to make it work as before? I searched the documentation but could not find anything about it. I also tried to activate the checkpoint feature, but I cannot see any visible difference in this behaviour.
 
Thank you 
 
Daniele
 
 

Philipp Storz

Jan 26, 2023, 12:25:39 PM
to Daniele Teneggi, bareos...@googlegroups.com
Hello Daniele,

thank you very much for your problem description.

What exact version have you used for the backup, and is your current version the same or different?

Please specify the full version info, e.g. 22.0.2~pre10.81f8d3b2b.

BTW it is usually a good idea to limit the volume size so that the volumes do not get too big.
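In Bareos this is typically done per pool, e.g. with the Maximum Volume Bytes directive in the Pool resource. To see which volumes have already grown very large, a catalog query like the following is enough (illustrative only, using the standard media table columns):

-- Illustrative: list the biggest volumes in the catalog to see which
-- pools would benefit most from a size limit.
SELECT volumename, volbytes, voljobs, volstatus
FROM media
ORDER BY volbytes DESC
LIMIT 10;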

Best regards,

Philipp



--
Kind regards

Philipp Storz philip...@bareos.com
Bareos GmbH & Co. KG Phone: +49 221 63 06 93-92
http://www.bareos.com Fax: +49 221 63 06 93-10

Registered office: Köln | Local court Köln: HRA 29646
Managing directors: Stephan Dühr, M. Außendorf,
J. Steffens, P. Storz

Daniele Teneggi

Jan 26, 2023, 1:18:33 PM
to Philipp Storz, bareos...@googlegroups.com
Hello Philipp,
 
this is the version that did the backups (and is currently running):
 
Version: 22.0.2~pre (02 January 2023) Debian GNU/Linux 11 (bullseye)
 
I see that there's a new version available in your community repo (22.0.2~pre10.81f8d3b2b-15); I can install it tomorrow if it can help resolve our problem.

Regarding the size of the volumes, we are slowly moving toward more manageable sizes (around 100-500 GB max, depending on the client data size).
 
Thank you and have a nice day 
 
Daniele 

Philipp Storz

Jan 27, 2023, 5:10:07 AM
to Daniele Teneggi, bareos...@googlegroups.com
Hello Daniele,

On 26.01.23 at 19:18, Daniele Teneggi wrote:
> Hello Philipp,
> this is the version that did the backups (and is actually running):
> Version: 22.0.2~pre (02 January 2023) Debian GNU/Linux 11 (bullseye)
> I see that there's a new version available on your community repo (22.0.2~pre10.81f8d3b2b-15), i can
> install it tomorrow if this can help resolve our problem.
Yes, please always use the newest version and check whether the problem still exists. Of course you need
to make new backups, as the existing ones will not be altered.

Best regards,

Philipp

Daniele Teneggi

Jan 27, 2023, 11:47:27 AM
to Philipp Storz, bareos...@googlegroups.com
 
Hello Philipp,
 
today I upgraded to the latest available version (22.0.2~pre10.81f8d3b2b); the full backups are scheduled for the weekend, so on Monday I will check the results.
 
Thank you and have a nice day.

Daniele Teneggi

Jan 30, 2023, 11:09:19 AM
to Philipp Storz, bareos...@googlegroups.com
Hello Philipp,
 
the upgrade to version 22.0.2~pre10.81f8d3b2b apparently had no effect: Saturday morning's full backup still has only a single jobmedia row (volindex 1):
 
*list jobid=149923     
+--------+----------------+---------------+---------------------+----------------+------+-------+-----------+-------------------+-----------+
| jobid  | name           | client        | starttime           | duration       | type | level | jobfiles  | jobbytes          | jobstatus |
+--------+----------------+---------------+---------------------+----------------+------+-------+-----------+-------------------+-----------+
| 149923 | fileserver-job | fileserver-fd | 2023-01-28 00:30:02 | 1 day 19:39:16 | B    | F     | 2,890,685 | 5,644,681,717,297 | T         |
+--------+----------------+---------------+---------------------+----------------+------+-------+-----------+-------------------+-----------+

bareos=# select * from jobmedia where jobid=149923;
 jobmediaid | jobid  | mediaid | firstindex | lastindex | startfile | endfile | startblock | endblock  | volindex | jobbytes 
------------+--------+---------+------------+-----------+-----------+---------+------------+-----------+----------+----------
    8087093 | 149923 |    2135 |          1 |   2890685 |         1 |    1317 | 3486853479 | 972453106 |        1 |        0
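
For comparison, a single query over the jobids mentioned in this thread (same jobmedia columns as above, purely illustrative) shows the difference at a glance:

-- Illustrative: compare the jobmedia rows written for the old-version job
-- (148366) and the two version-22 full jobs (149513, 149923).
SELECT jobid, COUNT(*) AS jobmedia_rows,
       MIN(firstindex) AS firstindex, MAX(lastindex) AS lastindex
FROM jobmedia
WHERE jobid IN (148366, 149513, 149923)
GROUP BY jobid
ORDER BY jobid;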

I tested a restore, and Bareos still started reading from the beginning of the job.

Do you have any suggestions?

Thank you and have a nice day.
