I had posted some suspicions about the implementation and performance of Data
Protector 6.0's enhanced incremental backup.
In the meantime Data Protector was unable to back up the /var directory of a
non-Cell-Server client, because the backup was terribly slow. There were
about 1.5GB to save, and after about 30 minutes the completion was at
about "9%". So I had a look at /var:
/var is mirrored on two local 15k RPM SCSI LVD disks, so it shouldn't be that
slow. STM found no problems with the disk hardware either, and the mirror is
intact.
So some statistics:
/var/opt/omni/enhincrdb (and its subdirectories) contains about 229 thousand
files, totaling 1.76GB!
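If you want to check your own clients, a quick sketch like the one below
gives the numbers (Python just for portability; the path is the default
client location, adjust it if your installation differs):

    import os

    # Tally file count and total size under the enhanced-incremental
    # database directory (default client location; adjust if needed).
    root = "/var/opt/omni/enhincrdb"
    count = 0
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
                count += 1
            except OSError:
                pass  # file vanished between listing and stat
    print("%d files, %.2f GB" % (count, total / 1024.0 ** 3))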
So when marketing talks about "Enhanced", it apparently means "huge
additional resources required".
I still don't know why the "database" used for the backup is located on the
client, or why it isn't an actual database (like a Berkeley DB hash) but a
collection of hundreds of thousands of tiny files instead. Except for the
MS-DOS 3.30 FAT filesystem, there's probably nothing in the world with poorer
performance than that!
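To illustrate what I mean: a single hash database could keep one record per
backed-up file instead of one tiny file per backed-up file. A toy sketch of
the idea (Python's dbm module, which can sit on top of Berkeley DB; the
record layout is invented for illustration, it is not what DP actually
stores):

    import dbm

    # One database file instead of hundreds of thousands of tiny files:
    # key = path of the backed-up file, value = whatever per-file state
    # the incremental logic needs (mtime and size here, purely made up).
    with dbm.open("enhincr_state", "c") as db:
        db[b"/etc/hosts"] = b"mtime=1176000000;size=1024"
        # Lookup on the next incremental run:
        state = db.get(b"/etc/hosts")
        print(state)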
So the probable solution is: disable "enhanced incremental backup" as long as
it's crap like that (probably either until DP 7.0, or forever), and also
remove those /var/opt/omni/enhincrdb directories wherever you find them to
restore the performance of your system.
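The cleanup itself is trivial; something along these lines (a sketch only,
and obviously to be run only after the feature has been disabled in all
backup specifications):

    import os
    import shutil

    # Remove the enhanced-incremental database from a client. Default
    # location on HP-UX; adjust the path for your installation.
    path = "/var/opt/omni/enhincrdb"
    if os.path.isdir(path):
        shutil.rmtree(path)
        print("removed " + path)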
BTW: A similar thing is true for Linux (lots of tiny files are created), but
the OS (using ReiserFS) seems to deal with that much better than HP-UX 11.11
does.
Regards,
Ulrich
<snip>
> So the probable solution is: disable "enhanced incremental backup" as long as
<snip>
Out of curiosity: why did you enable it in the first place?
Regards,
Frank.
That's a good question with hindsight, like "Why did I get sooo drunk at the
work Xmas party?", but it seemed like a good idea at the time. :o
I thought the only new "feature" in DP6.0 (over DP5.5) was the "enhanced
incremental backup" stuff? If it is BAD (Broken As Designed), what's the
point of upgrading?
I'm going to give Ulrich the benefit of the doubt and assume he did this in a
"test environment" to investigate if/how the feature works. I'll admit to
trying it also, and coming to much the same conclusion as Ulrich.
--
Best regards,
Frank.
Excellent question! The idea was to _speed up_ backups by avoiding saving
identical files multiple times (e.g. if you back up 15 servers running the
same OS). But it seems the mechanism doesn't work across hosts anyway.
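That was the theory, anyway: recognize identical content by checksum and
store it only once. A toy illustration of the idea (this is not DP's actual
mechanism, and MD5 is chosen just for the example):

    import hashlib

    # Files with the same content hash only need to be stored once,
    # no matter which host they come from.
    seen = {}

    def store_once(host, path, data):
        digest = hashlib.md5(data).hexdigest()
        if digest in seen:
            return "duplicate of %s:%s" % seen[digest]
        seen[digest] = (host, path)
        return "stored new copy"

    print(store_once("server1", "/etc/hosts", b"127.0.0.1 localhost\n"))
    print(store_once("server2", "/etc/hosts", b"127.0.0.1 localhost\n"))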
The other reason was to use synthetic backup or object consolidation, but the
file library code isn't stable either, so I've been fighting with HP support
for about three months over how to delete a file library. The latest
procedure would be to dump the database into text files, edit those files,
and then rebuild the database from them. You get an idea of what's wrong...
Anyway: When turning _off_ Enhanced Incremental backup on one server, the backup
time went from 1 hour 30 minutes down to 15 minutes.
With very few exceptions: if you want something done well, do it yourself!
Regards,
Ulrich