We are planning to use the jbackup command instead of the UNIX tar command.
From the jBASE manual we understood that jbackup and jrestore are best
suited for fast online backups. However, we are not seeing any performance
difference between tar and jbackup. Has anybody found a way to increase the
speed? If so, please share it.
We are using the following command to take the backup:
/home/root# find . -print | jbackup -m 10000 -B -E1 -v -s statusfile |
compress > backup.Z
jbackup and tar both take around 1 hour for 40GB of data.
Regards
Chandru
Dave.
GRENDATA COMPUTER SYSTEMS
DAVID GRENFELL
Win 2000 server, Build 2195, Service Pack 4
jBASE is Major 3.4, Minor 6, Patch 0304
private email: d.grenfell@(remove)grendata.com URL: www.grendata.com
The "bottleneck" is more likely the compression. If space is available, try
deferring the compression: write the backup to a plain file "backup" using
the jbackup -f option, and compress it afterwards.
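To make the deferred-compression idea concrete, here is a runnable sketch.
tar stands in for jbackup so the sequence works anywhere; on a jBASE system
you would replace the tar line with the jbackup -f pipeline shown in the
comment (paths here are made up for the demo):

```shell
# Runnable sketch of deferred compression. tar stands in for jbackup so the
# sequence works anywhere; on jBASE, replace the tar line with something like
#   find . -print | jbackup -f /backups/backup -s statusfile
rm -rf /tmp/jb_demo
mkdir -p /tmp/jb_demo/data
echo "sample record" > /tmp/jb_demo/data/FILE1
cd /tmp/jb_demo/data
# Step 1: write the backup uncompressed -- fast, sequential I/O only:
find . -print | tar -cf /tmp/jb_demo/backup -T -
# Step 2: compress after the fact, outside the backup window:
gzip -f /tmp/jb_demo/backup
ls -l /tmp/jb_demo/backup.gz
```

The point is that step 1 finishes at raw disk speed; the CPU-bound
compression in step 2 can run at leisure without holding the backup open.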
We write a daily backup (approx. 35GB) to a 250GB NFS mount and skip the
compression altogether. This allows us to keep the last 7 backups in place
for immediate use when required. The backup of 35GB to NFS takes about 1
hour; using a SAN instead, this drops to about 20 minutes. No compression
involved here, of course.
If you must compress on the fly, compare the performance of gzip or bzip2
with that of the generic compress. Forget tar, as it has no hashed-file
awareness: jbackup will produce a usable image of in-use files, whereas tar
may not. And jrestore allows you to restore individual items.
Jim Young
ICGS
I think the word "fast" is relative to other MV products (which are
traditionally slower than tar in carrying out a native MV account-save).
Jim Idle used to have a fastbak utility. I know that he no longer markets
it, but I believe that it may still be available from jBASE International.
40GB per hour is going some, I would think! I doubt you would exceed the
raw speed of tar with any solution, though jbackup would be faster at
backing up large (in terms of modulo) but comparatively empty files, because
it doesn't save the empty space (which tar does).
I don't know about tar, but I know that on a Windows platform, Windows
Backup will sometimes fail if a file is open at the time of the backup,
whereas jbackup will still work.
Not sure if any of this actually helps!
Regards
=======================================
Simon Verona
Dealer Management Services Ltd
Stewart House
Centurion Office Park
Julian Way
Sheffield
S9 1GD
Email: si...@dmservices.co.uk
Tel: 0870 080 2300
Fax: 0870 169 6747
-----Original Message-----
From: jB...@googlegroups.com [mailto:jB...@googlegroups.com] On Behalf Of
Dave
----- Original Message -----
From: <j...@cexp.com>
To: <jB...@googlegroups.com>
Sent: Thursday, January 19, 2006 4:50 PM
Subject: RE: jbackup and jrestore performance
Multi-process your backup, i.e. run several instances of your jbackup or
tar to back up separate files / folders.
I've seen both implemented at some sites, and their backup times were
reduced quite a bit.
My depreciated 2 cents.
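The multi-process suggestion above can be sketched with standard tools:
split the file list into partitions and archive each one in the background,
then wait for all of them. tar stands in for jbackup so this runs anywhere,
and the paths are made up for the demo:

```shell
# Sketch of a multi-process backup: partition the file list and run one
# archiver per partition in the background. tar stands in for jbackup; with
# jBASE, each partition would feed its own jbackup instance.
rm -rf /tmp/par_demo
mkdir -p /tmp/par_demo/src /tmp/par_demo/out
for i in 1 2 3 4; do echo "data $i" > /tmp/par_demo/src/f$i; done
cd /tmp/par_demo/src
find . -type f | split -l 2 - /tmp/par_demo/list.   # 2 names per partition
n=0
for list in /tmp/par_demo/list.*; do
  n=$((n + 1))
  tar -cf /tmp/par_demo/out/part$n.tar -T "$list" &   # one archiver each
done
wait   # let all background archivers finish
ls /tmp/par_demo/out
```

In practice the partitions should be balanced by data volume rather than by
file count, otherwise one large hashed file can dominate the wall-clock time.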
In fact, if compression must be used, then for a long transfer like this it
may well be that bzip2 is faster with appropriate tweaks. You should never
use compress, though, as its algorithm is pretty simple (these days) and it
becomes a CPU bottleneck. You should also realise that if the compression
ever fails, you will lose the entire backup.
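A small, runnable harness for that kind of comparison, shown here with gzip
at its fastest and best-ratio settings (the file names are made up, and the
numbers mean nothing outside your own data; only the method is the point):

```shell
# Harness for comparing compressors on a sample of your own data (a sketch;
# ratios and speeds depend entirely on the data). Wrap each compression
# line in time(1) to measure speed on a real sample.
rm -f /tmp/cmp_seed /tmp/cmp_demo.dat /tmp/cmp_fast.gz /tmp/cmp_best.gz
dd if=/dev/urandom of=/tmp/cmp_seed bs=1k count=64 2>/dev/null
# Repeat the seed so the sample is compressible, like real records:
cat /tmp/cmp_seed /tmp/cmp_seed /tmp/cmp_seed /tmp/cmp_seed > /tmp/cmp_demo.dat
gzip -1 -c /tmp/cmp_demo.dat > /tmp/cmp_fast.gz    # fastest gzip setting
gzip -9 -c /tmp/cmp_demo.dat > /tmp/cmp_best.gz    # best-ratio gzip setting
# Substitute bzip2 -1 / bzip2 -9 (or compress) here where installed.
ls -l /tmp/cmp_demo.dat /tmp/cmp_fast.gz /tmp/cmp_best.gz
```

Run it against a representative slice of the 40GB before committing to a
compressor in the backup pipeline.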
I should point out, though, that jbackup will at some point be shown to be
slower than tar (not by lots, but by some); this is because jbackup is a
formatted save, whereas tar is basically a raw byte-for-byte copy.
Jim - backing up to an NFS mount? Are you mad, or is it April 1? ;-) Get
SAMBA shares working if it must go to a network drive, but it is better to
back up locally and FTP the file afterwards.
Jim
________________________________
From: jB...@googlegroups.com on behalf of Simon Verona
Sent: Thu 1/19/2006 1:58 PM
To: jB...@googlegroups.com
Subject: RE: jbackup and jrestore performance
________________________________