jbackup and jrestore performance


chandru

unread,
Jan 19, 2006, 2:58:49 PM1/19/06
to jBASE
Hi,

We are planning to use jbackup command instead of UNIX tar command.
From the jBASE manual we understood that jbackup and jrestore are best
suited for fast online backups. However, we are not seeing any
difference between tar and jbackup in terms of performance.

Has anybody got a solution to increase the speed? If yes, please
provide us.

We are using the following command to take a backup

/home/root# find . -print | jbackup -m 10000 -B -E1 -v -s statusfile |
compress > backup.Z

jbackup and tar both take around 1 hour for 40GB of data.


Regards
Chandru

David Grenfell

unread,
Jan 19, 2006, 4:20:15 PM1/19/06
to jB...@googlegroups.com
I just did some quick math here, and your system seems to be in the ballpark
of about 50GB per hour. This, as far as I know, is pretty standard for tape
backups. Your bottleneck is your drive transfer rate, not the software.
If you have to squeeze in a backup, then I suggest that you do your jbackup
to a hard drive file and then back it up later.

Dave.


GRENDATA COMPUTER SYSTEMS
DAVID GRENFELL
Win 2000 server, Build 2195, Service Pack 4
jBASE is Major 3.4, Minor 6, Patch 0304

private email: d.grenfell@(remove)grendata.com URL: www.grendata.com

j...@cexp.com

unread,
Jan 19, 2006, 4:50:08 PM1/19/06
to jB...@googlegroups.com
The jbackup command implies he is already writing to disk rather than tape.

The "bottleneck" is more likely the compression. If space is available, try
deferring the compression and writing the backup file to file "backup" using
the jbackup -f option.
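In command form, the deferred-compression idea might look like this: a sketch based on the original command line, assuming -f names the output backup file as described (the /backup path and filename are illustrative):

```shell
# Write the backup straight to disk with -f, keeping compression out of
# the critical path; compress later, off-peak, if space is tight.
cd /home/root
find . -print | jbackup -m 10000 -B -E1 -v -s statusfile -f /backup/daily.jbk

# Later (e.g. from cron, once users are off the system):
gzip /backup/daily.jbk      # produces /backup/daily.jbk.gz
```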

We write a daily backup (approx 35GB) to a 250GB NFS mount and skip the
compression altogether. This allows us to keep the last 7 backups in place
for immediate use when required. The backup of 35GB to NFS takes about 1
hour. Using SAN instead, this drops to about 20 minutes. No compression
involved here of course.

If you must compress on the fly, compare the performance of gzip or bzip to
the generic compress. Forget tar as it has no hash file awareness: jbackup
will produce a usable image of in-use files, tar may not. And jrestore
allows you to restore individual items.
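To see how much the compressor matters, it is easy to race them on a sample of your own data first. A rough sketch using standard tools (the /tmp paths are illustrative; substitute a copy of a real hash file for meaningful numbers):

```shell
# Build a ~20MB sample: half incompressible, half repetitive, so the
# compressors have something realistic to chew on.
dd if=/dev/urandom of=/tmp/sample.dat bs=1024 count=10240 2>/dev/null
yes "jBASE backup record data" | head -n 400000 >> /tmp/sample.dat

# Time each compressor on the same input.
time gzip  -c /tmp/sample.dat > /tmp/sample.gz
time bzip2 -c /tmp/sample.dat > /tmp/sample.bz2

# Compare the resulting sizes.
wc -c /tmp/sample.dat /tmp/sample.gz /tmp/sample.bz2
```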

Jim Young
ICGS

Jason.Sh...@cexp.com

unread,
Jan 19, 2006, 4:58:03 PM1/19/06
to jB...@googlegroups.com, Jason.Sh...@cexp.com
Also, IMHO, if you're getting record-level integrity at the same speed as
tar, consider yourself lucky ;-)

Simon Verona

unread,
Jan 19, 2006, 4:58:48 PM1/19/06
to jB...@googlegroups.com
It's pretty impressive that you see equal performance between tar and jbackup,
seeing that tar just sees the jBASE file as a single lump, while jbackup
goes through the file extracting data record by record!

I think the word "fast" is relative to other MV products (which are
traditionally slower than tar in carrying out a native MV account-save).

Jim Idle used to have a fastbak utility. I know that he no longer markets
it, but I believe that it may still be available from jBASE International.

40GB per hour is going some, I would think! I doubt you would exceed
the raw speed of tar with any solution, though jbackup would be faster at
backing up large (in terms of modulo) but comparatively empty files, because
it doesn't save the empty space (which tar does).

I don't know about tar, but I know that on a Windows platform, Windows
backup will sometimes fail if a file is open at the time of the backup,
whereas jbackup will still work.

Not sure if any of this actually helps!

Regards

=======================================
Simon Verona
Dealer Management Services Ltd
Stewart House
Centurion Office Park
Julian Way
Sheffield
S9 1GD

Email: si...@dmservices.co.uk
Tel: 0870 080 2300
Fax: 0870 169 6747


Gary Calvin

unread,
Jan 19, 2006, 5:05:26 PM1/19/06
to jB...@googlegroups.com
You might also be able to pick up a bit of performance improvement if the backup file is on a separate physical disk/array from the database. In other words, if your database is on /dev/sdb1, try backing up to a file on /dev/sda1.

-Gary-

David Grenfell

unread,
Jan 19, 2006, 5:27:37 PM1/19/06
to jB...@googlegroups.com
Oops. Sorry Jim, I should have read the post a little better. But an
aside here: isn't using tar on Unix similar to using NTBACKUP on Windows,
and is this not what you straightened me out on a while back, explaining the
virtues of using jbackup to prevent corrupted backups?

Dave



Gerry

unread,
Jan 19, 2006, 10:18:04 PM1/19/06
to jBASE
If you've got a big memory-based drive, try creating the backup
file(s) there. I think some people call it a "virtual drive" or RAM disk;
correct me if I'm wrong.

Multi-process your backup, i.e. run several instances of jbackup /
tar to back up separate files / folders.

I've seen both implemented at some sites, and their backup time dropped
quite a bit.
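The parallel approach can be sketched with tar and shell job control (directory and archive names are illustrative; the same pattern applies to several jbackup pipelines run side by side):

```shell
# Create a toy source tree with two independent subtrees.
mkdir -p /tmp/src/accounts /tmp/src/logs /tmp/out
echo "rec1" > /tmp/src/accounts/CUSTOMERS
echo "rec2" > /tmp/src/logs/AUDIT

# Back up each subtree in its own background process...
tar cf /tmp/out/accounts.tar -C /tmp/src accounts &
tar cf /tmp/out/logs.tar     -C /tmp/src logs &

# ...and wait for both to finish before declaring the backup complete.
wait
ls -l /tmp/out
```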

My depreciated 2 cents.

Jim Idle

unread,
Jan 22, 2006, 12:24:28 PM1/22/06
to jB...@googlegroups.com

> If you must compress on the fly, compare the performance of gzip or bzip to
> the generic compress. Forget tar as it has no hash file awareness: jbackup
> will produce a usable image of in-use files, tar may not. And jrestore
> allows you to restore individual items.

In fact, if compression must be used, then for a long transfer like this bzip2 may well be faster with appropriate tweaks. You should never use compress, though, as the algorithm is pretty simple (these days) and it becomes a CPU bottleneck. You should also realize that if the compression ever fails, you will lose the entire backup.

I should point out, though, that jbackup will at some point be shown to be slower than tar (not by lots, but by some); this is because jbackup is a formatted save, whereas tar is basically a raw copy of the file blocks.

Jim - backing up to an NFS mount? Are you mad, or is it April 1? ;-) Get Samba shares working if it must be to a network drive, but it is better to back up locally and FTP afterwards.
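The "back up locally, transfer afterwards" flow might look like this sketch (host, credentials, and paths are all hypothetical; the jbackup flags are taken from the original post, assuming -f names the output file):

```shell
# 1. Back up to local disk first, keeping the network out of the critical path.
cd /home/root
find . -print | jbackup -m 10000 -B -E1 -v -s statusfile -f /backup/daily.jbk

# 2. Afterwards, push the finished file to the backup host, e.g. via a
#    non-interactive ftp session (hypothetical host and login).
ftp -n backuphost <<'EOF'
user backupuser backuppass
binary
put /backup/daily.jbk daily.jbk
bye
EOF
```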

Jim


Jim Idle

unread,
Jan 22, 2006, 12:25:06 PM1/22/06
to jB...@googlegroups.com
In fact, fastbak was at least the equal of tar (especially AIX tar, which is abysmal performance-wise). However, everyone wanted such a thing, but everyone wanted it for free...

Jim


Jim Idle

unread,
Jan 22, 2006, 12:26:45 PM1/22/06
to jB...@googlegroups.com
You would need one huge machine to store a backup on a memory disk! However, it would still be limited by the read performance of the source disks anyway. Separate controllers and arrays might help.

But then, jbackup is an online utility anyway, so it isn't so important that it complete in a finite time as it is that it not hamper user performance while it is running. So, in some senses, it might be better if it were slower ;-). With the command shown, though, the compress will consume as much CPU time as it can, and the rest is pretty irrelevant.

Jim
