We are using jbackup to back up data files with the following command:
find bnk -print | jbackup -S/home/itops2/stats -f/temenos/backups/jbkp-17112009 -v 2> /home/itops2/Aviion_backup_log
jbackup is taking too long: almost 3 hours to back up the entire set of
data files, which is 72 GB in size.
Is there any method, procedure, or switch that speeds up the process
and also compresses the backup file?
Stats:
OS : HP-UX 11i v2
DB : jBASE 5.0.16
Many Thanks in advance.
Why not just pipe the backup stream through 7zip or bzip2? Bzip2 is pretty good when you can tell it there will be a lot of data for it to compress.
In terms of speeding up, that is honestly about as fast as jbackup can go. There are other ways to back up, though. Why not use transaction journaling and back up continuously? If you can bring the system offline, or use mirror breaking, then so long as you are assured that no writes are going on to the database, you can use raw backups such as disk imaging, tar, and so on, which are much faster because they are not formatted backups. Also, jbackup isn't very sophisticated.
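The compression idea above can be sketched as follows. This is a minimal, self-contained demonstration in which tar merely stands in for jbackup's output stream (whether your jbackup build can write its stream to stdout is something to check against its own documentation); compressing any stream on its way to disk works the same way:

```shell
# Illustrative only: tar stands in for jbackup's stream here.
mkdir -p demo && echo "some record data" > demo/file1

# Compress the stream as it is written...
tar -cf - demo | bzip2 > demo.tar.bz2

# ...and decompress it on the way back in to list the contents.
bzip2 -dc demo.tar.bz2 | tar -tf -
```

The same shape applies whatever produces the stream: nothing uncompressed ever touches the disk, at the cost of some CPU time on the backup host.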
Jim
However, to speed up your backup and compress it, you can use:
tar -cf - bnk | gzip -1 > filename.tar.gz
If you need to make it even faster, you can run it in parallel:
several processes, each handling a specific portion of the files within
your bnk directory.
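That parallel idea could look something like this: one tar|gzip pipeline per subdirectory of bnk, all running at once. The subdirectory names below are made up for the sketch; a real T24 bnk directory would have its own layout:

```shell
# Hypothetical sketch: back up each subdirectory of bnk in parallel.
mkdir -p bnk/ACCOUNT bnk/CUSTOMER
echo "rec" > bnk/ACCOUNT/f1
echo "rec" > bnk/CUSTOMER/f2

for d in bnk/*/; do
    name=$(basename "$d")
    tar -cf - "$d" | gzip -1 > "backup_$name.tar.gz" &
done
wait    # block until every background pipeline has finished
```

Whether this actually helps depends on the disk layout; parallel readers on the same spindles can hurt as easily as help.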
jaro
> -----Original Message-----
> From: jb...@googlegroups.com [mailto:jb...@googlegroups.com] On Behalf
> Of jaro
> Sent: Friday, January 22, 2010 10:01 AM
> To: jBASE
> Subject: [SPAM] Re: How to speed up jbackup
>
> I can't believe the backup of 1GB file in 2.5 seconds on linux or any
> other system. possibly in the memory only but usually you can't keep
> the whole database in the memory.
I don't think you quite got what Greg was illustrating. You might also contemplate who wrote the original jbackup.
> however, to speed up your backup and compress it you can use:
> tar -cf - bnk | gzip -1 > filename.tar.gz
I don't think you quite got what I was saying. In any case, tar cvz ... does this if you use GNU tar. But you can only use tar if the files are offline; if they are online, then your tar backup is useless. Bzip2 is a better compression system for large data streams such as tar, or perhaps 7zip.
> if you need to make it even faster then you can do it in parallel,
> several processes will do specific portion of the files within your
> bnk directory.
Except that at some point you will defeat the read-ahead logic by dancing all over the disks.
Jim
If the backup is so crucial for the customer, then they can look at the
tools provided with the storage systems themselves. For example,
Symmetrix storage from EMC, and others, offers tools such as data
mirroring: take a short outage of a few seconds to split the mirrored
pairs, and then the backup can be performed against the mirrored pair
without affecting the primary system.
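The split-mirror workflow can be sketched with a plain copy standing in for the storage-level mirror; nothing here is a real Symmetrix command (real setups would use the vendor's tools, such as EMC TimeFinder, or LVM snapshots):

```shell
# Minimal sketch of the split-mirror idea. "cp -a" stands in for
# splitting a mirrored pair; it is NOT how a real array does it.
mkdir -p primary
echo "live data" > primary/rec1

cp -a primary frozen              # the brief outage: freeze a point-in-time copy
echo "new write" >> primary/rec1  # the primary keeps taking writes

tar -czf frozen.tar.gz frozen     # the backup reads only the frozen copy
```

The point is that the backup runs against a consistent, unchanging copy, so its duration no longer matters to the users on the primary.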
> -----Original Message-----
> From: jb...@googlegroups.com [mailto:jb...@googlegroups.com] On Behalf
> Of jaro
> Sent: Friday, February 05, 2010 7:24 AM
> To: jBASE
> Subject: [SPAM] Re: How to speed up jbackup
>
> I don't fully understand your reaction to my posting, Jim.
Clearly :-)
> I think I didn't say anything wrong.
You either didn't read, or didn't understand what Greg was saying. That's all I was pointing out.
> I'm just trying to advise the initiator of the request.
> It was indicated that the database size is about 72GB. I assume it's a
> Temenos t24 system. Then I also assume that the data are stored on the
> storage array. Usually you build the filesystem of the several
> physical discs. I think the customer's database is offline during the
> backup. So I don't see any issue to run the backup in parallel. and
> it's just a matter of a simple script.
No, it isn't. However backups are whatever one believes them to be I suppose?
> If you need to run backup while the system is accessed by users then
> we should forget about jbase, and think about something more serious,
> like Oracle etc.
Sigh. Why don't you try reading that back to yourself? Done a lot of work on the Oracle DBMS source code have you?
Having known people that have written code for Oracle for many years I can assure you that most of it is a pile of dingo's doings held together by bits of string and mediocre programmers. Buy the marketing hype if you like (after all many do), but Oracle does not get you anything better.
Do you know what database Ceridian were using when they gave out 27,000 bank accounts (including mine) last month? http://solutions.oracle.com/partners/ceridian
It's nothing to do with the database itself, it's the dangerous people that think they know what they are doing that are the problem. I imagine many of them go to tea parties and are offended by immaculate confections.
> and forget the tar, gzip, bzip and other commands.
See - you still don't quite understand :-) but don't let that stop you commenting will you?
>
> If the backup is so crucial for the customer
Well I hope it is.
> then they can look at the
> tools provided with the storage systems themselves. For example,
> Symmetrix storage from EMC, and others, offers tools such as data
> mirroring: take a short outage of a few seconds to split the mirrored
> pairs, and then the backup can be performed against the mirrored pair
> without affecting the primary system.
My point was that this subject has been done to death many times on this forum, and a MarkMail search will tell you everything you need to know. Greg posted a lot of useful information in his post, but you didn't read it properly, so you did not see why his comment about memory-to-memory transfer rates was relevant. You can reply to me, or you can read his email again. One is more useful to you.
Jim