We've used the Informix online backup (ontape), but that only backs up the
database! Restore tested, result: OK; even the rollforward was OK.
The second way to do the backup is with NTBackup: bring down the Baan and
Informix services, kill all the user processes, and use NTBackup to back up
the whole system.
We tested the restore in a pre-live environment: we deleted part of the
database and restored the whole database from tape. Result: OK.
Now that we're operational we use a combination: during the week we do an
online backup (Informix ontape), and on the weekend a complete backup (NTBackup).
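For reference, the weekday online backup can be run unattended from cron with a wrapper along these lines. This is only a sketch: the environment settings are placeholders for your site, and ontape takes its tape device from the TAPEDEV parameter in your ONCONFIG file.

```shell
#!/bin/sh
# Sketch of an unattended weekday online backup with Informix ontape.
# Assumes the instance is online and TAPEDEV in the ONCONFIG file points
# at the tape device. "-s -L 0" requests a full (level-0) archive.
INFORMIXDIR=/usr/informix; export INFORMIXDIR   # adjust for your site
PATH=$INFORMIXDIR/bin:$PATH; export PATH
ontape -s -L 0 < /dev/null   # stdin redirected so cron never hangs on a prompt
```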
Daniel
We are running Baan IVc on an HP9000 server with Oracle 7.3.3. We have used
3 different backup methods. For our live production company, we make 4 ASCII
backups per day of all of the tables. We also make 1 ASCII backup each
evening of company 000. This allows us to restore any specific table at any
given time with relative ease. On our smaller development machine (also
HP9000 server with Oracle 7.3.3), we use the Unix automated backup utility to
back up the whole machine each night. On the production box we use Omniback
to do the same. We have successfully recovered using all 3 methods. So, I
guess the answer to your question depends on what you want to back up:
company data? Company 000 (Baan underlying) data? Tablespace structures?
Baan programs and such? All of the above?
Dave Meeks
HTG Corporation
-----== Posted via Deja News, The Leader in Internet Discussion ==-----
http://www.dejanews.com/rg_mkgrp.xp Create Your Own Free Member Forum
You have perhaps four distinct back-up sets:
1) Binaries ($BSE, $ORACLE_HOME, $INFORMIXDIR): These need to be backed up
periodically. They are easy to back up inasmuch as they are simply files
on an OS file system, which means that ANY backup solution will do the
trick (NTBackup, Backup Exec, OmniBack, Legato, tar, cpio, etc.). In
addition, it is unlikely that you will need more than one tape for this
backup regardless of the tool you use or the tape format (within reason).
All told, $BSE, the RDBMS binaries, and other product binaries (UDMS,
Hyperion, etc.) will probably not take up more than 6 GB compressed. Nearly
every common tape format today will handle this amount on a single tape.
Another benefit is that it is typically not necessary to stop software to
back these up (NT is an exception, however, as NT does not always allow the
files of running processes to be archived). For Baan, you need only
establish a period of time free of development and system administration to
back up $BSE cleanly. Other products (Oracle, Informix, etc.) operate under
basically the same rule.
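As a minimal sketch of how simple this category is: a plain tar round-trip. The directory here is a stand-in for a real $BSE, and the archive is written to a file for illustration; in practice you would point tar at a tape device such as /dev/rmt/0m.

```shell
#!/bin/sh
# Sketch: archive a "binaries" tree with plain tar, then list it to verify.
# /tmp/bse_demo stands in for a real $BSE; a real run would write to tape.
BSE=/tmp/bse_demo
mkdir -p "$BSE/bin"
echo demo > "$BSE/bin/bshell"                 # stand-in for a Baan binary
tar cf /tmp/bse_backup.tar -C /tmp bse_demo   # create the archive
tar tf /tmp/bse_backup.tar                    # list contents to verify
```

The same pattern works for $ORACLE_HOME or $INFORMIXDIR, which is the point: any file-level tool can handle this backup set.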
The "period" in "periodically" is a function of how much risk you are willing
to incur. For example, if your shop does a lot of Baan development, you may
want to get a good backup of $BSE more frequently. If you do not do a lot of
development, or are willing to regenerate runtime components of development
instead of maintaining a more precise backup, you will archive $BSE less
frequently.
2) Database data (tables, indexes, etc.)
This data is usually somewhat trickier to back up. Basically, the problem
here tends to be granularity. For example, if you are using Oracle with
cooked tablespaces (datafiles in file systems instead of datafiles on raw
disks), you can stop Oracle and back up the datafiles just as you do the
binaries. The problem is that to restore a single table, you then have to
restore the entire tablespace the table is held in. Often, this is not too
desirable.
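The cold-backup sequence for cooked datafiles is essentially "stop, copy, start". A sketch only, assuming the Oracle 7.3-era Server Manager (svrmgrl) and a made-up /u02/oradata datafile path; substitute your own paths and tape device:

```shell
#!/bin/sh
# Sketch of a cold backup of cooked Oracle datafiles.
# Paths and the tape device are hypothetical; adjust for your site.
svrmgrl <<EOF
connect internal
shutdown immediate
exit
EOF
tar cf /dev/rmt/0m /u02/oradata   # copy the closed datafiles to tape
svrmgrl <<EOF
connect internal
startup
exit
EOF
```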
If you want to get a backup of each table and each index, you have a couple of
choices. You can export each object individually and backup the export file.
This is usually too time-consuming to be practical, however (particularly
when you consider that each Baan company comprises some 2,300 tables and
maybe 4,500 indexes).
The other choice is to use the database tools for this purpose. In Oracle,
this is the Enterprise Backup Utility. This requires a more expensive Oracle
license than is strictly necessary for Baan, but it is probably worth the
money. For Informix, this is onbar. Unfortunately, onbar seems to have some
problems (particularly on NT). I am not certain it is reliable enough yet
(though that, too, is a risk question each site has to decide).
The disadvantage to using the RDBMS tools is that it is another backup
product. In most cases, this means one restore method for the binaries (in
accordance with the tool chosen to back them up) and one restore method for
the table and index data (in accordance with the RDBMS utility). Some
software products (OmniBack and Legato to name two) claim to integrate with
the RDBMS. This is true to a point, and seems to work better on UNIX than
on NT (we were flat out unable to get OmniBack or Legato to work with NT and
Informix, even with on-site consulting from the software vendors themselves).
3) Operating System. This is very similar to the first category, except
that the backup should probably be bootable in some way. By this I mean
that the backup of the OS should be taken in such a way as to provide a
method to boot the system should the OS need to be restored. In the case
of AIX, this means taking the backup with SysBack. For HP, using the mkboot
utility to make the backup tape bootable works. For Solaris, we actually
make a bootable CD-ROM with the Veritas and DiskSuite software installed. We
have no standard NT bootable solution yet except to make a bootable floppy
and improvise from there.
4) Transaction Logs. RDBMSs log every transaction that alters database
objects. These logs usually follow some sort of round-robin scheme and cycle
through a definable number of distinct logs. As a result, the transaction
data is lost if the logs cycle around to the first log without having
archived the log. These present a unique back-up set as they need to be
archived as they become full. The logs fill up on a variable schedule,
depending on the transaction rate against the database; this rate will
change as the project grows, and cyclically during production.
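The round-robin behaviour can be illustrated with a toy loop. With three logical logs, the fourth log switch wraps back to log 1; if log 1 was never archived, its transactions are overwritten at that moment. (The counts here are invented for illustration; real log counts are set in the RDBMS configuration.)

```shell
#!/bin/sh
# Toy illustration of round-robin logical logs: 3 logs, 5 log switches.
# Switch 4 wraps back to log 1 -- if log 1 was never archived, the
# transaction data it held is lost at that point.
nlogs=3
log=1
for switch in 1 2 3 4 5; do
  echo "switch $switch writes to log $log"
  log=$(( log % nlogs + 1 ))
done
```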
So, you need to decide for these four categories what conditions you want to
restore. This is how you define risk and determine what a good backup
structure is for you. Some questions to ask yourself are:
What is the worst-case restore request from the user community you are
willing to consider reasonable? For example, will you have to satisfy a
request from the corporate president to produce a file removed from the
system two years ago? If so, you will need two years of backups including
every possible file from the system during that time. This will affect your
tape rotation (no tape can be overwritten for at least two years) as well as
your tape storage plan (two years' worth of tapes is A BUNCH of tapes).
Another good question: will it be OK for you to spend extra time restoring
more than is necessary to satisfy the request? For example, if a developer
drops the item master, is it OK to have to restore the entire tablespace the
item master is stored in from the last cold backup?
A corollary to the last question: will you need to bring the database to as
current a state of consistency as possible? In the example above, you
restore the entire tablespace holding the item master from the last cold
backup. Well, if that cold backup was yesterday, it may not be necessary to
restore and apply transaction logs to roll the database forward. On the
other hand, if you can only get a cold backup once per quarter, you may want
to have the archived transaction logs from the last cold backup to now
available for the roll-forward process.
Finally, what kind of backup window and hardware do you have? You may find
that it takes you 4 hours to store 20gb of data with a DLT4000 tape device.
You may not have four hours per day so you may have to schedule the full
backup on a weekend or in pieces during the daily window. This complicates
both transaction log backup as well as tape storage and library plans.
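The 20 GB in 4 hours figure is consistent with the DLT4000's roughly 1.5 MB/s native transfer rate. The back-of-the-envelope arithmetic (with the rate in tenths of an MB/s to stay in integer math):

```shell
#!/bin/sh
# Back-of-the-envelope backup window: 20 GB at ~1.5 MB/s native DLT4000 rate.
# Shell arithmetic is integer-only, so the rate is in tenths of an MB/s.
size_mb=$(( 20 * 1024 ))   # 20 GB expressed in MB
rate_tenths=15             # 1.5 MB/s
secs=$(( size_mb * 10 / rate_tenths ))
echo "$(( secs / 3600 ))h $(( secs % 3600 / 60 ))m"   # prints "3h 47m"
```

Run the same arithmetic against your own data volume and drive rating to see whether your window fits.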
A couple of common suggestions are:
1) Son - Father - Grandfather: Backup your transaction logs daily, your RDBMS
data weekly, and your binaries weekly. Keep three sets of these tapes (7 in
all). One set is the active tape set, one is locally stored (the previous
week's active set). The third set is stored off-site and is the previous
week's local archive. This means that you can restore your system to any
point in the last two+ weeks (the current week may not be completed at any
one time).
At the end of each week, the set that is slated to become active rotates
tapes: a new tape is placed into the Monday slot, the existing tapes for
Monday through Thursday each move down a day, and the Friday tape is thrown
out.
The weekly tapes are rotated out quarterly.
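The daily-slot shuffle amounts to a simple list rotation; simulated here with made-up tape labels:

```shell
#!/bin/sh
# Simulate the weekly daily-slot rotation: a new tape enters the Monday
# slot, the Mon..Thu tapes each move down one day, and Friday's retires.
set -- MON1 TUE1 WED1 THU1 FRI1   # current Mon..Fri tapes
new=NEW1
retired=$5
rotated="$new $1 $2 $3 $4"
echo "retired: $retired"
echo "next week (Mon..Fri): $rotated"
```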
2) Finance Period:
Many sites operate 13 finance periods during the year. In this plan, the
archived transaction logs are backed-up daily, the OS and binaries weekly, and
the RDBMS data either weekly or once at the end of each period (depending on
how long you are willing to wait to take a restore of a cold RDBMS data backup
forward through transaction logs).
The last daily, weekly, and periodic backups are saved off-site. When the new
period starts, the last tape in the daily set is thrown out, the remaining
daily tapes are moved down one position, a new tape is placed into the first
daily position, and all of the weekly/quarterly tapes are replaced. The
offsites are held as long as deemed necessary and then destroyed.
There are many other possible tape rotation schedules. Pick one that
satisfies your restore needs, your risk comfort level, and any other
requirements you need to meet.
To do automated backups, be sure to get a tape device that can back up your
current data in the windows you have available. The device should either
have enough native capacity to store your system's data (files, RDBMS, etc.)
or be able to change its own tapes.
Hope this helps,
-dt
In article <6p5a6t$9h4$1...@holly.prod.itd.earthlink.net>,
"Craig Schaffer" <csch...@swest.se-tech.com> wrote:
> What is the best way to setup an automated backup of baan. I would also like
> to know what directories/files should be backed up. Plus has anyone ever
> recovered from the current way they backup so that the recovery process
> works.
> Thanks
> -Craig
>
>
A question: making an ASCII backup, is that making a sequential dump of a
company/table? Does the database have to be offline? We did that once, and
it ran for several hours!
If the database is online, does the restore leave you with a consistent
database?
Daniel Willems
When I talk about making ASCII backups, yes, I am referring to a sequential
dump of company and table. This can be done through Tools or, in my case,
using the bdbpre6.1 command in a cron run. We have not had any difficulty
with these backups and have found no need for the DB to be offline. Of
course, the length of time to back up a given company/table depends upon the
table. This is even more true when restoring. Database consistency is a bit
more complex. This, again all depends on the table in question. If we are
talking about something rather static like most of the tccom and tcmcs
tables, then a restore is normally rather transparent. If we are talking
about financial integration tables, forget it. To restore most anything in
finance, especially tfgld tables, you would have to restore the entire
company or you'll spend more time tracking down inconsistencies than you
would have lost by doing the full restore. Most other tables are somewhere
in the middle. I would suggest trying some backups/restores in a development
environment and see what happens. Try backing up something simple (like
tccom000 or tccom737). These should back up quite quickly since they should
really only have one record in them (3 or 4 at MOST in tccom000 if you have
been doing a lot of copying between companies). If this takes forever, then
something's not right. Next try something larger but relatively stable, like
the warehouses table (tcmcs003 I think). Next move on to things like the
employee master or production BOMs. These are things that could reasonably
affect the consistency of the database. Finally, test something more volatile
like Purchase Orders (tdpur040). tdpur040 is a good one to test if you are
using the distribution module since each record in it should always have a
corresponding record in tdpur050. By comparing the two tables after a
restore, you can get an idea of what you are missing (keeping in mind that
tdpur041, tdpur051, and tdpur045 will also be affected). Use the Tools
session to do the dumps and the loads. Let me know if you have any more
questions.
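The tdpur040/tdpur050 comparison can be done on sorted key lists pulled from the dumps. A sketch with invented file names and order numbers (real dump formats differ, so in practice you would first extract the order-number column from each dump and sort it):

```shell
#!/bin/sh
# Sketch: find PO headers (tdpur040) with no matching lines (tdpur050)
# after a restore. Key files and order numbers are invented; comm
# requires both inputs to be sorted.
printf 'PO0001\nPO0002\nPO0003\n' > /tmp/tdpur040.keys
printf 'PO0001\nPO0003\n'         > /tmp/tdpur050.keys
comm -23 /tmp/tdpur040.keys /tmp/tdpur050.keys   # headers missing their lines
```

Here the output is PO0002: a header restored without its order lines, which is exactly the kind of inconsistency to hunt for.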
Dave Meeks
HTG Corporation
david...@yahoo.com