
Oracle 8i (8.1.7.0.1) + Redhat Linux 7.2 = Cannot create tablespace file > 2 gb

Message has been deleted

Knut Talman
Jun 26, 2002, 12:13:59 PM
Joe Salmeri wrote:
>
> Here is the server configuration:
>
> Dual P3 Dell Server with 1.5 GB RAM
> Redhat Linux 7.2
> Oracle 8i (8.1.7.0.1)
> Two Oracle databases created
> Oracle TNS Listener configured
>
> Everything appears to be working properly with both database instances,
> however we are unable to create a tablespace file that is larger than 2 GB.
>
> I know that Oracle 8i supports tablespace files that are larger than 2 GB
> because I have setup Oracle 8i installations on Windows 2000 and have
> created > 2gb tablespace files.
>
> From the Redhat website I see that there is a version called Redhat
> Enterprise Edition that says it has largefile support for Oracle 8i, but is
> it possible without getting the Enterprise Edition? If so, what needs to be
> done to provide that support?

There is tons of documentation about large file support out there. Just
google for it.

Regards,

Knut

Message has been deleted

Knut Talman
Jun 26, 2002, 2:37:04 PM
Joe Salmeri wrote:
>
> I don't just blindly post requests for help until I have exhausted all other
> possibilities.
>
> I have looked, I have searched google for DAYS. I've read over 100
> articles on Oracle 8i and Redhat 7.2. They all address the installation
> problems but nothing about the largefile issues.
>
> I've really only found two references to >2gb tablespace issues. One is for
> SUSE Linux which basically says it isn't supported and the other is for
> Redhat Enterprise Edition which says that it supports it (that is not an
> option for this client).

Can anybody correct me if I'm wrong, but AFAIK Oracle 8i uses glibc 2.1.3. The
LFS interface in glibc 2.1.3 is complete, but the implementation is not. The
implementation in 2.1.3 also contains some bugs; e.g., ftello64 is broken. If you
want to use the LFS interface, you need to use a glibc that has been compiled
against headers from a kernel with LFS support in it.
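
(For reference, the failing operation in question is just an ordinary large
datafile creation, along the lines of the statement below. The tablespace
name, path and size are illustrative, not from the thread; without working
LFS underneath, the creation errors out once the file would cross 2 GB.)

create tablespace big_data
  datafile '/u02/oradata/PROD/big_data01.dbf' size 4000m;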

Regards,

Knut

Howard J. Rogers
Jun 27, 2002, 5:46:20 AM
Why on Earth anyone would want files bigger than 2Gb, I can't imagine.

Tablespaces should get bigger by having more files, not huge, monolithic
entities that have to be backed up in their entirety, restored in their
entirety, and in which all data is lost until the entire thing is restored
and recovered.

So, no, I can't add to the technical stuff you've provided. I just think the
original requirement is off the deep end in the first place!

Regards
HJR


"Knut Talman" <knut....@mytoys.de> wrote in message
news:3D1A09D0...@mytoys.de...

Message has been deleted

Liggs
Jun 26, 2002, 6:27:24 PM
What do you want?
1) files that are larger than 2 Gb (why?)
2) tablespaces that can hold more than 2 Gb?

if 1 then :-
you need to enable large file support, but lots of things do not
'understand' large files. It's a bit of a vague area.

if 2 then :-
A tablespace can be bigger than 2Gb; if you run up against a 2Gb file
limit, then add another datafile to the tablespace:

alter tablespace X add datafile '/f/f/f/f/f.dbf' size 2000m;

It sounds like it would be safer, support-wise, to go with option 2: you
end up with the same capacity (add lots of datafiles) and it's less risky.
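
(A sketch of option 2 in practice, with illustrative names and paths: first
see how the free space is spread across your tablespaces, then add a file.)

select tablespace_name, sum(bytes)/1024/1024 free_mb
from dba_free_space
group by tablespace_name;

alter tablespace data add datafile
  '/u01/oradata/PROD/data02.dbf' size 2000m;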

Liggs

"Knut Talman" <knut....@mytoys.de> wrote in message

news:3D19E847...@mytoys.de...

Message has been deleted

Sybrand Bakker
Jun 27, 2002, 5:07:02 PM
On Thu, 27 Jun 2002 17:28:04 GMT, "Joe Salmeri"
<JoeSa...@comcast.net> wrote:

>I want option 1, files that are larger than 2 gb. I have tables that are
>much larger than 2 gb and I don't want tons of extents. I don't care about
>the things that don't understand large file support because I KNOW that
>Oracle does. I don't understand why this is such a difficult task to
>accomplish on Linux, I have done this on numerous Windows NT and Windows
>2000 servers that I have built over the years and it worked flawlessly.
>
>When you create a bunch of datafiles for a tablespace and that tablespace
>contains many large tables it is very easy to get into the situation where
>you are not able to create objects because the free space is fragmented
>between all of the datafiles. This is not an easy thing to clean up and
>requires many hours of effort that could have EASILY been avoided.
>
>Extents (initial allocations or next allocations) MUST be contiguous.

Hear, hear.
That is why Oracle invented Locally Managed Tablespaces.
This reduces fragmentation to exactly zero, provided you allow for an
extra 64k per file holding the allocation bitmap.
Objection overruled.
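
(A minimal sketch of such a locally managed tablespace; the name, path and
uniform extent size are illustrative, not from the thread.)

create tablespace data_lmt
  datafile '/u01/oradata/PROD/data_lmt01.dbf' size 2000m
  extent management local uniform size 20m;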

Regards


>cannot find free space in one datafile and combine that with free space in a
>second datafile to make up a SINGLE extent. If the extent size is not
>available in a contiguous block then the allocation fails.

Sybrand Bakker, Senior Oracle DBA

To reply remove -verwijderdit from my e-mail address

Message has been deleted

Howard J. Rogers
Jun 28, 2002, 5:57:30 AM

"Joe Salmeri" <JoeSa...@comcast.net> wrote in message
news:zZGS8.427991$%y.308...@bin4.nnrp.aus1.giganews.com...
> For the same reasons that you don't want to carve your 100 GB hard drive
> up into 50 2 GB partitions.
>

Not comparable situations at all.

> If you are working with LARGE tables it is much easier to work with and
> manage.
>
> I don't back up my tablespaces I back up the data they contain.

And how, precisely, do you manage to back up the objects? Unless you are
using export, which doesn't count as a physical backup, then the smallest
unit of backup and restore in an Oracle database, until you get to 9i, is
the data file. Big data files, big backups. Small data files, small backups
extended over a longer backup cycle.

>I don't
> restore tablespaces in their entirety I would restore the objects that I
> need to recover.

You mean via import?! Yup, I can see that this is a perfect recovery
strategy for "LARGE tables". Not.

HJR

>
> "Howard J. Rogers" <d...@hjrdba.com> wrote in message
> news:afemv8$3el$1...@lust.ihug.co.nz...

Howard J. Rogers
Jun 28, 2002, 6:00:27 AM

"Sybrand Bakker" <pos...@sybrandb.demon.nl> wrote in message
news:gfvmhusfooj1enmi8...@4ax.com...

> On Thu, 27 Jun 2002 17:28:04 GMT, "Joe Salmeri"
> <JoeSa...@comcast.net> wrote:
>
> >I want option 1, files that are larger than 2 gb. I have tables that are
> >much larger than 2 gb and I don't want tons of extents. I don't care
> >about the things that don't understand large file support because I KNOW
> >that Oracle does. I don't understand why this is such a difficult task to
> >accomplish on Linux, I have done this on numerous Windows NT and Windows
> >2000 servers that I have built over the years and it worked flawlessly.
> >
> >When you create a bunch of datafiles for a tablespace and that tablespace
> >contains many large tables it is very easy to get into the situation
> >where you are not able to create objects because the free space is
> >fragmented between all of the datafiles. This is not an easy thing to
> >clean up and requires many hours of effort that could have EASILY been
> >avoided.
> >
> >Extents (initial allocations or next allocations) MUST be contiguous.

Extents can never be contiguous when using file systems. Ever. Not
physically. But that's a side issue.

You are starting from a false premise and compounding it thereafter. A large
number of extents (say 1000 or so) in locally managed tablespace is not an
issue. Therefore, the size of the extents does not have to be humongous.
Therefore, neither do the datafiles.

HJR

Howard J. Rogers
Jun 28, 2002, 6:07:30 AM

"Joe Salmeri" <JoeSa...@comcast.net> wrote in message
news:rdNS8.432120$%y.311...@bin4.nnrp.aus1.giganews.com...
> It was my understanding that it would reduce the fragmentation in each of
> the datafiles but that doesn't address the issue where you have 3 data
> files in a tablespace each with 200mb free and I need to allocate a 600mb
> extent.
>
> Are you saying that locally managed tablespaces will allow Oracle to
> allocate part of the extent from one data file and the rest from a
> different datafile?

No, extents still can't cross the physical file boundary. But you are
mistaken in believing you need a 600Mb extent. What's wrong with 3 200Mb
extents? Contiguity of extents is a complete and utter mirage anyway, and
there is precisely zero performance improvement to be gained or noticed
between a single-extent 600Mb segment, and a 3-200Mb extent segment, or even
a 30-20Mb extent segment.

In dictionary managed tablespace, a 3000-0.2Mb segment would pose problems,
true. Much, much less so in locally managed tablespace. I myself would not
be comfortable with *quite* that many extents, but 100-60Mb extents would
not be an issue.

It's a myth of the biggest proportions that segments should be comprised of
one extent. A single Oracle block is not even contiguous on disk, so there's
no way a single extent can be, either (raw partitions excepted). But that is
not a performance problem.

If you can accept that, then it follows that it is daft to want enormous
datafiles to accommodate large extents. You are depriving yourself of the
management flexibility (in backup and recovery scenarios, principally) that
smaller datafiles bring. And you are also missing out on a definite
performance advantage that ensues from multi-file tablespaces - namely,
Oracle's propensity to round-robin the various extents of a segment so that
they are stored on separate files (and hence, potentially, on separate
disks).
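
(A sketch of the multi-file layout being described, with illustrative paths
and sizes; with one file per disk, successive extents of a segment tend to
land on different files, and hence on different disks.)

create tablespace data
  datafile '/disk1/oradata/PROD/data01.dbf' size 500m,
           '/disk2/oradata/PROD/data02.dbf' size 500m,
           '/disk3/oradata/PROD/data03.dbf' size 500m;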

HJR


>If so I will investigate using locally managed tablespaces;
> currently we are using dictionary based, we just recently upgraded to
> Oracle 8i from 8.0.5. From what I have read so far it still appears that
> an extent must be contiguous within a single datafile.


>
>
> "Sybrand Bakker" <pos...@sybrandb.demon.nl> wrote in message
> news:gfvmhusfooj1enmi8...@4ax.com...

Knut Talman
Jun 28, 2002, 6:34:20 AM
"Howard J. Rogers" wrote:

> You mean via import?! Yup, I can see that this is a perfect recovery
> strategy for "LARGE tables". Not.

Well, there are a few situations where one could need files > 2GB. A client once
had a system crash and when the database was up again (lots of recovery problems)
the only way to save the data was using export (DUL was no option). There were
about 60GB of data, version was 8.0.5, so no filesize parameter for export! We
had to export every single table and some tables had to be split into two or
three parts. Files > 2GB would have been nice.

Regards,

Knut

Niall Litchfield
Jun 28, 2002, 6:44:32 AM
I think Howard's point wasn't that you might never want files greater than
2gb, but that you would not want DATAfiles greater than 2gb in size.


--
Niall Litchfield
Oracle DBA
Audit Commission UK
*****************************************
Please include version and platform
and SQL where applicable
It makes life easier and increases the
likelihood of a good answer

******************************************


"Knut Talman" <knut....@mytoys.de> wrote in message

news:3D1C3BAC...@mytoys.de...

Knut Talman
Jun 28, 2002, 6:56:09 AM
Niall Litchfield wrote:
>
> I think Howard's point wasn't that you might never want files greater than
> 2gb, but that you would not want DATAfiles greater than 2gb in size.

I know, I just wanted to add some real life experience about the 2GB filesize
limit (my datafiles are all 2000M). Even on 64 bit machines I tend to use less
than 2GB. I still can remember the bug in 8.0.5 on Solaris when a file was
larger than 2GB. The block which was at the 2GB position was corrupt. Not always
but often.

Regards,

Knut

Message has been deleted
Message has been deleted

Niall Litchfield
Jun 28, 2002, 9:26:34 AM
Tell me about it. Not the Solaris bug, but the NT 4gb bug.


--
Niall Litchfield
Oracle DBA
Audit Commission UK
*****************************************
Please include version and platform
and SQL where applicable
It makes life easier and increases the
likelihood of a good answer

******************************************
"Knut Talman" <knut....@mytoys.de> wrote in message

news:3D1C40C9...@mytoys.de...

Knut Talman
Jun 28, 2002, 9:32:40 AM
Niall Litchfield wrote:
>
> tell me about it. not the solaris bug but the NT 4gb bug.

Did I say something about the NT 4GB bug?

Knut

Niall Litchfield
Jun 28, 2002, 9:58:39 AM
"Joe Salmeri" <JoeSa...@comcast.net> wrote in message
news:BNXS8.424576$Gs.31...@bin5.nnrp.aus1.giganews.com...
> And how is physically backing up 5 2000mb datafiles any different than
> backing up 1 10000mb datafile?
> Either way the same amount of data is backed up.

It's way different if one data file becomes corrupt. Then you have a restore
of 2gb, not 10. Not to mention that a, say, 32gb datafile has 7 chances to
extend exactly to a 4gb boundary.

Message has been deleted

Sean M
Jun 28, 2002, 12:27:41 PM
"Howard J. Rogers" wrote:
>
> Why on Earth anyone would want files bigger than 2Gb, I can't imagine.

Because, maybe, they have 12 TB databases which would require 6000
datafiles to manage if they were only 2 GB apiece? You can start
hitting OS barriers with that many datafiles, like openfiles or, in our
case, nflocks since we're on a NAS.

> Tablespaces should get bigger by having more files, not huge, monolithic
> entities that have to be backed up in their entirety, restored in their
> entirety, and in which all data is lost until the entire thing is restored
> and recovered.

Generally true for smaller databases, agreed. But sometimes you need
the space.

Regards,
Sean M

Howard J. Rogers
Jun 28, 2002, 3:56:57 PM
You've rather missed the point, haven't you.

If you have 5 2Gb files, you can back 1 of them up on Monday, one on
Tuesday, one on Wednesday, one on Thursday and one on Friday.

Or 2 of them on Monday, have an evening off on Tuesday, two on Wednesday,
watch the Kumars on Thursday night, and finish the cycle off on Friday.

Or 4 of them on Monday. Have the rest of the week in Marbella. Finish the
job on Friday.

If you have 1 10Gb file, er, you have to back it all up on Monday.

So yes, same amount of data. But infinitely more options with multiple files
than with just one.

HJR


"Joe Salmeri" <JoeSa...@comcast.net> wrote in message

news:BNXS8.424576$Gs.31...@bin5.nnrp.aus1.giganews.com...


> > > For the same reasons that you don't want to carve your 100 GB hard
> > > drive up into 50 2 GB partitions.
> >
> > Not comparable situations at all.
>
> In your mind.
>
> > Unless you are using export, which doesn't count as a physical backup,
> > then the smallest unit of backup and restore in an Oracle database,
> > until you get to 9i, is that data file. Big data files, big backups.
> > Small data files, small backups extended over a longer backup cycle.
>

Howard J. Rogers
Jun 28, 2002, 4:17:15 PM

"Joe Salmeri" <JoeSa...@comcast.net> wrote in message
news:QIXS8.25752$Ca2.1...@bin2.nnrp.aus1.giganews.com...

> > No, extents still can't cross the physical file boundary. But you are
> > mistaken in believing you need a 600Mb extent. What's wrong with 3 200Mb
> > extents? Contiguity of extents is a complete and utter mirage anyway,
> > and there is precisely zero performance improvement to be gained or
> > noticed between a single-extent 600Mb segment, and a 3-200Mb extent
> > segment, or even a 30-20Mb extent segment.
>
> Maybe this is true with 8i, maybe this is true with locally managed
> tablespaces, but it DEFINITELY did not used to be true.

Oh yes it did. I chose my figures carefully. Go back to Oracle 7, and you
won't find a performance difference between 30-extent segments, and 1-extent
segments.

>I have seen
> terrible performance problems solved when the table was broken up into a
> large number of extents. And no it had nothing to do with indexes, they
> were rebuilt regularly. The only change in that situation was to
> consolidate the table into large extents. In that particular example
> there were less than 100 extents and the performance increase was 10x
> better.
>

I hate to say I don't believe it. But I don't believe it. There's no
physical explanation you can come up with that justifies that sort of
performance improvement. And Oracle isn't voodoo.

> > In dictionary managed tablespace, a 3000-0.2Mb segment would pose
> > problems, true. Much, much less so in locally managed tablespace. I
> > myself would not be comfortable with *quite* that many extents, but
> > 100-60Mb extents would not be an issue.
>

> If you had bothered to follow the thread you would have read that
> everything is in dictionary managed tablespaces because until recently
> that was the only option, locally managed tablespaces were not available.
>

You originally asked about large files. The thread has been about you not
needing large files. You *especially* (but not particularly) don't need them
with locally managed tablespaces. Therefore, a solution to your problem is
to migrate to locally managed tablespaces, and you'll find that you needn't
worry in the slightest about large files ever again.

However, if you'd read carefully, you'd have noticed the point (which you
choose not to believe, but that's up to you) that multiple extents in
dictionary managed tablespace is also not an issue, and hence the issue
similarly disappears whether you choose to use locally managed tablespaces
or not, provided you don't go utterly berserk with the number of extents in
dictionary managed tablespace. If you've a 4K block size, as you should do
on Linux with a file system, then 250-ish extents in dictionary managed
tablespace is not, and cannot, be a problem.
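
(A sketch of that migration, with illustrative names. From 8.1.6 onwards
DBMS_SPACE_ADMIN offers an in-place conversion; the alternative is to build
a fresh locally managed tablespace and move segments into it. Treat both as
things to test first, not as a recipe.)

-- convert an existing dictionary managed tablespace in place
execute dbms_space_admin.tablespace_migrate_to_local('DATA');

-- or create a new locally managed tablespace and relocate segments
create tablespace data_new
  datafile '/u01/oradata/PROD/data_new01.dbf' size 2000m
  extent management local uniform size 20m;
alter table big_table move tablespace data_new;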

> > And you are also missing out on a definite performance advantage that
> > ensues from multi-file tablespaces - namely, Oracle's propensity to
> > round-robin the various extents of a segment so that they are stored on
> > separate files (and hence, potentially, on separate disks).
>

> That is old school as far as I am concerned. I would never build a
> database that way. All of my databases use RAID which does a much better
> job of spreading the data across all of my disks than any other option.
>

So what was all that nonsense about demanding 'contiguous extents' then? You
know, the bit where you said "Extents (initial allocations or next
allocations) MUST be contiguous"??

HJR

>


Howard J. Rogers
Jun 28, 2002, 4:17:31 PM

"Joe Salmeri" <JoeSa...@comcast.net> wrote in message
news:Fg_S8.173467$_j6.8...@bin3.nnrp.aus1.giganews.com...
> In 7+ years of managing Oracle databases I have only had one incident
> where the database became corrupt and that was because of a hardware
> failure, not because of a bug in the operating system or in Oracle. I
> guess I have just been EXTREMELY lucky.
>
> In the scenario you describe I would not have a warm comfort level
> restoring that one datafile. What would happen if there were tables in
> that bad tablespace that had referential integrity to other tables in
> another tablespace?
>

Well, hopefully, with 7+ years of experience behind you, you'll know that
nothing will happen. The magic of applying archives.

I honestly can't believe that for the sake of 1 corrupt block, or one lost
file, you'd want to restore an entire 16Gb tablespace (or worse).

HJR


> Not sure what the 4gb boundary you are referring to is, but I have had NT
> 4.0 filesystems that were 40 gb in size that never experienced any
> problems.
>
>
> "Niall Litchfield" <n-litc...@audit-commission.gov.uk> wrote in message
> news:3d1c6b90$0$231$ed9e...@reading.news.pipex.net...

Message has been deleted

Sean M
Jun 28, 2002, 5:25:13 PM
Joe Salmeri wrote:
>
> The backup you suggest is worthless to me.

It shouldn't be.

> If I have 5 2gb files associated with my tablespace and I backup the files
> at different times (on different days as you suggest) and a table has
> extents in more than one of the datafiles, the backup is worthless because
> it has lost it's integrity and consistency. Once you put the tablespace in
> backup mode you need to backup ALL of the datafiles associated with it at
> the same time.

No you do not. A valid datafile backup is a valid datafile backup. Hot
or cold, from 3 days ago or 3 weeks, as long as the backup is valid and
you have all necessary archives, it matters not a bit whether other
datafiles in the same tablespace were backed up at the same time.

> Consider the following example:
>
> The employee table has a supervisor_id column (FK) which points back to an
> employee_id. The employee table has extents in datafile_1 and datafile_2.
> employee_id 1 has a supervisor of 5287. The row for employee_id 1 is in the
> extent in the datafile_1 and you back it up on Monday. employee_id 5287
> (employee_id 1's supervisor) is in the extent in datafile_2. After Monday's
> backup you change employee_id 1's supervisor to employee_id 6386 and you
> DELETE employee_id 5287 because he no longer works for the company. Tuesday
> night you backup datafile_2.
>
> The backup created is invalid because it does not have a complete and
> accurate copy of the employee table data. Since physical backups do not
> allow you to recover individual database objects you would not be able to
> fix this without manual intervention.

You're missing the point of Oracle backup and recovery. You're assuming
that you'd need to restore datafile_2 from Tuesday night's backup, which
is AFTER the problematic DML. Of course, that won't work. You have to
restore a backup of datafile_2 from BEFORE the DML. But that does *not*
mean you had to have backed up datafile_2 at the same time as
datafile_1. You could have taken the backup of datafile_2 from A WEEK
AGO TUESDAY, and rolled forward to the point just before the problematic
DML.

> If you attempted to restore datafile_1 and datafile_2 you would have the
> following problem (among many others I am sure):
>
> datafile_1 has a row for employee_id 1 with supervisor 5287 (that no longer
> exists) and
> datafile_2 has a row for employee_id 6386 (employee_id 1's NEW supervisor)
> but no employee_id of 5287 (employee_id 1's OLD supervisor)

You're talking about needing point in time recovery, which must apply to
the ENTIRE database. Even if you had taken backups of datafile_1 and
datafile_2 at the same time, you can't simply restore them and move on.
They would have to be made consistent with the rest of the files in the
tablespace. You can't roll only a portion of a database back in time.
It's all or nothing.

As I said already, a good datafile backup is a good datafile backup.
There is no dependency between individual datafile backups within a
tablespace.

Regards,
Sean M

Message has been deleted

Howard J. Rogers
Jun 28, 2002, 6:06:51 PM

"Joe Salmeri" <JoeSa...@comcast.net> wrote in message
news:YN3T8.176384$_j6.9...@bin3.nnrp.aus1.giganews.com...

> The backup you suggest is worthless to me.
>
> If I have 5 2gb files associated with my tablespace and I backup the files
> at different times (on different days as you suggest) and a table has
> extents in more than one of the datafiles, the backup is worthless because
> it has lost its integrity and consistency. Once you put the tablespace in
> backup mode you need to backup ALL of the datafiles associated with it at
> the same time.
>

I can't believe that you, with your 7+ years of experience, actually believe
this tosh. Provided you keep all archives from the time the tablespace first
went into backup mode, you can back up what you like, assured that you can
recover anything. This is pretty basic stuff.

> Consider the following example:
>
> The employee table has a supervisor_id column (FK) which points back to an
> employee_id. The employee table has extents in datafile_1 and datafile_2.
> employee_id 1 has a supervisor of 5287. The row for employee_id 1 is in
> the extent in the datafile_1 and you back it up on Monday. employee_id
> 5287 (employee_id 1's supervisor) is in the extent in datafile_2. After
> Monday's backup you change employee_id 1's supervisor to employee_id 6386
> and you DELETE employee_id 5287 because he no longer works for the
> company. Tuesday night you backup datafile_2.
>
> The backup created is invalid because it does not have a complete and
> accurate copy of the employee table data. Since physical backups do not
> allow you to recover individual database objects you would not be able to
> fix this without manual intervention.
>

You are quite correct that you don't use them to recover individual objects.
Only export/import can do that. But the backup is perfectly valid and
usable, and the entire tablespace can be made consistent again, by the
restoration of one or other data file (whichever is the problem) and
applying all necessary redo. Strangely enough, "redo" means re-perform the
original transactions. So you can throw me any DML you like, and it remains
the case that recovery can re-perform it and hence get you a usable
database.

Oracle doesn't *guarantee* no data loss for nothing, you know.

> If you attempted to restore datafile_1 and datafile_2 you would have the
> following problem (among many others I am sure):
>
> datafile_1 has a row for employee_id 1 with supervisor 5287 (that no
> longer exists) and
> datafile_2 has a row for employee_id 6386 (employee_id 1's NEW supervisor)
> but no employee_id of 5287 (employee_id 1's OLD supervisor)
>

You just don't understand Oracle backup and recovery procedures very well,
do you?

HJR

> "Howard J. Rogers" <d...@hjrdba.com> wrote in message

> news:afif43$ndi$1...@lust.ihug.co.nz...

Message has been deleted

Howard J. Rogers
Jun 28, 2002, 6:26:18 PM

"Joe Salmeri" <JoeSa...@comcast.net> wrote in message
news:c05T8.35460$Ca2.2...@bin2.nnrp.aus1.giganews.com...

> > You're talking about needing point in time recovery, which must apply to
> > the ENTIRE database. Even if you had taken backups of datafile_1 and
> > datafile_2 at the same time, you can't simply restore them and move on.
> > They would have to be made consistent with the rest of the files in the
> > tablespace. You can't roll only a portion of a database back in time.
> > It's all or nothing.
>
> I understand that I would not be able to put both datafiles back and just
> move on, I was trying to simplify my example.
>
> How would you ever use a backup of an individual datafile? How would you
> ever use backups of individual datafiles that were not taken at the same
> time?
>

OK, here goes. One tablespace, two backups:

I backup a file on Monday night. Its SCN is 17000. During Tuesday day,
transactions are performed, pushing the SCN onwards. All these transactions
are logged, of course, so the archives produced that day contain SCNs 17001,
17002, 17003 and so on.

I backup a file on Tuesday. Its SCN is 18000. During Wednesday day,
transactions are performed. The archives contain the transactions with SCNs
18001, 18002 and so on.

On Wednesday afternoon, with the database (and hence data file 2) now at SCN
18634, data file 1 blows up. When you restore it from backup, it is at time
17000. That's not consistent with the rest of the database of course, so you
can't work with that tablespace. So you issue the command 'recover datafile
1' (or 'recover tablespace DATA'). That causes Oracle to retrieve all
transactions from the archives, starting with 17001. As it applies whichever
transactions actually affected that datafile, datafile 1 becomes more and
more up to date. Eventually, it stops applying transactions from the
archives, and starts lifting them from the online logs. File 1 is still
getting transactions re-applied to it. When you reach the end of log marker
in the current online redo log, the last possible transaction affecting any
datafile has been applied. Data file 1 is now at SCN 18634. It's consistent
with the rest of the database, and is fully functional. Not a single
committed transaction has been lost.

Had it been file 2 that blew up, the same procedure would have been
followed, with the only difference being that Oracle would only have had to
roll the file forward from SCN 18000 -so less redo would have been needed to
be applied. Recovery would have been quicker, it's true, but otherwise the
process is the same.

Therefore: the backups were taken at different times, but either one is
fully usable, *provided* every single piece of redo (archives and online)
is available from the time that the *first* file went into backup mode.
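
(In command form, the sequence just described looks roughly like this; the
path is illustrative. Restore the Monday backup of the damaged file with
O/S commands, then, with the file offline or the database mounted:)

recover automatic datafile '/u01/oradata/PROD/data01.dbf';
alter database datafile '/u01/oradata/PROD/data01.dbf' online;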

> Since it's all or nothing as you put it (and I AGREE) I cannot see a
> single situation where your individual datafile backups could be used.
>

Unfortunately, perhaps, for your argument, there are people all round the
world doing it on a nightly basis. And not losing sleep over it.

It's NOT an all or nothing matter if you are attempting to recover the
database COMPLETELY (Sean carefully made the point that it's all or nothing
for an INCOMPLETE recovery - where, for example, you want the recovery
process above to stop at SCN 18300, and not go further, because the
transaction at 18301 was a stupid one that did damage).

You know, I hope, that in 9i, it's possible to restore individual BLOCKS
from within a datafile? (OK, you have to use RMAN to do the deed, but it's
nevertheless true that you now don't even need to restore a complete
datafile, still less a complete tablespace). And whilst you aren't running
9i, the point is that the application of redo can recover anything, be it a
block, a file, a tablespace or a database. The principle hasn't changed
since way back; only the granularity has.

HJR

> > As I said already, a good datafile backup is a good datafile backup.
> > There is no dependency between individual datafile backups within a
> > tablespace.
>

> It's a good backup, but how would it ever be used???
>

See above.


>
>


Sean M
Jun 28, 2002, 6:30:34 PM
Joe Salmeri wrote:
>
> > You're talking about needing point in time recovery, which must apply to
> > the ENTIRE database. Even if you had taken backups of datafile_1 and
> > datafile_2 at the same time, you can't simply restore them and move on.
> > They would have to be made consistent with the rest of the files in the
> > tablespace. You can't roll only a portion of a database back in time.
> > It's all or nothing.
>
> I understand that I would not be able to put both datafiles back and just
> move on, I was trying to simplify my example.
>
> How would you ever use a backup of an individual datafile? How would you
> ever use backups of individual datafiles that were not taken at the same
> time?
>
> Since it's all or nothing as you put it (and I AGREE) I cannot see a single
> situation where your individual datafile backups could be used.
>
> > As I said already, a good datafile backup is a good datafile backup.
> > There is no dependency between individual datafile backups within a
> > tablespace.
>
> It's a good backup, but how would it ever be used???

I usually try to steer clear of RTFM-type posts, but I think you really
ought to revisit Oracle's Backup and Recovery Guide. B&R is a huge
topic, and asking questions like the ones you've posed above tells me
you're probably due for a refresher course. But, you asked for an
example, so I'll give you one:

You have a disk that contains one and only one datafile, though that
datafile is but one of many in a particular tablespace. You backup
certain disks each day of the week (in this example). So on Monday you
get a backup of this one lone datafile, but none of the other datafiles
in that tablespace. On Tuesday you lose that disk. Disk is trashed,
that one datafile is lost. So you replace that disk with a spare, and
restore that datafile from Monday's backup. You recover that datafile
with all the archives generated since the backup, bring it online, and
you're back in business. You didn't need to restore any of the other
datafiles from that tablespace since they weren't affected by the media
failure.
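
(Sketched as commands, under the same assumptions; the path is illustrative,
and the database can stay open while the one file is offline.)

alter database datafile '/u05/oradata/PROD/users03.dbf' offline;
-- replace the disk and restore the file from Monday's backup, then:
recover automatic datafile '/u05/oradata/PROD/users03.dbf';
alter database datafile '/u05/oradata/PROD/users03.dbf' online;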

Regards,
Sean M

Sean M
Jun 28, 2002, 6:41:58 PM
Joe Salmeri wrote:

>
> And just how would you deal with the issues that Sean stated when he said:
>
> "Because, maybe, they have 12 TB databases which would require 6000
> datafiles to manage if they were only 2 GB apiece? You can start
> hitting OS barriers with that many datafiles, like openfiles or, in our
> case, nflocks since we're on a NAS."

He'd probably tell you that I like to pull his chain (in good fun) now
and then with scenarios like these which are far and away the exception,
and by no means the rule. The above example is a real system, not a
pie-in-the-sky example, but it's also not a very common setup, and
defies much conventional wisdom. How big a database are you talking
about in your original question?

Regards,
Sean M

Howard J. Rogers
Jun 28, 2002, 6:48:19 PM

"Joe Salmeri" <JoeSa...@comcast.net> wrote in message
news:gj5T8.447996$%y.318...@bin4.nnrp.aus1.giganews.com...

>
> "Howard J. Rogers" <d...@hjrdba.com> wrote in message
> news:afiga5$o8f$1...@lust.ihug.co.nz...

> > > Maybe this is true with 8i, maybe this is true with locally managed
> > > tablespaces, but it DEFINITELY did not used to be true.
> >
> > Oh yes it did. I chose my figures carefully. Go back to Oracle 7, and
> > you won't find a performance difference between 30-extent segments, and
> > 1-extent segments.
> >
> > >I have seen
> > > terrible performance problems solved when the table was broken up
> > > into a large number of extents. And no it had nothing to do with
> > > indexes, they were rebuilt regularly. The only change in that
> > > situation was to consolidate the table into large extents. In that
> > > particular example there were less than 100 extents and the
> > > performance increase was 10x better.
> > >
> >
> > I hate to say I don't believe it. But I don't believe it. There's no
> > physical explanation you can come up with that justifies that sort of
> > performance improvement. And Oracle isn't voodoo.
>
> I don't care if you believe me or not; I'm not trying to convince you of
> what really happened. I know what really happened because I did the reorg
> and measured the before and after performance difference. It was not an
> isolated case either.

Well, put it this way then: I don't trust your capability to accurately
measure and (most importantly) explain a performance improvement, largely
because other posts in this thread indicate a complete ignorance of basic
backup and recovery techniques.

>
> > You originally asked about large files. The thread has been about you
> > not needing large files. You *especially* (but not particularly) don't
> > need them with locally managed tablespaces. Therefore, a solution to
> > your problem is to migrate to locally managed tablespaces, and you'll
> > find that you needn't worry in the slightest about large files ever
> > again.
>

> And just how would you deal with the issues that Sean stated when he said:
>
> "Because, maybe, they have 12 TB databases which would require 6000
> datafiles to manage if they were only 2 GB apiece? You can start
> hitting OS barriers with that many datafiles, like openfiles or, in our
> case, nflocks since we're on a NAS."
>

I would deal with it by being flexible, and by planning properly. Were I
planning a 12Tb database, I would ensure beforehand that it ran on an
operating system/file system that didn't suffer from a 2Gb filesize
limitation, and move up to 4Gb files. Or 8Gb. Or anything else as the need
arises.

Recall that Oracle permits around 65000 datafiles in a database. You match
your database software with your hardware/file system.

Are *you* running a 12Tb database?

> > However, if you'd read carefully, you'd have noticed the point (which
> > you choose not to believe, but that's up to you) that multiple extents
> > in dictionary managed tablespace is also not an issue, and hence the
> > issue similarly disappears whether you choose to use locally managed
> > tablespaces or not, provided you don't go utterly berserk with the
> > number of extents in dictionary managed tablespace. If you've a 4K
> > block size, as you should do on Linux with a file system, then 250-ish
> > extents in dictionary managed tablespace is not, and cannot, be a
> > problem.
>

> You seem to have missed an important point, I don't care whether my
> "opinion" is right or wrong, what I care about is the correct answer.
> It is not that I choose not to believe it, I saw first hand proof that it
> did make a difference with dictionary managed tablespaces.
>

Any performance improvement you saw was not the result of the reduction in
the number of extents.

I don't know how you did the re-organisation, obviously, but if you
exported, truncated and then imported, I hope that when you measured the
performance increase, you first bounced your instance. Otherwise, your
buffer cache would have been stuffed full of the data being selected, and
you'd have had a run of logical I/Os and not physical I/Os.

It's also possible that when you re-organised, you chose a new extent size
that was an exact multiple of your db_file_multiblock_read_count parameter,
where the old extents weren't. That would reduce the I/O too.

Did you control for these sorts of things in your testing? Or did you just
do a select before and after and deduce that the improvement "must have"
arisen from the reduction in the number of extents?

What you saw first hand is a performance improvement. The reasons for that
improvement are probably many and varied, but the number of extents has
nothing to do with it.

Not so very long ago, I posted here a test I did on both dictionary and
locally managed tablespace, creating segments comprising 700+ extents or 12
extents, contiguous and non-contiguous. A full scan of the table took the
same amount of time regardless. A few hundred extents make no difference
(several hundred for locally managed). If you think otherwise, fine. But
them's just not the facts.
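
(A rough sketch of that kind of test, under stated assumptions: the table
names, data volumes and storage figures are illustrative, the tablespace is
dictionary managed, and the instance should be bounced between timed runs
so the buffer cache doesn't skew the comparison.)

create table t_many_extents (x number, pad char(100))
  tablespace data storage (initial 64k next 64k pctincrease 0);
create table t_few_extents (x number, pad char(100))
  tablespace data storage (initial 100m next 100m pctincrease 0);
-- load identical rows into both tables, bounce the instance, then:
set timing on
select count(*) from t_many_extents;
select count(*) from t_few_extents;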

> > So what was all that nonsense about demanding 'contiguous extents'
> > then? You know, the bit where you said "Extents (initial allocations
> > or next allocations) MUST be contiguous"??
>

> It's not nonsense, Oracle is the one with the requirement that contiguous
> space must exist to create the extent.
>

Which you can achieve by not fragmenting a tablespace. It doesn't demand
that all extents are contiguous one with another, or be housed within the
one datafile.

> If I'm using hardware RAID Oracle does not know that the data is not
> residing on the same disk or that it is not really contiguous; as far as
> it is concerned I have 1 large 500 gb hard drive.
>
> Bottom line: We are never going to agree, and I am still stuck trying to
> find the answer to my original question. Gee, thanks a lot for your help.
>

You can lead a horse to water, but if it turns out to be a stubborn mule, it
will refuse to drink. You are barking up the wrong tree, with ideas which
have no basis in fact, and with evident lack of knowledge on backup and
recovery issues as well as performance tuning issues. If you're still stuck,
I can only try and suggest that you go and actually learn something about
Oracle internals and architecture.

HJR


Howard J. Rogers
Jun 28, 2002, 10:53:43 PM

"Joe Salmeri" <JoeSa...@comcast.net> wrote in message
news:Fg_S8.173467$_j6.8...@bin3.nnrp.aus1.giganews.com...
> In 7+ years of managing Oracle databases

The past 7 years just seem to have flown by, for you, don't they?

Here's a tip: go away and learn something about Oracle backup and Recovery
procedures and techniques.

>I have only had one incident where
> the database became corrupt and that was because of a hardware failure,
> not because of a bug in the operating system or in Oracle. I guess I have
> just been EXTREMELY lucky.
>

No, Oracle is extremely robust. Operating systems are generally robust
enough. You've just discovered the truth that backups are a waste of time
...until you need them. That's all.

> In the scenario you describe I would not have a warm comfort level
> restoring that one datafile.

Yup. I always recommend my students shouldn't do anything unless they have
the warm and fuzzies about them. New versions? Pah! Not until you have the
warm and fuzzies. CLOBS instead of LONGS? Hah! No warm and fuzzies there.
And bugger all this block level recovery nonsense in 9i R2 - my friend Joe
doesn't feel comfortable about them, so they can't be any good.

>What would happen if there were tables in that bad
> tablespace that had referential integrity to other tables in another
> tablespace?

Oh, OK, I'll bite.

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination D:\oracle\ora92\RDBMS
Oldest online log sequence 36
Next log sequence to archive 38
Current log sequence 38

<...You'll note we are in archivelog mode...>

SQL> create tablespace parent datafile 'c:\parent1.dbf' size 1m;
Tablespace created.
SQL> create tablespace child datafile 'c:\child.dbf' size 1m;
Tablespace created.

SQL> create table p1 (col1 number primary key, col2 char(5)) tablespace
parent;
Table created.
SQL> create table c1 (col1 number primary key, col2 number) tablespace
child;
Table created.
SQL> alter table c1 add (constraint blah_child_fk foreign key(col2)
references p1(col1));
Table altered.

<.....So we have C1, in one tablespace, referencing P1 in another
tablespace.....>

SQL> insert into p1 values (42342,'Row 1' );
1 row created.
SQL> commit;
Commit complete.

SQL> insert into c1 values (2376487,42342);
1 row created.
SQL> commit;
Commit complete.

<...So we have one child row referencing one parent row...>

SQL> alter tablespace parent begin backup;
Tablespace altered.

<...Copy the file using O/S commands. Which means that the backup of the
parent table only contains a reference to the one record, 42342...>

SQL> alter tablespace parent end backup;
Tablespace altered.

SQL> insert into p1 values (276234,'row2');
1 row created.
SQL> commit;
Commit complete.

SQL> insert into c1 values (3746,276234);
1 row created.
SQL> commit;
Commit complete.

SQL> alter tablespace child begin backup;
Tablespace altered.
SQL> alter tablespace child end backup;
Tablespace altered.

<...Note that after other transactions have happened, we backup the child
table. So the backup for this table includes 2 rows. The backup for the
parent table only includes one row...>

SQL> shutdown abort
ORACLE instance shut down.
C:\> del *.dbf

<....ooops.....>

SQL> connect / as sysdba
SQL> startup
ORACLE instance started.

Total System Global Area 135338868 bytes
Fixed Size 453492 bytes
Variable Size 109051904 bytes
Database Buffers 25165824 bytes
Redo Buffers 667648 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 16 - see DBWR trace file
ORA-01110: data file 16: 'C:\PARENT1.DBF'

<...use O/S commands to retrieve the backed-up copy, then...>

SQL> recover automatic datafile 16;
Media recovery complete.

<...have another go....>

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01113: file 17 needs media recovery
ORA-01110: data file 17: 'C:\CHILD.DBF'

<....bummer. OK, use more O/S commands to restore the backup for file 17.
Then....>

SQL> recover automatic datafile 17;
Media recovery complete.
SQL> alter database open;
Database altered.

SQL> select * from p1;

COL1 COL2
---------- -----
42342 Row 1
276234 row2

SQL> select * from c1;

COL1 COL2
---------- ----------
2376487 42342
3746 276234

You will therefore note that although the two datafiles were backed up at
different times, both were successfully recovered. I leave it to you as an
exercise to work out what happens when you delete first one, then the other,
in either order (hint: it makes no difference, because I've just tested it
in case I might have been wrong).

This is *really* backup and recovery 101. It's very basic. It's been the
same since about version 6. There is nothing intellectually strenuously
taxing about it.

>
> Not sure what the 4gb boundary you are referring to is, but I have had NT
> 4.0 filesystems that were 40 gb in size that never experienced any
> problems.
>

If you are telling me that you created files on an NT system which were 40Gb
in size, I'd sack you, were you my DBA. I'd sack you anyway for exhibiting
precisely zero knowledge or understanding about Oracle backup and recovery
procedures. I'd sack you, too, because you can't seem to control for all the
variables in a performance tuning test (see other posts).

Just as well I'm not your boss. And God help me if you're ever mine, because
I can see long evenings of 'please explain' yawning before me were it ever
so.

HJR


>
> "Niall Litchfield" <n-litc...@audit-commission.gov.uk> wrote in message
> news:3d1c6b90$0$231$ed9e...@reading.news.pipex.net...

Message has been deleted
Message has been deleted

Howard J. Rogers
Jun 28, 2002, 11:34:39 PM

"Joe Salmeri" <JoeSa...@comcast.net> wrote in message
news:gA9T8.40403$Ca2.2...@bin2.nnrp.aus1.giganews.com...
> Howard,
>
> Thank you! That is the level of detail that I needed to better understand
> your position.
>
> Of the 3 backup options available (cold, hot, export) I have always been
> fortunate in being able to do a cold backup as the primary backup and an
> export as the fallback option, therefore I have not really had the need or
> opportunity to work with the hot backup options.
>
> One question regarding your example on Wednesday when data file 1 blows
> up:
>
> After restoring, data file 1 is at SCN 17000 and data file 2 (and the
> rest of the database) is at SCN 18634.
>
> Before you recover datafile 1 you realize that a change was made to the
> database after the data file 1 backup (SCN 17000) and before the database
> was at SCN 18634. Let's say that you determine that you want to restore
> to 9am on Wednesday and the SCN at that time was 17985.
>
> From your example I can see that data file 1 can be recovered from 17000
> up to 17985 (9am on Wednesday) but how is data file 2 handled since it
> has an SCN greater than the point in time that you want to recover to? Do
> you need to find a backup of datafile 2 that is BEFORE the Tuesday backup
> (SCN 18000) or will the recovery process back out those additional
> changes?

No, you can only ever roll a datafile FORWARD. Whenever you feel the
temptation to roll a file backwards, that's a cue for an 'INCOMPLETE
RECOVERY'. That means you restore every single data file (the 'all or
nothing' option Sean was talking about) from the last available backups,
and roll them forward until time 17985.

That gets every datafile consistent with every other.

But, unfortunately, that doesn't get the data files consistent with the
Control File (which is still there, happily thinking the time is 18634).

Therefore, you issue the magic command 'alter database open resetlogs' at
the end of the recovery phase. That gets all the datafiles consistent with
the controlfile by the simple expedient of setting the clock back to 0.
(Actually, and just in case Sean M is watching, the SCN is itself not reset,
the log sequence number is reset. But that's a technical truth: the
essential point is, the database now thinks time has just started).
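
(Sketched as commands, using the SCN from the example; paths and details
are illustrative. Every datafile must be restored from backup first.)

shutdown immediate
-- restore ALL datafiles from the last backup with O/S commands, then:
startup mount
recover database until change 17985;
alter database open resetlogs;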

And since it thinks time has just begun, it's oblivious to the existence of
prior backups, or prior archives. Therefore, *all* prior backups and
archives are totally and utterly worthless. They cannot be used to recover
what is, in effect, a new database. Therefore, you are supposed to
immediately shut down the database and start a new cold backup.

However, that's a pain, and lots of people do new *hot* backups. Which is
fine, so long as the backup cycle completes successfully. Until it does, you
have no protection for your database at all. (And again, just in case, it is
true that you *can* use prior backups and archives in extremis, but it isn't
pleasant, and you don't want to go there if you can avoid it).

So resetlogs are EXCEEDINGLY expensive. To the point where the only time
you'd ever want to use an incomplete recovery is when everything else has
failed.

Incompletes are very rare events. They are awkward, difficult to perform,
and at the mercy of Sod's Law ("If anything can go wrong, it will"). You
don't want to go there unless absolutely necessary.

But if you do, it's not a sweat. Oracle copes. There's not a recovery
situation it *can't* cope with, one way or another. Which sounds like
marketing bullshit but (and here's the true majesty of the product) isn't.

HJR


>
> Since the other data files would also be at SCN 18634 at the point that
> you recovered data file 1 I would expect that they would be in the same
> situation as data file 2? (either it backs out the changes or I need to
> find an older backup of those data files too).


>
> "Howard J. Rogers" <d...@hjrdba.com> wrote in message

> news:afins3$uqh$1...@lust.ihug.co.nz...

Sean M
Jun 29, 2002, 12:07:11 AM
"Howard J. Rogers" wrote:
>
> So resetlogs are EXCEEDINGLY expensive. To the point where the only time
> you'd ever want to use an incomplete recovery is when everything else has
> failed.
>
> Incompletes are very rare events. They are awkward, difficult to perform,
> and at the mercy of Sod's Law ("If anything can go wrong, it will"). You
> don't want to go there unless absolutely necessary.

I snipped plenty of good advice/description, all of which I agree with.
My only contribution to the above would be to not color incomplete
recoveries in such a dark light. They certainly aren't your first
choice usually, but they aren't the end of the world and, IMO, aren't
much more awkward/difficult than a complete recovery. But that's more
opinion than fact.



> But if you do, it's not a sweat. Oracle copes. There's not a recovery
> situation it *can't* cope with, one way or another. Which sounds like
> marketing bullshit but (and here's the true majesty of the product) isn't.

:) One of the reasons I like B&R so much... it just works.

Regards,
Sean M

Sean M
Jun 29, 2002, 12:09:46 AM
Joe Salmeri wrote:
>
> Thanks for the explanation. I have pulled out the B&R guide because you
> are correct it has been quite a while.

Great, and I hope you don't feel like we were trying to chase you off
the group - it's just that B&R is a serious subject, and one of the
least forgiving of mistakes. You want to make sure you "get it" before
you do it for real. You can screw up all kinds of things in Oracle, but
if you have a reliable backup, you're still OK. Without the backup...
scary.

Regards,
Sean M

Howard J. Rogers
Jun 29, 2002, 12:16:28 AM

"Sean M" <smck...@earthlink.net> wrote in message
news:3D1D321A...@earthlink.net...

> "Howard J. Rogers" wrote:
> >
> > So resetlogs are EXCEEDINGLY expensive. To the point where the only time
> > you'd ever want to use an incomplete recovery is when everything else
> > has failed.
> >
> > Incompletes are very rare events. They are awkward, difficult to
> > perform, and at the mercy of Sod's Law ("If anything can go wrong, it
> > will"). You don't want to go there unless absolutely necessary.
>
> I snipped plenty of good advice/description, all of which I agree with.
> My only contribution to the above would be to not color incomplete
> recoveries in such a dark light. They certainly aren't your first
> choice usually, but they aren't the end of the world and, IMO, aren't
> much more awkward/difficult than a complete recovery. But that's more
> opinion than fact.

You must be one of those rare souls that remembers to take a preventative
backup of their control files and redo logs before starting. 'Cos otherwise,
if the recovery doesn't manage to bring back what you hoped it would, you're
stuffed. Then there's the little matter of "FILE 1 NEEDS MORE RECOVERY TO BE
CONSISTENT', meaning.... er. what the stuff did I restore, and why didn't I
restore everything??? And so on.

I can pull off an incomplete recovery 100% of the time I go to demo one
these days (but it wasn't always like that!!). They still give me the
heebie-jeebies, though: stuff it up, and there's no come-back.

On the other hand, I agree with you: if you follow standard procedures,
religiously, then there's nothing intrinsically so awful about them as to
make you want to chop your head off.

In truth, their awfulness comes from the fact that the database MUST be down
for the duration, and they are therefore very expensive in terms of
concurrent access to data. I suppose it's a minor inconsequential matter
that they happen, also, to result in the loss of perfectly good data!

At the end of the day, though, I know what you are saying, and agree with
it.

Regards
HJR

Message has been deleted

Howard J. Rogers
Jun 29, 2002, 12:43:31 AM
Mmmm. However, professionalism is in the eye of the beholder.

Consider these choice quotes:

"I don't just blindly post requests for help until I have exhausted all
other
possibilities." Swiftly followed by " I have pulled out the B&R guide


because you
are correct it has been quite a while."

"The backup you suggest is worthless to me."

"In your mind."

Never mind the classic "the performance increase was 10x better" after
making a 100-extent table a 1-extent table.

I won't point out who made all these prime comments, nor what the context of
each was. You know who you are, and can read a thread.

HJR

"Joe Salmeri" <JoeSa...@comcast.net> wrote in message

news:aGaT8.500349$Oa1.33...@bin8.nnrp.aus1.giganews.com...
> Sean,
>
> Thanks for your professionalism and proper courtesy in responding; it is
> very much appreciated.
>
> Of the 3 backup options available (cold, hot, export) I have always been
> fortunate in being able to do a cold backup as the primary backup and an
> export as the fallback option, therefore I have not really had the need
> or opportunity to work with the hot backup options. I'm sure that is not
> the norm, but I always had the luxury of taking the database down at
> night for a full backup and it certainly is a lot simpler to manage that
> way.
>
> Regards,
>
> Joe


>
>
> "Sean M" <smck...@earthlink.net> wrote in message

> news:3D1D32C9...@earthlink.net...

Sean M
Jun 29, 2002, 12:55:12 AM
"Howard J. Rogers" wrote:
>
> "Sean M" <smck...@earthlink.net> wrote...

> >
> > I snipped plenty of good advice/description, all of which I agree with.
> > My only contribution to the above would be to not color incomplete
> > recoveries in such a dark light. They certainly aren't your first
> > choice usually, but they aren't the end of the world and, IMO, aren't
> > much more awkward/difficult than a complete recovery. But that's more
> > opinion than fact.
>
> You must be one of those rare souls that remembers to take a preventative
> backup of their control files and redo logs before starting. 'Cos otherwise,
> if the recovery doesn't manage to bring back what you hoped it would, you're
> stuffed.

How so? I mean, what do I need redo logs for during an incomplete
recovery (assuming you meant online redos)? If you meant offline redos
(i.e. archive logs), why would I need to back them up before I start?
And I can always just recreate a controlfile... no real need to back it
up before I begin. Unless... were you talking about complete
recoveries? In that case I agree, yes, backup everything before your
start - but I thought we were talking about incomplete recoveries.

> Then there's the little matter of "FILE 1 NEEDS MORE RECOVERY TO BE
> CONSISTENT", meaning.... er, what the stuff did I restore, and why didn't I
> restore everything??? And so on.
>
> I can pull off an incomplete recovery 100% of the time I go to demo one
> these days (but it wasn't always like that!!). They still give me the
> heebie-jeebies, though: stuff it up, and there's no come-back.

Sure there is! Just re-restore your backup, and start again. Might
cost you some time if the DB is big, but certainly possible.



> On the other hand, I agree with you: if you follow standard procedures,
> religiously, then there's nothing intrinsically so awful about them as to
> make you want to chop your head off.
>
> In truth, their awfulness comes from the fact that the database MUST be down
> for the duration, and they are therefore very expensive in terms of
> concurrent access to data. I suppose it's a minor inconsequential matter
> that they happen, also, to result in the loss of perfectly good data!

True enough.


> At the end of the day, though, I know what you are saying, and agree with
> it.

Yes, I think we're in violent agreement once again. ;)

Regards,
Sean M

Howard J. Rogers

Jun 29, 2002, 1:17:07 AM

"Sean M" <smck...@earthlink.net> wrote in message
news:3D1D3D4E...@earthlink.net...

> "Howard J. Rogers" wrote:
> >
> > "Sean M" <smck...@earthlink.net> wrote...
> > >
> > > I snipped plenty of good advice/description, all of which I agree with.
> > > My only contribution to the above would be to not color incomplete
> > > recoveries in such a dark light. They certainly aren't your first
> > > choice usually, but they aren't the end of the world and, IMO, aren't
> > > much more awkward/difficult than a complete recovery. But that's more
> > > opinion than fact.
> >
> > You must be one of those rare souls that remembers to take a preventative
> > backup of their control files and redo logs before starting. 'Cos
> > otherwise, if the recovery doesn't manage to bring back what you hoped it
> > would, you're stuffed.
>
> How so? I mean, what do I need redo logs for during an incomplete
> recovery (assuming you meant online redos)? If you meant offline redos
> (i.e. archive logs), why would I need to back them up before I start?
> And I can always just recreate a controlfile... no real need to back it
> up before I begin. Unless... were you talking about complete
> recoveries? In that case I agree, yes, backup everything before you
> start - but I thought we were talking about incomplete recoveries.


We were. Recover until time 10.00am, when you were told Scott dropped the
SALES table. Open resetlogs. Bugger me.... SALES is still missing.

Now repeat the incomplete recovery.

You can restore the datafiles, for sure. But your control file now thinks
its time 0. Your online redo logs think likewise.

I.e., before the recovery, you had:

Control: time M
Datafiles: time M
Online Redo : time M

Restore the datafiles:

Control: M
Datafiles: F
Online Redo : M

Perform Recovery:

Control : M
Datafiles : L
Online Redo : M

Open resetlogs:

Control : A
Datafiles : A
Online Redo: A

Now repeat the recovery....

Control: A
Datafiles: F
Online Redo: A

Your control file and redo logs are from a time *before* the datafiles.
Rather worse, they are from a completely different incarnation of the
database. You won't be able to repeat the recovery.

However, if you've backed up the original control files and redo logs first,
then given:

Control: A
Datafiles: A
Online Redo : A

You can restore the precautionary backup, to give this:

Control : M
Datafiles: A
Online Redo : M

Now re-restore the datafiles:

Control : M
Datafiles: F
Online Redo : M

And "M-F-M" is the state we were in to perform the initial recovery. So now
you can recover until time "K".

Without being able to restore the initial control files or online redo logs,
you cannot repeat an incomplete recovery if it proves that the first attempt
sailed straight past the dodgy redo.
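
For instance, the whole dance might run something like this (only a
sketch - the paths, database name and times below are made-up placeholders):

REM Precautionary copies first, at the OS level, e.g.:
REM   cp /u01/oradata/PROD/control01.ctl /backup/precaution/
REM   cp /u01/oradata/PROD/redo0*.log    /backup/precaution/

-- Restore the datafiles from the proper backup, then:
STARTUP MOUNT;
RECOVER DATABASE UNTIL TIME '2002-06-25:10:00:00';
ALTER DATABASE OPEN RESETLOGS;

-- SALES still missing? Copy the precautionary control files and online
-- logs back over the current ones, re-restore the datafiles, and repeat
-- the RECOVER with an earlier time.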


>
> > Then there's the little matter of "FILE 1 NEEDS MORE RECOVERY TO BE
> > CONSISTENT", meaning.... er, what the stuff did I restore, and why
> > didn't I restore everything??? And so on.
> >
> > I can pull off an incomplete recovery 100% of the time I go to demo one
> > these days (but it wasn't always like that!!). They still give me the
> > heebie-jeebies, though: stuff it up, and there's no come-back.
>
> Sure there is! Just re-restore your backup, and start again. Might
> cost you some time if the DB is big, but certainly possible.

You can't do that. If you restore the datafiles from the "proper" backup,
you are stuck with a control file and online logs from a new incarnation. If
you restore *everything* from the previous backup, you've possibly lost
transactions in the current log you wanted to re-perform.

'Fraid the manuals are unambiguous on the subject, as am I: if you don't
take a precautionary backup of your control files and online logs before
beginning an incomplete recovery, you cannot repeat said recovery when you
find that it didn't bring back what you wanted it to.

Regards
HJR

Sean M

Jun 29, 2002, 1:35:47 AM

Sure you can... just recreate the controlfile and delete the online redo
logs. Re-restore your datafiles, and you're off and running again.

> However, if you've backed up the original control files and redo logs first,
> then given:
>
> Control: A
> Datafiles: A
> Online Redo : A
>
> You can restore the precautionary backup, to give this:
>
> Control : M
> Datafiles: A
> Online Redo : M
>
> Now re-restore the datafiles:
>
> Control : M
> Datafiles: F
> Online Redo : M
>
> And "M-F-M" is the state we were in to perform the initial recovery. So now
> you can recover until time "K".
>
> Without being able to restore the initial control files or online redo logs,
> you cannot repeat an incomplete recovery if it proves that the first attempt
> sailed straight past the dodgy redo.

That's not correct. First of all, you can just throw away your online
redo logs if you're doing an incomplete recovery - they're worthless
since they contain information newer than the point to which you're
recovering. So the online redo part of your argument doesn't stand up.
As for the controlfile, just recreate it like I said in my last post.

> > Sure there is! Just re-restore your backup, and start again. Might
> > cost you some time if the DB is big, but certainly possible.
>
> You can't do that.

Sure I can, and I have more than once!

> If you restore the datafiles from the "proper" backup,
> you are stuck with a control file and online logs from a new incarnation.

So what? Blow 'em away, recreate the controlfile, and I'm fine.

> If
> you restore *everything* from the previous backup, you've possibly lost
> transactions in the current log you wanted to re-perform.

Transactions in the current log? We're talking incomplete recovery here
- we never get to the online redos. Why do we need them? Delete them,
and let the next open resetlogs recreate them.



> 'Fraid the manuals are unambiguous on the subject, as be me: if you don't
> take a precautionary backup of your control files and online logs before
> beginning an incomplete recovery, you cannot repeat said recovery when you
> find that it didn't bring back what you wanted it to.

I totally disagree - UNLESS you're talking about an incomplete recovery
to an extremely recent point in time, so recent that there is still data
in the online redos that you're rolling forward onto. But I was pretty
sure we were talking about rolling back in time a few hours/days/weeks,
where we only apply *archived* redo to catch up. But in general, for an
incomplete recovery, Oracle never even reads the online redo logs, so it
will never complain that they're from a different time or incarnation.

Regards,
Sean

Sean M

Jun 29, 2002, 1:45:01 AM
Addendum: Lest you think I'm being daft, I'll give you a concrete
example. Take a backup of a database at noon Monday. Continue
transactions until noon Tuesday. Decide you want the database restored
to the way it looked at 9 a.m. Tuesday. Blow away EVERYTHING you have
for the current database - control files, online redo logs, datafiles -
everything. Restore your datafiles from Monday's backup. Recreate your
controlfile. Now roll forward to 9 a.m. Open resetlogs, and voila,
happy database (Oracle automatically creates your online redos). I
think we're in agreement up to here.

Oops! You really wanted to roll forward to 10 a.m. Tuesday you say?
OK, blow everything away again - everything. Re-restore your datafiles
from Monday's backup. Recreate your controlfile (again). Roll forward
to 10 a.m. this time. Open resetlogs. There - I've just repeated a 2nd
incomplete recovery with no controlfile or redo log backup after
stuffing up my first attempt. What am I missing? I've done exactly
this before, and it works just fine.
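
In SQL*Plus terms that second pass is roughly the following (a sketch
only - the database name, file names and time are made-up placeholders):

-- Datafiles re-restored from Monday's backup at the OS level, then:
STARTUP NOMOUNT;

-- Recreate the controlfile; every log group and every datafile must be
-- listed (full lists omitted here):
CREATE CONTROLFILE REUSE DATABASE "PROD" RESETLOGS ARCHIVELOG
  LOGFILE GROUP 1 '/u01/oradata/PROD/redo01.log' SIZE 100M
  DATAFILE '/u01/oradata/PROD/system01.dbf';

RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL TIME '2002-06-25:10:00:00';
ALTER DATABASE OPEN RESETLOGS;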

Regards,
Sean M


Howard J. Rogers

Jun 29, 2002, 1:51:38 AM

"Sean M" <smck...@earthlink.net> wrote in message
news:3D1D46E3...@earthlink.net...

Not so sure I would want to delete the online redo logs, are you? You meant
that repeating the recovery, followed by a resetlogs, will recreate them.
Fair enough. But what happens if the user error you are trying to avoid
repeating was in the online redo logs? So you want to roll forward partway
through (what was) the current log, and stop short of the end.

You've just proposed deleting them. Guess I can't roll forward through what
they contained then.

Who said? An incomplete recovery means stop short of the end-of-redo-log
marker. It says nothing about "I don't want the current log AT ALL".

> So the online redo part of your argument doesn't stand up.

Oh yes it does.

> As for the controlfile, just recreate it like I said in my last post.

Yeah, just re-create it. 'Xcept your onlines aren't as disposable as you
might like. And recreating the controlfile using the 'backup to trace'
script means that Oracle scans the datafiles for the highest SCN and forces
the new controlfile to agree to that SCN. Which means that there is no
incompatibility between the datafiles and the controlfile. Which means that
trying a repeat incomplete recovery gets, er, shall we say "interesting".

>
> > > Sure there is! Just re-restore your backup, and start again. Might
> > > cost you some time if the DB is big, but certainly possible.
> >
> > You can't do that.
>
> Sure I can, and I have more than once!
>
> > If you restore the datafiles from the "proper" backup,
> > you are stuck with a control file and online logs from a new
> > incarnation.
>
> So what? Blow 'em away, recreate the controlfile, and I'm fine.

Yup. Who cares about online redo logs anyway. They only contain committed
transactions I'd like to re-perform!!

>
> > If
> > you restore *everything* from the previous backup, you've possibly lost
> > transactions in the current log you wanted to re-perform.
>
> Transactions in the current log? We're talking incomplete recovery here
> - we never get to the online redos. Why do we need them? Delete them,
> and let the next open resetlogs re-recreate them.
>

Bollocks. Incomplete recoveries do not mean "don't open the current log".
They mean "don't replay all redo". Quite a different matter.

> > 'Fraid the manuals are unambiguous on the subject, as am I: if you don't
> > take a precautionary backup of your control files and online logs before
> > beginning an incomplete recovery, you cannot repeat said recovery when you
> > find that it didn't bring back what you wanted it to.
>
> I totally disagree - UNLESS you're talking about an incomplete recovery
> to an extremely recent point in time,

Oh, ah. So now we realise that the onlines are actually quite worthwhile,
because we're trying to recover to a "recent" point in time. Might surprise
you to know that an online redo log can contain anywhere up to 24
hours' worth of transactions in databases I'm associated with (I dislike log
switches because of the checkpoints and performance degradation they cause).
Even when it's not that extreme, you aren't (I hope) postulating a law that
says 'recoveries shall not be to 15 minutes ago'.

> so recent that there is still data
> in the online redos that you're rolling forward onto.

Ah hah! He's got it. By George he's got it. (Stop me doing the whole My Fair
Lady bit).

> But I was pretty
> sure we were talking about rolling back in time a few hours/days/weeks,

Why?

> where we only apply *archived* redo to catch up. But in general, for an
> incomplete recovery, Oracle never even reads the online redo logs, so it
> will never complain that they're from a different time or incarnation.

How in God's name can you make that statement? "In general"... what does
that mean? "Excuse me, Mr DBA, I just dropped a table". "When did you do
it?" "Er, 15 minutes ago". "Too recent, sod off".

I don't think so.

"In general", an incomplete recovery is not repeatable without a backup of
the controlfiles and online redo logs.

Regards
HJR

>
> Regards,
> Sean


Howard J. Rogers

Jun 29, 2002, 1:56:29 AM
You're missing the small fact that (presumably) 10am is in the archive logs,
just as 9am was. Had it been in the current logs, you'd be well and truly up
a certain creek without a certain navigational and propulsional instrument.

You've been lucky. The general case remains: backup controlfiles and redo
logs *if* you wish to be able to perform a perfect repeat of the recovery.

Regards
HJR


"Sean M" <smck...@earthlink.net> wrote in message

news:3D1D4910...@earthlink.net...

Howard J. Rogers

Jun 29, 2002, 2:07:18 AM

"Joe Salmeri" <JoeSa...@comcast.net> wrote in message
news:JPbT8.504153$Oa1.33...@bin8.nnrp.aus1.giganews.com...

> "Howard J. Rogers" <d...@hjrdba.com> wrote in message
> news:afjdvd$kl0$1...@lust.ihug.co.nz...

> > Mmmm. However, professionalism is in the eye of the beholder.
> >
> > Consider these choice quotes:
> >
> > "I don't just blindly post requests for help until I have exhausted all
> > other
> > possibilities." Swiftly followed by " I have pulled out the B&R guide
> > because you
> > are correct it has been quite a while."
>
> Totally out of context.

Uh huh. "I've exhausted all other possibilites". And "I haven't read that
manual in quite a while". Yup, I see the context alright.

> There is no relationship between the two comments
> at all. The original comment was made because I had searched for several
> days for an answer to my original question (which has really never been
> answered BTW) and was basically told to do a Google search (my first step
> in the research).

Another reason I'd sack you if you were my DBA. You go to Google *before*
reading the manuals?

>
> The second was made regarding a topic that really has nothing to do with
> the original question. Of the 3 available backup options, I don't use the
> one that you commonly use; I have always used the other two.


>
> > "The backup you suggest is worthless to me."
>

> I see nothing wrong with saying that I do not see the value in the
> suggestion made.

I see nothing wrong with someone saying "I don't do that sort of backup, but
thanks for the tip". I see everything wrong with "your suggestion is
worthless", which you came perilously close to saying.

> All that means is that I do not see or understand what
> value it provides.
>

Shame you couldn't say it that way, then.

> > "In your mind."
>
> I don't see anything wrong with having a different opinion than someone
> else.

Strangely enough, what Oracle does with extents, and how it reads them,
isn't a matter of "opinion". It's measurable, testable, and demonstrable. So
it wasn't "in my mind" (dismissive) but "not in my experience" (open to
debate).

> It was even made in response to a message that did not agree with my
> opinion.


>
> > I won't point out who made all these prime comments, nor what the
> > context of each was. You know who you are, and can read a thread.
>

> Try as you may I will not stoop to your low level and quote all of the
> arrogant comments you have made. It's just not worth wasting my time.


I don't want you to quote "arrogant comments". I'd like you to actually
*learn* something.

But feel free to go ahead and make my day.

All I have done in this entire thread is to make the following unarguable
comments:

1. the number of extents is of no importance until you get into the
thousands (several hundreds if dictionary managed)
2. therefore striving officiously to make a segment fit in one extent is
pointless
3. given a table doesn't need to fit in one extent, you have no need of
enormous datafiles
4. enormous datafiles are not good for database management, flexibility or
administrative options
5. you don't seem to know very much about Oracle internals, or backup and
recovery principles, so I don't rate your ability to performance test
reliably very highly.

Now.... which one of those would you like to take issue with?

HJR

Sean M

Jun 29, 2002, 2:29:39 AM
"Howard J. Rogers" wrote:
>
> Not so sure I would want to delete the online redo logs, are you? You meant
> that repeating the recovery, followed by a resetlogs, will recreate them.
> Fair enough. But what happens if the user error you are trying to avoid
> repeating was in the online redo logs? So you want to roll forward partway
> through (what was) the current log, and stop short of the end.
>
> You've just proposed deleting them. Guess I can't roll forward through what
> they contained then.
>
> Who said? An incomplete recovery means stop short of the end of redo log

Fair enough.

> >So the online redo part of your argument doesn't stand up.
>
> Oh yes it does.
>
> > As for the controlfile, just recreate it like I said in my last post.
>
> Yeah, just re-create it. 'Xcept your onlines aren't as disposable as you
> might like. And recreating the controlfile using the 'backup to trace'
> script means that Oracle scans the datafiles for the highest SCN and forces
> the new controlfile to agree to that SCN. Which means that there is no
> incompatibility between the datafiles and the controlfile. Which means that

Not sure I follow... after I recreate a controlfile I can certainly roll
forward past the highest SCN in the datafiles. I can roll forward
through every single redo log in my possession with a recreated
controlfile.

> Yup. Who cares about online redo logs anyway. They only contain committed
> transactions I'd like to re-perform!!
>

> Bollocks. Incomplete recoveries do not mean "don't open the current log".
> They mean "don't replay all redo". Quite a different matter.
>

> Oh, ah. So now we realise that the onlines are actually quite worthwhile,
> because we're trying to recover to a "recent" point in time. Might surprise
> you to know that an online redo log can contain anywhere up to 24
> hours' worth of transactions in databases I'm associated with (I dislike log
> switches because of the checkpoints and performance degradation they cause).
> Even when it's not that extreme, you aren't (I hope) postulating a law that
> says 'recoveries shall not be to 15 minutes ago'.

No, just accustomed to databases with much more frequent switching.

> Ah hah! He's got it. By George he's got it. (Stop me doing the whole My Fair
> Lady bit).

Bravo Prof. Higgins. ;) G.B. Shaw would be proud.

> > But I was pretty
> > sure we were talking about rolling back in time a few hours/days/weeks,
>
> Why?

Because that's the time period we've been using in examples throughout
this thread (backup Monday, restore Tuesday, etc...). And that's why I
added a big "UNLESS" to my post - to be sure you didn't have a different
example in mind.



> > where we only apply *archived* redo to catch up. But in general, for an
> > incomplete recovery, Oracle never even reads the online redo logs, so it
> > will never complain that they're from a different time or incarnation.
>
> How in God's name can you make that statement? "In general"... what does
> that mean? "Excuse me, Mr DBA, I just dropped a table". "When did you do
> it?" "Er, 15 minutes ago". "Too recent, sod off".
>
> I don't think so.

In practice, most incomplete recoveries of production databases roll
back to a point in time earlier than the data contained in the online
redos. That's been my experience, and I bet most others' that do this
on real production systems. Why? Because most real production systems
switch logs rather frequently (say, a few times per hour), and it
usually takes longer than that to 1) discover you need to roll back and
2) convince management this is important enough to stop production
activity and 3) shutdown the database. I'd submit that your example of
databases with one log switch a day is the exception, not the rule.
That's why I made the "in general" statement. But I'll concede that,
for an absolute rule you can apply to *all* incomplete recoveries, you
should backup your online redos before you begin.

> "In general", an incomplete recovery is not repeatable without a backup of
> the controlfiles and online redo logs.

I disagree that you need a backup of the controlfile since you can
recreate it. But ok, "in general." :)

Regards,
Sean M

Howard J. Rogers

Jun 29, 2002, 2:40:46 AM

"Sean M" <smck...@earthlink.net> wrote in message
news:3D1D5377...@earthlink.net...

Poor you.

>
> > Ah hah! He's got it. By George he's got it. (Stop me doing the whole My
> > Fair Lady bit).
>
> Bravo Prof. Higgins. ;) G.B. Shaw would be proud.
>
> > > But I was pretty
> > > sure we were talking about rolling back in time a few hours/days/weeks,
> >
> > Why?
>
> Because that's the time period we've been using in examples througout
> this thread (backup Monday, restore Tuesday, etc...).

Yeah, but if you restore Tuesday, that doesn't mean you have to *recover* all
of it.

> And that's why I
> added a big "UNLESS" to my post - to be sure you didn't have a different
> example in mind.
>
> > > where we only apply *archived* redo to catch up. But in general, for an
> > > incomplete recovery, Oracle never even reads the online redo logs, so it
> > > will never complain that they're from a different time or incarnation.
> >
> > How in God's name can you make that statement? "In general"... what does
> > that mean? "Excuse me, Mr DBA, I just dropped a table". "When did you do
> > it?" "Er, 15 minutes ago". "Too recent, sod off".
> >
> > I don't think so.
>
> In practice, most incomplete recoveries of production databases roll
> back to a point in time earlier than the data contained in the online
> redos. That's been my experience, and I bet most others' that do this
> on real production systems. Why? Because most real production systems
> switch logs rather frequently (say, a few times per hour),

I hope not. But perhaps things are different south of the equator. Most DBAs
I know of here would switch every hour or so. And I disapprove (since they
are only protecting themselves against Instance failure by doing so, and I
happen to think that hardware and most O/Ses, not to mention UPSes have
gotten to the point where I hope we can start to stop worrying about
instance failures).

> and it
> usually takes longer than that to 1) discover you need to roll back and
> 2) convince management this is important enough to stop production
> activity and 3) shutdown the database. I'd submit that your example of
> databases with one log switch a day is the exception, not the rule.

I agree (but I'm working to change opinion!).

> That's why I made the "in general" statement. But I'll concede that,
> for an absolute rule you can apply to *all* incomplete recoveries, you
> should backup your online redos before you begin.
>

Great. As ever, we agree. Eventually.

> > "In general", an incomplete recovery is not repeatable without a backup
of
> > the controlfiles and online redo logs.
>
> I disagree that you need a backup of the controlfile since you can
> recreate it. But ok, "in general." :)

OK, I'll fess up. The reason I don't like the idea of not backing up the
controlfile is not that you can't recreate it as you state, but that to do
so requires either (a) enormous skill at knocking up a 'create controlfile'
statement on the fly (doubtful) or (b) using the 'backup to trace' script.
And the trouble with (b) is that that script (c) can be out of date unless
you backup to trace routinely (which I always recommend, but we can't always
be perfect), requiring manual editing; or (d) proposes to perform a complete
recovery for you, so that you either (e) have to edit it to remove or alter
the 'recover database' line it contains and add in a reference to
'resetlogs' or (f) accept that you get a failed recovery.
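
To be concrete about (b) - again just a sketch; the trace file lands in
user_dump_dest under a generated name:

-- Run this routinely so the generated script stays current:
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

-- The trace holds a CREATE CONTROLFILE statement followed by a plain
-- RECOVER DATABASE and an ALTER DATABASE OPEN. To repeat an incomplete
-- recovery you must hand-edit that tail into something like:
--   RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL TIME ...;
--   ALTER DATABASE OPEN RESETLOGS;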

It is *sooooo* much easier just to restore from a precautionary backup, and
take control of the exercise.

But again, we'll agree that it is possible to recreate the controlfile in
the event that you don't mind waltzing through the alphabet.

;-)
HJR

>
> Regards,
> Sean M


Sean M

Jun 29, 2002, 3:06:18 AM
"Howard J. Rogers" wrote:
>
> "Sean M" <smck...@earthlink.net> wrote ...
>
> > In practice, most incomplete recoveries of production databases roll
> > back to a point in time earlier than the data contained in the online
> > redos. That's been my experience, and I bet most others' that do this
> > on real production systems. Why? Because most real production systems
> > switch logs rather frequently (say, a few times per hour),
>
> I hope not. But perhaps things are different south of the equator. Most DBAs
> I know of here would switch every hour or so. And I disapprove (since they
> are only protecting themselves against Instance failure by doing so, and I
> happen to think that hardware and most O/Ses, not to mention UPSes have
> gotten to the point where I hope we can start to stop worrying about
> instance failures).

[What the hell, more thread drift...]

I'd postulate at least 2 other reasons for more frequent log switches
than once a day. First, for many of our databases (and in many other
large shops out there), we generate 50-200 GB of redo a day. Making
50-200 GB online redo logs really isn't very practical for backup and
recovery, for many of the same reasons we've seen in this thread for not
using large datafiles (i.e. higher granularity of the recovery process
is usually beneficial). Secondly, even if we did only switch once a day
for these databases, that would incur one BIG checkpoint (by big I mean
a LOT of dirty buffers being written to datafiles, all at once). That
means once a day we get a HUGE performance spike, as opposed to many
smaller spikes (a few times per hour). For 24/7 systems, we can't
really tell our users that everything runs great except for that one
45-minute period where you can't do a darn thing because, sorry, we're
checkpointing. But we can tell them that everything runs pretty well
all day long.
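
(If you're curious where a given database sits, a quick count of log
switches per hour out of v$log_history tells you - just a sketch:

SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour_bucket,
       COUNT(*)                               AS switches
FROM   v$log_history
WHERE  first_time > SYSDATE - 7
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY 1;

On the busy systems I described above, that shows a handful of switches
most hours.)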

And I wouldn't understate the frequency of instance crashes. If they
weren't still a problem with modern hardware we wouldn't see such an
interest in RAC. Bugs, hung processes, memory leaks, cpu failures, you
name it - instance failure isn't going away anytime soon. I think
your once-a-day switch idea will be a hard sell in production, but
don't let me stop you. :)

Regards,
Sean

Howard J. Rogers

Jun 29, 2002, 3:25:16 AM

"Sean M" <smck...@earthlink.net> wrote in message
news:3D1D5C0E...@earthlink.net...


Drift on by.....

Well, here comes a statement. True 24x7s are very rare. There's
usually/frequently/often a sluggish window. And yes, I'd be aiming to get my
log switches into there.

File sizes... can't do anything about them, agreed. But I always try and
create 2Gb redo logs and have done with it.

Unless they don't have redundant power supplies and a UPS.

And if they're running on NT 4.0, forget it. Every 20 minutes or so in that
case!

> And I wouldn't understate the frequency of instance crashes. If they
> weren't still a problem with modern hardware we wouldn't see such an
> interest in RAC.

(Psst. Between you and me, we see a lot of interest in RAC because Oracle
wants it that way. All hail the marketing department. Call me a cynic, if
you must).

Rgds
HJR

Sean M

Jun 29, 2002, 3:40:12 AM
"Howard J. Rogers" wrote:
>
> Well, here comes a statement. True 24x7s are very rare. There's
> usually/frequently/often a sluggish window.

Very true.

> And yes, I'd be aiming to get my
> log switches into there.

You could aim for it, but you'd eventually miss. You couldn't hit the
target by simply sizing your redo logs, since activity will vary - some
days you get 50 GB, others 200. So your window would start creeping all
over the 24 hour clock, left to its own devices. You'd have to
institute a job to force a log switch in the slower hours I suppose...
eh. Do-able I guess, but kinda messy - you're back to checkpointing at
a time interval, not a data interval, which is not my cup of tea.



> File sizes... can't do anything about them, agreed. But I always try and
> create 2Gb redo logs and have done with it.
>
> Unless they don't have redundant power supplies and a UPS.
>
> And if they're running on NT 4.0, forget it. Every 20 minutes or so in that
> case!

Yeah, and still cross your fingers, make a daily sacrifice to the
Redmond, WA gods, ...



> > And I wouldn't understate the frequency of instance crashes. If they
> > weren't still a problem with modern hardware we wouldn't see such an
> > interest in RAC.
>
> (Psst. Between you and me, we see a lot of interest in RAC because Oracle
> wants it that way. All hail the marketing department. Call me a cynic, if
> you must).

All hail the power of Oracle Marketing! Keeps us all employed - we
should be sending them cash I suppose...

Goodnight, (or I guess that should be "G'day")
Sean M

Sebastiano Pilla

Jun 29, 2002, 3:21:25 PM
Joe Salmeri <JoeSa...@comcast.net> wrote:

> I don't back up my tablespaces I back up the data they contain. I don't
> restore tablespaces in their entirety I would restore the objects that I
> need to recover.

If that's the case, you should investigate a logical backup
using exp (and its counterpart imp for importing) rather than physical
backups.
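
A sketch of what that can look like with the command-line utilities (the
accounts and file names below are just placeholders):

exp system/manager file=full.dmp log=exp_full.log full=y
imp system/manager file=full.dmp log=imp_sales.log \
    fromuser=scott touser=scott tables=SALES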

Sebastiano Pilla

Paul Brewer

Jun 29, 2002, 4:29:36 PM
"Howard J. Rogers" <d...@hjrdba.com> wrote in message
news:afip5c$vtd$1...@lust.ihug.co.nz...
> <snip>

>
> complete ignorance of basic
> backup and recovery techniques.
>
> Are *you* running a 12Tb database?
>

Howard,

Agree completely. The equine entity in question is entirely teetotal.
Keep up the good work.

Regards,
Paul

Richard Foote

Jun 30, 2002, 8:24:04 PM
Hi Howard,

Just want to clarify a statement you made at this point in the thread
regarding the fact that "*all* prior backups and archives are totally and
utterly worthless" after you open a database with resetlogs. This is not
entirely correct (since I think Oracle 7.3ish).

Although I would agree with you that you really should take a fresh backup
ASAP after opening the database with resetlogs, there is a process by which
you can still fully recover the database whilst using the previous backup.
Although somewhat messy, it can be done.

1) Shutdown the db (not nice, but you've got no choice at this point)
2) Fully backup your existing database (at least the control files but
probably the rest of the db as well. Let's face it, you're having a bad day
here. And don't overwrite your previous backup!)
3) Restore all datafiles and the controlfile from backup (note: the backup
taken prior to the resetlogs).
4) Recover database until change = SCN at the time of the resetlogs (this
is documented in the alert log) using backup controlfile. You have now
recovered the database to the point when the resetlogs command was issued.
5) Restore just the controlfile from the current state of the database (as
backed up in step 2)
6) Now fully recover the database. The current controlfile documents the
fact that archive logs starting at number 1 (and not, say, 1001) now need to
be applied.

You have just fully recovered the database using a backup and redo logs
prior to the resetlogs.
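
In SQL*Plus the key steps look roughly like this (a sketch - the SCN is a
placeholder for the value you read out of your alert log):

-- Step 4, with the pre-resetlogs backup restored:
STARTUP MOUNT;
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CHANGE 123456;
SHUTDOWN IMMEDIATE;

-- Step 5: copy the current controlfile (backed up in step 2) back into
-- place at the OS level. Then step 6:
STARTUP MOUNT;
RECOVER DATABASE;
ALTER DATABASE OPEN;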

Now this is all very messy, as I said, and requires an incomplete recovery
(meaning the db is down) followed by a complete recovery, so definitely a
fresh backup would have meant a far simpler recovery scenario. However, if a
fresh complete cold backup means unacceptable availability issues or the
database problem occurred during your attempted hot backup (i.e. you are
having one hell of a bad day), then this is at least a way out.

Keep up the good work.

Regards

Richard


"Howard J. Rogers" <d...@hjrdba.com> wrote in message

news:afj9ug$guj$1...@lust.ihug.co.nz...

Howard J. Rogers

Jul 1, 2002, 5:00:58 AM

"Richard Foote" <richar...@bigpond.com> wrote in message
news:dcNT8.24722$Hj3....@newsfeeds.bigpond.com...

> Hi Howard,
>
> Just want to clarify a statement you made at this point in the thread
> regarding the fact that "*all* prior backups and archives are totally and
> utterly worthless" after you open a database with resetlogs. This is not
> entirely correct (since I think Oracle 7.3ish).
>

I know, I know. If you search my contributions on this topic in the past at
Google, you will see that I usually add a "without a great deal of effort'
qualifier. 'Recovery Through Resetlogs': "Recovery Despite Resetlogs" is
more accurate. Possible but nasty, as you say.

Regards
HJR

Fernando de Souza Lima

Aug 2, 2002, 10:15:35 PM
On Wed, 26 Jun 2002 15:59:18 GMT, "Joe Salmeri"
<JoeSa...@comcast.net> wrote:

>Here is the server configuration:
>
>Dual P3 Dell Server with 1.5 GB RAM
>Redhat Linux 7.2
>Oracle 8i (8.1.7.0.1)
>Two Oracle databases created
>Oracle TNS Listener configured
>
>Everything appears to be working properly with both database instances,
>however we are unable to create a tablespace file that is larger than 2 GB.
>
>I know that Oracle 8i supports tablespace files that are larger than 2 GB
>because I have setup Oracle 8i installations on Windows 2000 and have
>created > 2gb tablespace files.
>
>From the Redhat website I see that there is a version called Redhat
>Enterprise Edition that says it has largefile support for Oracle 8i, but is
>it possible without getting the Enterprise Edition? If so, what needs to be
>done to provide that support?
>
>Thanks in advance.
>
>Joe


Joe,


I guess the problem is some kind of OS limitation. Search the RED HAT
documentation about large files.

I'll search for this information and get back to you.


Regards,


Fernando Lima
Commit Consulting
===================================
OCP
CCNA
CNE
CDIA



Tom

Aug 4, 2002, 1:10:13 AM

>>
>>Everything appears to be working properly with both database instances,
>>however we are unable to create a tablespace file that is larger than 2
>>GB.
>>

What filesystem are you using for RH? By default RH uses ext2 (<= RH7.1)
and ext3 (> RH7.1). ext2 and ext3 have an innate maximum file size of 2GB.
Support for files > 2GB is planned for the ext2 and ext3 filesystems in the
2.6 Linux kernel. For now either stay at or under 2GB (my choice usually)
or go with a different filesystem type. There are 5 or 6 of them, but I
don't know which have large file support.
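
A quick way to see what you're up against from the shell (the mount point
below is just a placeholder):

mount | grep /u02    # shows the filesystem type under your datafiles
ulimit -f            # any per-process file size limit your shell imposes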

Hope this helps...

-Tom


Liggs

Aug 4, 2002, 2:34:51 PM
Er... what is wrong with having a tablespace with more than one datafile in
it?
Are you planning on a very large database?

[Files more than 2Gb can cause problems on Intel based Windows BTW (in some
cases!) - see Metalink ]

Liggs


"Fernando de Souza Lima" <fl...@infolink.com.br> wrote in message
news:3d4b3bfa...@veloxnews.infolink.com.br...
