
Why are frequent commits not good?


NetComrade

Mar 16, 2001, 11:38:49 AM
I've read somewhere that frequent commits are not necessarily good for
the database, but I can't find the doc. Does anybody recall why?

also, it appears that a SELECT over a db_link requires a commit after
execution.

Our developers are thinking: 'why don't we just commit after every
SELECT executed?' (They think it would be easier to implement.)

Can this potentially lead to problems?

Thanx.
---------------
In case I forgot to mention:
We use Oracle 7.3.4 and 8.1.6 on Solaris 2.6, 2.7 boxes
---------------
Andrey Dmitriev eFax: (978) 383-5892 Daytime: (917) 750-3630
AOL: NetComrade ICQ: 11340726 remove NSPAM to email

andrew_webby at hotmail

Mar 16, 2001, 11:50:34 AM
I must be missing something - why would you commit after a select?

You only need to commit after an insert/update surely?

I wouldn't know why frequent commits aren't necessarily 'good' for a
database, but committing often would mean that a long operation is more
difficult to roll back if need be. If you commit too often, you kind of
negate the features of commit/rollback, I suppose...

"NetComrade" <andre...@bookexchange.net> wrote in message
news:3ab2422b....@news.earthlink.net...

Sybrand Bakker

Mar 16, 2001, 12:53:32 PM
Actually this is quite simple.
A commit terminates a transaction.
A transaction has administrative overhead (the SCN, a timestamp, etc.).
So increasing the number of commits increases the administrative overhead
and increases the number of bytes written to the redo log files.
You should try to determine what your *logical* transactions are, as opposed
to committing as often as possible because you failed to configure your
rollback segments properly (I don't say you did!).
As for committing after a select over a database link, I think you should
provide additional detail on why you think you need this.
As for the idea of committing after every select: this just proves many
developers are lazy and lack sufficient brains.

Regards,

Sybrand Bakker, Oracle DBA

"NetComrade" <andre...@bookexchange.net> wrote in message
news:3ab2422b....@news.earthlink.net...

NetComrade

Mar 16, 2001, 1:52:25 PM
We commit after every insert or update anyway, since we are running in
a web environment and the 'end-user' can stop being active at any second,
or open up a different window and potentially start a new Oracle
session. So I guess the question should be:

Would it be OK to commit after a regular select (will it still incur
all the overheads, since no data has really been committed)?

Thanx

As for committing after a query going over db_link:

http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=74811.1:
When a select from a remote table is issued, a TX lock is taken
out on a local rollback segment. This lock is only released
by issuing a COMMIT, ROLLBACK or logoff.

For example:

select empno,ename from emp@remotedb;

This is documented behaviour in [BUG:262277].

The explanation given is:

"When we start a transaction which is to be executed remotely, since
at that time we are not sure what is going to happen locally while
the query is being executed remotely, a TX lock is taken out on
a local rollback segment even if the query is only a select."

The reason for this is that a remote select is like a two-phase commit
in that it sends out the query over the network and waits for it to
return a successful result. If you check v$transaction during the
transaction, you will see used_ublk show 1 block of rollback being used
during this update, and a TX lock against the rollback segment.
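
This is easy to observe from SQL*Plus (a sketch; emp@remotedb is the
example table from the note, and querying the v$ views requires the
relevant privileges):

```sql
-- Run a distributed query; no DML at all.
select empno, ename from emp@remotedb;

-- Yet a local transaction entry now exists:
select used_ublk, used_urec from v$transaction;

-- ...and it persists until this session ends the transaction.
commit;
```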

Frank

Mar 16, 2001, 1:59:02 PM
> also, it appears that a SELECT over a db_link requires a commit after
> execution.

Hi!

We have used database links a lot (sadly :-), and do not do anything
different with them than for local queries.
One of the grand ideas is that the distribution of the database should be
as transparent to the developer as possible. This means that a developer
may not even know that a table is remote, due to synonyms/views disguising it.

In real life, however, you must think about the performance of the
statements.

I can tip you that:
- for stored procedures there are restrictions when operating over database
links (pragma_restrict...)
- you will not be allowed to perform a commit in a stored procedure on the
remote database.
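
As a sketch of that second restriction (all names here are hypothetical):
transaction control has to stay with the calling site, so the remote
procedure itself must not commit.

```sql
-- Local anonymous block calling a hypothetical remote procedure.
-- The remote procedure must NOT issue its own COMMIT when called
-- as part of this distributed transaction.
begin
  update accounts set balance = balance - 100 where id = 1;
  transfer_pkg.credit@remotedb(2, 100);  -- remote call: no commit inside
  commit;                                -- the local site ends the transaction
end;
/
```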


Frank

mel

Mar 16, 2001, 2:22:02 PM

Further, frequent commits increase the possibility of a 'snapshot too old'
error, since the rollback segments would have been written over more
frequently.

Also, commits are expensive and will increase the time required to complete
inserts or updates. This is even worse for stored procedures.

In article <tb4lro8...@beta-news.demon.nl>, "Sybrand Bakker"

Sybrand Bakker

Mar 16, 2001, 2:32:04 PM
I am aware of that policy of web developers.
It is just plain *stupid*, period.
Imagine a *logical transaction* which consists of inserting into 2 tables,
with subsequent inserts.
Every insert is committed separately.
What would you call your database when the user aborts after the first
insert? *INCONSISTENT!!!*
The *complete* transaction should have been *rolled back*, not only the last
insert!
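
A minimal sketch of that point, with hypothetical orders/order_lines
tables: the inserts form one logical unit of work and share a single
commit, so an abort midway leaves nothing half-recorded.

```sql
-- Hypothetical two-table logical transaction: an order and its lines.
insert into orders (order_id, customer_id)
values (1001, 42);

insert into order_lines (order_id, line_no, item)
values (1001, 1, 'widget');

-- One commit for the whole logical transaction.
commit;

-- If anything fails before the commit, a single ROLLBACK undoes all of it,
-- and the database never shows an order without its lines.
```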

Regards,
Sybrand Bakker, Oracle DBA

"NetComrade" <andre...@bookexchange.net> wrote in message

news:3ab260df....@news.earthlink.net...

Roxl

Mar 17, 2001, 2:53:03 PM

"NetComrade" <andre...@bookexchange.net> wrote in message
news:3ab2422b....@news.earthlink.net...
> I've read somewhere that frequent commits are not necessarily good for
> the database, can't find the doc, does anybody recall why?
>
> also, it appears that a SELECT over a db_link requires a commit after
> execution.
>
> our developers are thinking : 'why don't we just commit after every
> SELECT executed?' (they think it would be easier to implement.
>
> Can this potentially lead to problems?
>
> Thanx.

A COMMIT statement is telling Oracle that information on the change needs to
be written to the log file (ie disk) right away, so by committing more than
you need to - you're potentially making Oracle more I/O bound. In my
opinion though - the important factor here is that the application logic
handling transactions is correct and failure safe. ie if the app. reaches a
point where it should logically commit then it should definitely do so.

(But you may wish to reduce the commit interval for batch runs which occur
during maintenance windows....)
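
One way to see the redo cost mentioned above (a sketch, assuming SELECT
privilege on the v$ views; the statistic name is 'redo size'):

```sql
-- Check redo generated by your own session so far.
select s.value as redo_bytes
from   v$mystat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'redo size';

-- Run the same DML workload with different commit frequencies and
-- re-query: more commits means more redo bytes for the same changes.
```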

Cheers,

Richard Gowan

steve

Mar 16, 2001, 8:33:52 PM

Hopefully it's a select for update?

Jonathan Lewis

Mar 17, 2001, 7:53:43 AM
The point about frequent commits being bad is
usually made with regard to batch updates.

PL/SQL Code which updates one row at a time
and commits is much less efficient than code
which updates all the target rows then commits.
On the other hand, if the big update fails for any
reason, rollback (hence restart) times can be
huge. For this type of operation, it is common
therefore to update in reasonable sized batches
and commit regularly but not extremely frequently.
Obviously, as others indicate, this requires some
thought about the logic of incomplete batch
updates.
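
The batching approach described above can be sketched in PL/SQL (table
and column names are invented for illustration; the batch size is
arbitrary):

```sql
-- Update in reasonable-sized batches: not per row (too much commit
-- overhead), not all at once (huge rollback/restart time on failure).
declare
  cursor c is
    select rowid rid from big_table where processed = 'N';
  v_count pls_integer := 0;
begin
  for r in c loop
    update big_table set processed = 'Y' where rowid = r.rid;
    v_count := v_count + 1;
    if mod(v_count, 10000) = 0 then
      commit;     -- commit regularly, but not extremely frequently
    end if;
  end loop;
  commit;         -- pick up the final partial batch
end;
/
```

Note that fetching across commits can itself provoke 'snapshot too old',
so real restartable code often drives off a key range instead of holding
one cursor open for the whole run.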


Your other point, about distributed SELECTs requiring a commit and
about web applications, has some elements that need consideration.

Distributed SELECTS do start a local transaction.
If you NEVER commit, and stay connected
indefinitely, you will stop your rollback segment
from being recycled - clearly, therefore, your session
must commit or rollback at some point to avoid the
side-effects on rollback segment sizes.

In these 'no action' distributed transactions, the rollback
header and a rollback block are updated, so some redo
is generated, so the commit or rollback does do some
work. (A commit/rollback normally does nothing
if there is nothing to commit or rollback). A rollback
_appears_ to be slightly cheaper than a commit in
these circumstances - but in operational terms,
due to redo wastage, I suspect that there is no difference
in performance.

However, committing after every select is an undesirable
overhead - the cost of starting a new transaction is not trivial.
It is interesting to note that Oracle's web application
server always finishes a dialogue with 'reset package
states' and 'rollback'. If your web app is connecting
and disconnecting all the time, then it will not scale -
so you should probably emulate Oracle's approach,
with a near-permanent connection that 'resets' itself
after each dialogue. This should supply the single 'rollback'
you need to clear the local transaction. Since Web apps tend
to be short snappy dialogues, this will probably be
sufficient to avoid any side effects on size of rollbacks.


--
Jonathan Lewis
Yet another Oracle-related web site: http://www.jlcomp.demon.co.uk

Practical Oracle 8i: Building Efficient Databases
Publishers: Addison-Wesley

Reviews at: http://www.jlcomp.demon.co.uk/book_rev.html

NetComrade wrote in message <3ab2422b....@news.earthlink.net>...

Frank

Mar 17, 2001, 9:28:48 AM
> Distributed SELECTS do start a local transaction.
> If you NEVER commit, and stay connected
> indefinitely, you will stop your rollback segment
> from being .....

Hi!

I don't speak these lock flags fluently, so I wonder: what exactly is locked
(see NetComrade's posting) when doing this query?

select empno,ename from emp@remotedb;

And does this raise a lock locally, so that no other session on the local
database can query the same remote table until commit/rollback?

Frank


Nuno Souto

Mar 17, 2001, 10:11:57 AM
On Sat, 17 Mar 2001 15:28:48 +0100, "Frank" <fra...@frisurf.no>
wrote:

>I don't speak these lock flags fluently, so I wonder: what exactly is locked
>(see NetComrade's posting) when doing this query?
>
>select empno,ename from emp@remotedb;
>
>And does this raise a lock locally, so that no other session on the local
>database can query the same remote table until commit/rollback?
>

Rule #1 when dealing with ORACLE:
A SELECT without the FOR UPDATE clause will *NEVER* lock a row, table,
database, other query sessions, whatever.

Cheers
Nuno Souto
nso...@bigpond.net.au.nospam
http://www.users.bigpond.net.au/the_Den/index.html

Jonathan Lewis

Mar 17, 2001, 11:33:12 AM

No data object is locked.

If you 'select * from v$lock' you will find that
the local database has taken out a TX lock
in mode 6, and a DX lock in mode 4.

The requirement for a commit or rollback has nothing to do with the
'select' per se, or with releasing data for other processes; it is
purely to release the transaction entry and the tiny bit of space used
in the rollback segment.
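
A sketch of that check (assuming SELECT privilege on the v$ views;
v$mystat identifies your own session):

```sql
-- After: select empno, ename from emp@remotedb;
select type, lmode
from   v$lock
where  sid = (select distinct sid from v$mystat)
and    type in ('TX', 'DX');

-- Expect a TX lock in mode 6 and a DX lock in mode 4,
-- both released by COMMIT or ROLLBACK.
```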


--
Jonathan Lewis
Yet another Oracle-related web site: http://www.jlcomp.demon.co.uk

Practical Oracle 8i: Building Efficient Databases
Publishers: Addison-Wesley

Reviews at: http://www.jlcomp.demon.co.uk/book_rev.html

Frank wrote in message ...

Liz Reen

Mar 20, 2001, 4:55:15 PM
In article <tb4tjht...@beta-news.demon.nl>, pos...@sybrandb.demon.nl
says...

> I am aware of that policy of web developers.
> It is just plain *stupid*, period.
> Imagine a *logical transaction*, which consists of inserting into 2 tables,
> with subsequent inserts.
> Every insert is committed separately.
> What would you call your database when the user aborts after the first
> insert? *INCONSISTENT!!!*
> The *complete* transaction should have been *rolled back*, not only the last
> insert!
>
> Regards,
> Sybrand Bakker, Oracle DBA

I have got to agree with Sybrand. I just finished clearing out a million
records which didn't have the second insert committed. These records
were a major drag on the database. Does it really make sense to record
who you are sending stuff to, if you don't record what they want?

Liz
