
Informix vs. Sybase vs. Oracle vs. (gasp) MS SQL Server


Chris Jenkins
Nov 21, 1997

Hi CDI peeps,

Pablo Sanchez wrote:

> In article <3357D8...@informix.com>, Dan Crowley <dcro...@informix.com> writes:

> I don't agree with the above justification for 1 gig data cache. On a machine with 4
> gig memory, we only get to use 25% of it?

Well, okay, the buffer cache is limited to 1 GB in 32-bit ports. But we can use some
of the extra memory for the virtual segment (thread stacks, sort space, dictionary/stored
procedure cache, etc.), so it's not quite the limitation you make out. Also, the 64-bit
ports don't have this limitation anyway.

> > >
> > > Sybase: o based on platform, only 2/4K read/writes,
> > > although on the log there are group writes
> > > Informix: o variable I/O read/writes: 2/4/8/16K
>
> > Informix does 16K I/O when doing light scans. Also, checkpoint writes
> > are sorted for better performance
>
> Great! A definitive answer. Thx.

In fact, the I/O size for light scans is not fixed but is calculated dynamically for
each scan. Also, checkpoint writes are done in units of 8 buffers, which makes them
16K (2K page size) or 32K (4K page size) writes. Also, for read-ahead and other
sequential I/O we will group I/O for contiguous pages and do a larger physical read
if possible.
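The grouping of contiguous pages described above can be sketched roughly as follows. This is an illustration of the idea only, not Informix's actual I/O code; the function name `coalesce_pages` is mine, and the 8-page cap is an assumption borrowed from the 8-buffer checkpoint unit mentioned above.

```python
# Illustrative sketch (not Informix source): coalesce contiguous page
# numbers into runs so each run can be issued as one large physical read.
def coalesce_pages(pages, max_run=8):
    """Group sorted page numbers into (start, count) runs of contiguous
    pages, capped at max_run pages per physical I/O."""
    runs = []
    for page in sorted(pages):
        if runs and page == runs[-1][0] + runs[-1][1] and runs[-1][1] < max_run:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            runs.append((page, 1))
    return runs

# Four contiguous 2K pages become a single 8K read; a gap starts a new run.
print(coalesce_pages([10, 11, 12, 13, 20, 21, 7]))  # [(7, 1), (10, 4), (20, 2)]
```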

> > (keeps disk heads from randomly
> > seeking - I'm not sure if Sybase does this).
>
> It doesn't. This is a really neat feature of Informix, simple and
> effective.
>
> > Finally, are there any controllers that make use of I/O > 16k?

> snip...

> > However, with Sybase you MUST log. You have no options (less knobs if
> > you will).
> > With Informix, for each database you can choose to do
> > unbuffered, buffered, or no logging. No logging is great for Data
> > Warehouse situations where you load once a day, week, or month, and the
> > rest of the time you're only doing reads.
>
> You raise some good points here. This is a very nice feature of
> Informix. I think that "no logging" is a bit of a misnomer in that
> the RDBMS must do some logging in order to conform to ACID.

An unlogged database in Informix does not support transactions and does not log changes
to data. Certain DDL and internal operations are logged, however.

> >
> > Another thing to mention is the parallel use of threads - particularly
> > useful for data warehousing, parallel index builds, or any time you want
> > to reduce the time to execute a time consuming SQL statement.
> >
>
> Most definitely. Although I find it a bit strange that the
> connections must specify the percentage to allocate. Sounds
> klunky... however the feature is great. Sybase 11.5 will
> have... but that's future...

You don't need to specify the percentage for parallel index builds. It is essential for
heavy DSS queries to have some resource regulation to prevent (if desired) any single
query hogging the entire machine (unless that is what you want, of course). You have
the choice to prioritise individual queries.

> > o Only Page Level Locking o Page or Row Level Locking
> >
> > Advantage Informix. This is a huge weakness with Sybase.
>
> No, this is a great marketing whitewash. I won't get into this
> religious war.

Sorry, but it is a *big* limitation. Row level locking provides much greater
concurrency, even for a well designed application database.

> This is true, this is nice defensive RDBMS for a bad application. A
> good application will increase throughput by reducing latency.
> Reducing latency is accomplished by decreasing the duration of locks
> and internal processing. This is why the row level versus page
> level religious war is silly. Vendors try to play this up like it's
> some great advantage. But it's not. Informix allows the user the
> *option* to have row level locking. If it was such a great thing
> with no overhead, why make it an option and not have it built in?
> Hmmmm....

The only reason that the default is page level locking is historical. It used to be
that way, so we haven't changed the default, to maintain backwards compatibility.
We always encourage use of row level locking these days.

Pablo Sanchez
Nov 21, 1997

[ this message is so old ... ]

>>>>> "Chris" == Chris Jenkins <chr...@informix.com> writes:
Chris>
Chris>
Chris> Sorry, but it is a *big* limitation. Row level locking provides much greater
Chris> concurrency, even for a well designed application database.
Chris>

Think again. Look at the current -C values for Sybase and
look at what Informix is generating. Sounds like you fell
for that one too...

Chris> The only reason that the default is page level locking is historical. It used
Chris> to be that way, so we haven't changed the default, to maintain backwards
Chris> compatibility. We always encourage use of row level locking these days.
Chris>

I don't think so... common sense dictates that there be
overhead via your lock manager. More locks, more overhead.
Anyway, this is a futile discussion because it's been hashed
out over and over and over and over again...
--
Pablo Sanchez | Ph # (650) 933.3812 Fax # (650) 933.2821
pa...@sgi.com | Pg # (800) 930.5635 -or- pab...@pager.sgi.com
-------------------------------------------------------------------------------
I am accountable for my actions. http://reality.sgi.com/pablo [ /Sybase_FAQ ]

Perry Dillard
Nov 24, 1997

Obviously Pablo has never written an OLTP order entry system where
multiple users are constantly updating an inventory table and
committing quantities to customers on the fly. If he had, he would know
that row level locking is absolutely essential and that page level
locking will not let you see the real-time, up-to-the-instant available
quantities for a given inventory item.

I'll admit that page level locking has its place, but row level is
absolutely essential. And I don't believe anyone has tried to argue
that row level locking carries no overhead. Of course it has more
overhead than page locking. But it mostly depends on how many rows
are actually being locked. If you keep one row locked in a header
table during an update transaction, you aren't incurring a significant
amount of overhead compared to a page lock. And you get the added
benefit that someone else can be modifying the record right next to
you since you don't have a whole page of data locked. This is quite
critical when you have a multi-user application.

-perry

Pablo Sanchez
Nov 24, 1997

>>>>> "Perry" == Perry Dillard <perry....@iname.com> writes:
Perry>
Perry> [ The old row level versus page level crud deleted ]

Perry,

All I'm saying is that the hype you have fallen for is
really not there. Most problems with locking are centered
around the application not the RDBMS. If you have a finely
tuned application in (gasp!) any RDBMS, the application will
do well. Take a look at the TPC-C's (which is a
tuned application) and you'll see that the current numbers
for Informix are something like 24K and Sybase it's
something like 34K. Sybase is doing page level locking and
Informix is not (in certain areas).

I'm certain that a new -C value for Informix will come out
topping even Oracle's 50-odd K but that's not the point.
The point is:

1) Rarely is there an application that requires as much
throughput as a -C
2) Row level versus page level locking is all marketing hype
crap. If you want to believe, go for it... but look for
the problems in your application, not at the RDBMS

Ultimately, people believe what they want to believe... this
argument is so old... yawn.

ps: I've written quite a few apps and have DBA'd for
several years.

pps: I do like Oracle's Server Process scheme. I can bet
one can push the disks really hard with all them
readers.

Cosmo Lee
Nov 25, 1997


Pablo Sanchez wrote:

> 2) Row level versus page level locking is all marketing hype

> crap. If you want to believe, go for it... but look for
> the problems in your application, not at the RDBMS
>
> Ultimately, people believe what they want to believe... this
> argument is so old... yawn.
>

You really are quite wrong on this issue. There are applications where anything
other than row level locking simply won't do. It's not all hype. The fact that
you are making such a broad, sweeping general statement should clue you in to the
fallacy of your own position.

--
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
ADDRESS ALTERED TO FOIL SPAMMERS: Remove "*NO-SPAM*" to reply.

Cosmo Lee Multi-User Computer Systems Brooklyn, NY

"JUST SAY 'NO' TO SPAM"
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

Anthony Mandic
Nov 25, 1997

Perry Dillard wrote:

> Obviously Pablo has never written an OLTP order entry system where
> multiple users are constantly updating an inventory table and
> committing quantities to customers on the fly. If he had, he would know
> that row level locking is absolutely essential and that page level
> locking will not let you see the real-time, up-to-the-instant available
> quantities for a given inventory item.

Nonsense. I've worked on the development of something similar
with Sybase. Regardless of whether the locking is row level
or page level, you won't ordinarily see real-time, up-to-the-instant
data until an open transaction has been committed. How can you
see what doesn't yet exist? Row level locking doesn't change this fact.

> I'll admit that page level locking has its place, but row level is
> absolutely essential.

Since when? The level of locking should be transparent to
well written applications. It only becomes an issue with
poorly written ones.

> And I don't believe anyone has tried to argue
> that row level locking carries no overhead. Of course it has more
> overhead than page locking. But it mostly depends on how many rows
> are actually being locked. If you keep one row locked in a header
> table during an update transaction, you aren't incurring a significant
> amount of overhead compared to a page lock. And you get the added
> benefit that someone else can be modifying the record right next to
> you since you don't have a whole page of data locked. This is quite
> critical when you have a multi-user application.

One lock is one lock. It's not critical in a multi-user database
unless the application being used is badly written.

-am

Anthony Mandic
Nov 25, 1997

Cosmo Lee wrote:
>
> Pablo Sanchez wrote:
>
> > 2) Row level versus page level locking is all marketing hype
>
> > crap. If you want to believe, go for it... but look for
> > the problems in your application, not at the RDBMS
> >
> > Ultimately, people believe what they want to believe... this
> > argument is so old... yawn.
> >
>
> You really are quite wrong on this issue. There are applications where anything
> other than row level locking simply won't do. It's not all hype.

Since when? Whenever I've come across third party applications that
depend on row level locking and have been ported to Sybase, the
inevitable problems that ensue are simply because the applications are
so badly designed as to be laughable. I'm struggling with one such in
particular at the moment, called Aurion. But it might as well be
SmartStream, SAP or PeopleSoft or whatever. They claim they need row
level locking because they all stupidly lock selected data for the
duration of a transaction that may or may not ultimately do an update
or insert. The changes to correct this sort of nonsense are trivial.
Why they won't do the change is obvious: these programs cater to the
lowest common denominator in database technology. Thus they don't have
a real grasp of proper client/server methodology.

> The fact that
> you are making such a broad, sweeping general statement should clue you in to the
> fallacy of your own position.

There's no fallacy in Pablo's position. More correctly, you are
suffering from a phallusy because you can't see the wood for the
trees.

-am

Pablo Sanchez
Nov 25, 1997

>>>>> "Cosmo" == Cosmo Lee writes:
Cosmo>
Cosmo> There are applications where anything other than row
Cosmo> level locking simply won't do.

Sorry, you're wrong here...

Cosmo> The fact that you are making such a broad, sweeping
Cosmo> general statement should clue you in to the
Cosmo> fallacy of your own position.

The proof of the pudding is in the eating (as they say). Rather than
venting hot air that "you're wrong and I'm right", call my bluff (if
you think so).

Show me The Application and I'll redesign it. I'll need to know:

1) The ER diagram and
2) The business problem that we're solving

There are *many* ways to solve an application so I'd love to
see The Application (you claim) that:

There are applications where anything other than row
level locking simply won't do.

Paul Brown
Nov 25, 1997

Anthony Mandic (no_s...@agd.nsw.gov.au) wrote:

: Cosmo Lee wrote:
: >
: > Pablo Sanchez wrote:
: >
: > > 2) Row level versus page level locking is all marketing hype
: >
: > > crap. If you want to believe, go for it... but look for
: > > the problems in your application, not at the RDBMS

My $0.02.

To quote from Jim Gray, (_Transaction_Processing_:_Concepts_and_
Techniques_ pp. 420-21);

"Many people have learned the following adage the hard way: *page
granularity [of locks] is fine, unless you need finer granularity.*
The virtue of page granularity is that it is very easy to implement
[for the DBMS vendor], it works well in most cases, and it can give
phantom protection... But page-locking systems have serious problems
with hotspots: if all the popular records fit onto a single page, then
page-granularity locking can create hotspots." (italics his)

He then goes on to list a series of examples of hot-spots.

I think the point is that for 95% of the problems out there,
page-level granularity is fine. However, there are cases where
row-level granularity is useful, even important. Further, designing
effective and efficient schemas to support page locking is a task
beyond a lot of developers. Row-locks are a cheap and effective
answer to their problems.

KR

Pb

Snorri Bergmann
Nov 25, 1997

Pablo Sanchez wrote:

[Stuff deleted]

> 1) The ER diagram and
> 2) The business problem that we're solving
>
> There are *many* ways to solve an application so I'd love to
> see The Application (you claim) that:
>
> There are applications where anything other than row
> level locking simply won't do.

Ok. Here is an easy one. It only involves 2 tables (master and detail).
This is of course very simplified and I have not checked the syntax :-)

CREATE TABLE master (id INT PRIMARY KEY,
                     name VARCHAR(31),
                     address VARCHAR(21),
                     amount DECIMAL(16,4));

CREATE INDEX x1 ON master (name);
CREATE INDEX x2 ON master (address);

CREATE TABLE detail (id INT REFERENCES master,
                     line_no INT,
                     item_no INT,
                     descr VARCHAR(100),
                     cost DECIMAL(10,4));

ALTER TABLE detail ADD PRIMARY KEY (id, line_no);

Now what the app does is allow users to select a record from the
master table, get all child records from the detail table, and
add/delete/modify those records as needed. After the user exits, it
updates the master table by selecting the sum of cost from the detail
table. The SQL statements involved are something like this (assume
everything is under transaction control):

-- We need to lock the master record so nobody else will update it
-- while we are.

SELECT * FROM master WHERE id = ? FOR UPDATE;

-- No need to lock the detail recs 'cause the master rec is locked.

SELECT * FROM detail WHERE id = ? ORDER BY line_no;

-- After the user exits we update all detail recs, probably by deleting
-- them first and then inserting whatever the user has typed in.

-- Then we'll need to update the master table and release the lock.

UPDATE master SET amount = (SELECT sum(cost) FROM detail WHERE id = ?)
 WHERE id = ?;

COMMIT;

If you have row level locking, the first select stmt will lock one row.
As the primary key is 4 bytes, you can potentially put 400-500 keys on
a page (if you have 2K pages). In a page lock system (like Sybase and
MS SQL) the probability of a user trying to access one of the ~499 key
values another user has locked is pretty high.
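As a back-of-envelope check of those numbers (a sketch only: the 4-byte entry ignores real per-key index overhead, and the user/key counts are illustrative, not from the post):

```python
# With a 2K page and a 4-byte key, one page-level lock shadows hundreds
# of keys, so the chance a second user collides with it grows quickly.
page_size = 2048          # bytes per index page (assumed)
key_entry = 4             # bytes per key entry, ignoring per-entry overhead
keys_per_page = page_size // key_entry
print(keys_per_page)      # -> 512 keys covered by a single page lock

def collision_prob(users, total_keys, keys_per_page):
    """Birthday-style estimate: probability that, among `users` each
    touching one uniformly random key, two land on the same page."""
    pages = total_keys // keys_per_page
    p_no_collision = 1.0
    for i in range(users):
        p_no_collision *= (pages - i) / pages
    return 1.0 - p_no_collision

# 20 users over 100,000 keys (~195 pages): better than even odds of a
# page collision, even though no two users want the same row.
print(collision_prob(20, 100_000, keys_per_page))
```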

Page lock workarounds could be:

1) Make the primary key >1 Kb, so you would only have one key per page.
(Disks are cheap, right?)

2) Use optimistic locking. (So users who have been typing in for hours
get the message: "Somebody else has modified this rec while you were
working. Please try again.")

3) Switch to Informix :-)
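For what it's worth, workaround 2 can be sketched roughly like this, using Python's sqlite3 purely as a stand-in engine. The `version` column is my assumption, not part of the schema above: each writer re-checks the version it read, and a stale writer's UPDATE matches no row and must retry.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE master (id INTEGER PRIMARY KEY,"
            " amount REAL, version INTEGER)")
con.execute("INSERT INTO master VALUES (1, 100.0, 1)")

def optimistic_update(con, row_id, new_amount, seen_version):
    """Apply the update only if nobody changed the row since we read it."""
    cur = con.execute(
        "UPDATE master SET amount = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_amount, row_id, seen_version))
    con.commit()
    return cur.rowcount == 1   # False: someone else got there first

# Users A and B both read version 1; A commits first, B is rejected.
print(optimistic_update(con, 1, 150.0, 1))   # True
print(optimistic_update(con, 1, 175.0, 1))   # False: stale version
```

No lock is held while the user types; the cost is that the loser has to re-fetch and retry, which is exactly Snorri's complaint about this workaround.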

Take care,
-Snorri

--
Snorri Bergmann | Mail: sno...@strengur.is
Strengur Consulting Engineers | WWW: http://www.strengur.is/
Armuli 7 | Phone: +354 550 9000 (9007 direct)
108 Reykjavik Iceland | Telefax: +354 550 9010

Joel Garry
Nov 25, 1997

In article <347AA2...@agd.nsw.gov.au>,

Anthony Mandic <no_s...@agd.nsw.gov.au> wrote:
>Cosmo Lee wrote:
>>
>> Pablo Sanchez wrote:
>>
>> > 2) Row level versus page level locking is all marketing hype
>>
>> > crap. If you want to believe, go for it... but look for
>> > the problems in your application, not at the RDBMS
>> >
>> > Ultimately, people believe what they want to believe... this
>> > argument is so old... yawn.
>> >
>>
>> You're really are quite wrong on this issue. There are applications where anything
>> other than row level locking simply won't do. It's not all hype.
>
> Since when? Whenever I've come across third party applications that
> depend on row level locking and have been ported to Sybase, the
> inevitable problems that ensue are simply because the applications
> are so badly designed as to be laughable. I'm struggling with one
> such in particular at the moment, called Aurion. But it might as well
> be SmartStream, SAP or PeopleSoft or whatever. They claim they need
> row level locking because they all stupidly lock selected data for
> the duration of a transaction that may or may not ultimately do an
> update or insert. The changes to correct this sort of nonsense are
> trivial. Why they won't do the change is obvious: these programs
> cater to the lowest common denominator in database technology. Thus
> they don't have a real grasp of proper client/server methodology.

I agree with the lcd problem, but the changes are not trivial. Can you say
Repeatable Read? It's like changing the newsgroup comp.databases.oracle to
a hierarchy - the change is trivial, but just try to implement it completely.
Coming up on a year, and people still post here.

>
>> The fact that
>> you are making such a broad, sweeping general statement should clue you in to the
>> fallacy of your own position.
>

> There's no fallacy in Pablo's position. More correctly, you are
> suffering from a phallusy because you can't see the wood for the
> trees.
>
>-am


--
These opinions are my own and not necessarily those of Information Quest
jga...@eiq.com http://www.informationquest.com
http://ourworld.compuserve.com/homepages/joel_garry
"See your DBA?" I AM the @#%*& DBA!

Joel Garry
Nov 25, 1997

In article <347AA0...@agd.nsw.gov.au>,

Anthony Mandic <no_s...@agd.nsw.gov.au> wrote:
>Perry Dillard wrote:
>
>> Obviously Pablo has never written an OLTP order entry system where
>> multiple users are constantly updating an inventory table and
>> committing quantities to customers on the fly. If he had, he would know
>> that row level locking is absolutely essential and that page level
>> locking will not let you see the real-time, up-to-the-instant available
>> quantities for a given inventory item.
>
> Nonsense. I've worked on the development of something similar
> with Sybase. Regardless of whether the locking is row level
> or page level, you won't ordinarily see real-time, up-to-the-instant
> data until an open transaction has been committed. How can you
> see what doesn't yet exist? Row level locking doesn't change this fact.
>
>> I'll admit that page level locking has its place, but row level is
>> absolutely essential.
>
> Since when? The level of locking should be transparent to
> well written applications. It only becomes an issue with
> poorly written ones.

No matter how well written the application is, if the page is locked by
someone accessing a different row that happens to be on the page where the
row you want is, you won't get it. Telling people to make their records
bigger so there is only one on a page is a pretty hard sell. Convincing
programmers to write correctly is like herding cats. Good luck.

>
>> And I don't believe anyone has tried to argue
>> that row level locking carries no overhead. Of course it has more
>> overhead than page locking. But it mostly depends on how many rows
>> are actually being locked. If you keep one row locked in a header
>> table during an update transaction, you aren't incurring a significant
>> amount of overhead compared to a page lock. And you get the added
>> benefit that someone else can be modifying the record right next to
>> you since you don't have a whole page of data locked. This is quite
>> critical when you have a multi-user application.
>
> One lock is one lock. It's not critical in a multi-user database
> unless the application being used is badly written.
>

Matt
Nov 25, 1997

I am new to this issue, coming from an x-base environment, where
row-level locking is standard. There seems to be quite a debate on
row-vs-page level locking, and admittedly I am a bit confused.

I have an application accessing several tables, but one of them is
key. Currently, when a user is accessing a particular dataset, the
current record in this table is locked, and no other user can access
this record until the first user is done (perhaps poor practice in the
SQL world, but it keeps our users from stepping on each other).

In a page level locking scheme this may cause problems, since two
users trying to access two adjacent records contained in the same
page WILL occur often.

Can you explain how I can overcome this issue in a page-locking
engine?


Chris Curvey
Nov 25, 1997

That depends. If you have a small number of master records, then the
probability of a lock collision is high, but then you *can* extend the
key to be 1K, since you don't need that much disk. If you have a large
number of master records, then you can control the probability of a
lock collision via a clustered index.

> Page lock workarounds could be:
>
> 1) Make the primary key >1Kb, so you would only have one key per page.
> (Disks are cheap, right?)
>
> 2) Use optimistic locking. (So users who have been typing in for hours
> get the message: Somebody else has modified this rec while you were
> working. Please try again).

Okay, but under row level locking in this case, nobody would be able to
view the records in question while the other user spent hours and hours
typing.

Greg
Nov 25, 1997

Chris Curvey wrote:
>
> Snorri Bergmann wrote:
> >
snip.....

Well gang, let me approach this from a different angle.

Yes, with enough finagling, sharp application programmers (is this an
oxymoron?...he says...ducking for cover...) can reduce locking
contention within pages, perhaps to the point of tolerability in a large
OLTP system.

It's an expensive solution to a modest problem.

Row level is much cheaper and easier. That's why it gets done.

Same reason practically noone writes assembler anymore. Works great if
you futz with it enough. No one can afford to futz with it enough
though. It's just not economic to do so. Unfortunately for Sybase,
they're learning this the hard way.

Greg

Anthony Mandic
Nov 26, 1997

Snorri Bergmann wrote:

> Ok. Here is an easy one. Only involves 2 tables (master and detail).
> This is of course very simplified and I have not checked the syntax :-)

[DDL definitions snipped]

> Now what the app does is allow users to select a record from the
> master table, get all child records from the detail table, and
> add/delete/modify those records as needed. After the user exits, it
> updates the master table by selecting the sum of cost from the detail
> table. The SQL statements involved are something like this (assume
> everything is under transaction control):
>
> -- We need to lock the master record so nobody else will update it
> -- while we are.

Here is the crux of your problem: "We need to lock."
It's this mindset that doesn't allow you to see a better
solution, so you let your DB server do it for you.

> SELECT * FROM master WHERE id = ? FOR UPDATE;
>
> -- No need to lock the detail recs 'cause the master rec is locked.

Yes, no need to lock. It's how you lock master that's the
problem. I'll explain below.

> SELECT * FROM detail WHERE id = ? ORDER BY line_no;
>
> -- After the user exits we update all detail recs, probably by deleting
> them first, and then inserting whatever the user has typed in.

What an overhead: a delete followed by an insert. An in-situ
update is far more efficient. Use it wherever possible.

> -- Then we'll need to update the master table and release the lock.

You can use a non-server generated lock instead. Then it
just becomes a matter of updating the master table. The
action of this update is also the action of releasing the
lock. No other process is locked out of reading the record
but others can't update or delete while that record is marked
as being in use (provided that they observe the rules).

> If you have row level locking, the first select stmt will lock one row.
> As the primary key is 4 bytes, you can potentially put 400-500 keys on
> a page (if you have 2K pages). In a page lock system (like Sybase and
> MS SQL) the probability of a user trying to access one of the ~499 key
> values another user has locked is pretty high.

That's the reason why server-generated locks should be fast,
and not held for an indefinite period. The initial select
should NOT be part of the transaction. This rule should
apply to all locking methods, including row level.

> Page lock workarounds could be:
>
> 1) Make the primary key >1Kb, so you would only have one key per page.
> (Disks are cheap, right?)

Wrong approach. This is the BFI method favoured by those who
just can't grasp the concepts.

> 2) Use optimistic locking. (So users who have been typing in for hours
> get the message: Somebody else has modified this rec while you were
> working. Please try again).

Same wrong approach.


>
> 3) Swich to Informix :-)

I feel tempted to state the same here, but I'd rather ask why
some of Informix's top programmers have jumped ship and joined
Oracle (according to what I've read recently in the trade
papers).

-am

Anthony Mandic
Nov 26, 1997

Joel Garry wrote:

> Anthony Mandic <no_s...@agd.nsw.gov.au> wrote:
> >Perry Dillard wrote:
> >
> >> I'll admit that page level locking has its place, but row level is
> >> absolutely essential.
> >
> > Since when? The level of locking should be transparent to
> > well written applications. It only becomes an issue with
> > poorly written ones.
>
> No matter how well written the application is, if the page is locked by
> someone accessing a different row that happens to be on the page where the
> row you want is, you won't get it. Telling people to make their records
> bigger so there is only one on a page is a pretty hard sell. Convincing
> programmers to write correctly is like herding cats. Good luck.

It's the responsibility of the programmers regardless. If they
were inept and incompetent they'd still produce crap, and end
users would be quick to notice. Waiting for a lock is the same
regardless of granularity. That's why locks should be fast and
only within the scope of the real transaction. Putting a lock
around a select that may or may not eventually create a
transaction is just plain dumb.

-am

Anthony Mandic
Nov 26, 1997

Joel Garry wrote:

> Anthony Mandic <no_s...@agd.nsw.gov.au> wrote:
>
> > They claim they need row level locking because they all stupidly
> > lock selected data for the duration of a transaction that may or may
> > not ultimately do an update or insert. The changes to correct this sort
> > of nonsense are trivial. Why they won't do the change is obvious. These
> > programs cater to the lowest common denominator in database technology.
> > Thus they don't have a real grasp of proper client/server methodology.
>
> I agree with the lcd problem, but the changes are not trivial. Can you say
> Repeatable Read? It's like changing the newsgroup comp.databases.oracle to
> a hierarchy - the change is trivial, but just try to implement it completely.
> Coming up on a year, and people still post here.

This depends on where you're reading and posting from. :-)
The solution that works for all cases (the lcd) is quite simple.
The change may or may not be trivial depending on the complexity
of the product. By complexity, I don't mean the schema design
but the design of the programs that work with it. If they
were well designed then the changes should be trivial.

-am

Anthony Mandic
Nov 26, 1997

Matt wrote:

Firstly, don't create a lock on the initial select. It's
a poor practice (especially when trying to port apps from
row to page locking databases). There are a few alternatives
you can use instead. Two of the simplest are:

1) Use a field in a key table to act as a marker for a
user's intentions. The field can default to null, zero
or whatever when it's not being accessed or is only
being read. When it's accessed with the intention of
a later transactional change, flag the field with
an identifying value. Avoid boolean values; use
something like a user's identity value. This way
others can run a query to see who's accessing the
data. Only produce locks when the actual updates/
inserts happen, and only on the appropriate tables.
Reset this field back to its default value when done.
All applications that access the same data must honour
the set flag.

Drawbacks of this method under Sybase: apps that ignore
or don't honour the set flag may cause data inconsistency.
You shouldn't have an index on this field (for performance
reasons). You'd need to manually clear the field if an
app crashes, and check the related data (it should be OK
if any data manipulation was done solely within the
scope of one transaction). You can improve the granularity
of this method by changing the field's value just
prior to any manipulation, so you can check at what
point things were at when an error occurred.

2) Run a "lock" table. By this I mean a separate table
that marks that a user is processing a specific object.
Record the key of the object here, the user's id, the date
and time, and anything else you feel is appropriate. This
way you can tell who's accessing what and for how long.
All apps have to access objects via this lock table
(unless they are only reading objects to produce summaries
etc.). This has the advantage that in the unlikely event
of a server crash, you can easily scan this table to
check what was being processed (method 1 is more difficult
if the key table is very large).

Drawbacks: you could manually remove one of these 'locks'.
You'll need to tune this 'lock' table under Sybase in a
highly contentious environment.
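A rough sketch of method 2, with hypothetical table and column names (applock, obj_key, owner) and Python's sqlite3 standing in for the server. The primary key on the object's key makes a second claim on the same object fail cleanly:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE applock (
    obj_key  INTEGER PRIMARY KEY,   -- key of the object being edited
    owner    TEXT NOT NULL,         -- who holds the advisory lock
    taken_at TEXT DEFAULT CURRENT_TIMESTAMP)""")

def claim(con, obj_key, owner):
    """Record that owner is working on obj_key; False if already claimed."""
    try:
        con.execute("INSERT INTO applock (obj_key, owner) VALUES (?, ?)",
                    (obj_key, owner))
        con.commit()
        return True
    except sqlite3.IntegrityError:   # duplicate obj_key: someone holds it
        return False

def release(con, obj_key, owner):
    con.execute("DELETE FROM applock WHERE obj_key = ? AND owner = ?",
                (obj_key, owner))
    con.commit()

print(claim(con, 42, "perry"))   # True: object 42 is now marked in use
print(claim(con, 42, "cosmo"))   # False: a query of applock shows who has it
release(con, 42, "perry")
print(claim(con, 42, "cosmo"))   # True once the first claim is released
```

No server lock is held while the object is being edited; the obvious drawback, as noted above, is that every app must go through this table voluntarily.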

-am

Snorri Bergmann
Nov 26, 1997

Chris Curvey wrote:
[Snipped]


> > 2) Use optimistic locking. (So users who have been typing in for hours
> > get the message: Somebody else has modified this rec while you were
> > working. Please try again).
>

> Okay, but under row level locking in this case, nobody would be able to
> view the records in question while the other user spent hours and hours
> typing.
>

This is not correct. In Informix everyone can read the record from the
master table, but no one can select it for update. It's only locked
exclusively between UPDATE and COMMIT.
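For contrast, the optimistic scheme quoted above is typically implemented with a version (or timestamp) column checked in the UPDATE itself. A sketch with invented table and column names, using sqlite3 in place of a real server:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE master (id INTEGER PRIMARY KEY,"
             " cost REAL, version INTEGER)")
conn.execute("INSERT INTO master VALUES (1, 10.0, 1)")
conn.commit()

def optimistic_update(conn, row_id, new_cost, seen_version):
    # Succeeds only if nobody bumped the version since we read the row.
    cur = conn.execute(
        "UPDATE master SET cost = ?, version = version + 1 "
        "WHERE id = ? AND version = ?", (new_cost, row_id, seen_version))
    conn.commit()
    return cur.rowcount == 1

cost, ver = conn.execute(
    "SELECT cost, version FROM master WHERE id = 1").fetchone()
assert optimistic_update(conn, 1, 20.0, ver)      # another user sneaks in
assert not optimistic_update(conn, 1, 30.0, ver)  # we get "please try again"
```

No lock is held while the user types; the cost is that a slow user's work can be rejected at the end.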

Jim Smith

Nov 26, 1997

Anthony Mandic wrote:
>
> Perry Dillard wrote:
>

> > I'll admit that page level locking has its place, but row level is
> > absolutely essential.

I wouldn't go as far as essential, but it certainly helps. I would guess that most applications do not have serious
concurrency problems caused by locking, but for those that do, the fact that a single-row update locks one row as against many
rows must help.

I can't speak about Informix, but the Sybase locking model, where readers block writers, is probably more of a constraint
(compared to Oracle) than the page/row difference.


>
> Since when? The level of locking should be transparent to
> well written applications. It only becomes an issue with
> poorly written ones.

This is the techie answer which frequently comes up with Sybase. An alternative reading is that you have to code around
limitations. You have to do that with any system of course (except for my new application development tool Magic(tm)), but it
seems to come up more often with Sybase.

Either way, if you always have to employ top-flight programmers, then that increases the cost of ownership.

Jim Smith

Snorri Bergmann

Nov 26, 1997

Anthony Mandic wrote:
[Snip]

> Here is the crux of your problem. "We need to lock".
> Its this mindset that doesn't allow you to see a better
> solution, so you let your DB server do it for you.
>
> > SELECT * FROM master WHERE id = ? FOR UPDATE;
> >
> > -- No need to lock the detail recs 'cause the master rec is locked.
>
> Yes, no need to lock. Its how you lock master thats the
> problem. I'll explain below.

And:


> You can use a non-server generated lock instead. Then it
> just becomes a matter of updating the master table. The
> action of this update is also the action of releasing the
> lock. No other process is locked out of reading the record
> but others can't update or delete while that record is marked
> as being in use (provided that they observe the rules).

I have never heard before of "non-server generated locks". Could you
explain to me what they are and how they work?

Derek Meyer

Nov 26, 1997

Sybase has plans to introduce row level locking in version 12
because applications actually do need it.

--
Derek Meyer Professional Services Ltd
55 Oakwood Road
London NW11 6RJ
derek...@compuserve.com

Tim Schaefer

Nov 26, 1997
to Anthony Mandic

Anthony Mandic wrote:
>
> Snorri Bergmann wrote:
>
> > Ok. Here is an easy one. Only involves 2 tables (master and detail).
> > This is of course very simlified and I have not checked the syntax :-)
>
> [DDL definitions snipped]
>
> > Now what the app does is it allows users to select a record from the
> > master table, get all child records from the detail table and allow the
> > users to add/delete/modify those records as needed. After the user exits
> > it updates the
> > master table by selecting the sum of cost from the detail table. The sql
> > statements involved are something like this: (assume everything is under
> > transaction control)
> >
> > -- We need to lock the master table so nobody else will update it while
> > we are.
>
> Here is the crux of your problem. "We need to lock".
> Its this mindset that doesn't allow you to see a better
> solution, so you let your DB server do it for you.
>
> > SELECT * FROM master WHERE id = ? FOR UPDATE;
> >
> > -- No need to lock the detail recs 'cause the master rec is locked.
>
> Yes, no need to lock. Its how you lock master thats the
> problem. I'll explain below.
>
> > SELECT * FROM detail WHERE id = ? ORDER BY line_no;
> >
> > -- After the user exits we update all detail recs, probably by deleting
> > them first, and then inserting whatever the user has typed in.
>
> What an overhead, delete followed by insert. An in situ
> update is far more efficient. Use it wherever possible.
>
> > -- Then we'll need to update the master table and release the lock.
>
> You can use a non-server generated lock instead. Then it
> just becomes a matter of updating the master table. The
> action of this update is also the action of releasing the
> lock. No other process is locked out of reading the record
> but others can't update or delete while that record is marked
> as being in use (provided that they observe the rules).
>
> > If you have row level locking, the first select stmt will lock one row.
> > As the primary key is 4 bytes you can potentially put 4-500 keys on any
> > page (if you have 2K pages). In a page lock system (like Sybase and
> > MSSQL) the probability of a user trying to access one of the ~499 key
> > values another user has locked is pretty high.
>
> Thats the reason why server-generated locks should be fast,
> and not held for an indefinite period. the initial select
> should NOT be part of the transaction. This rule should
> apply to all locking methods, including row level.
>
> > Page lock workarounds could be:
> >
> > 1) Make the primary key >1Kb, so you would only have one key per page.
> > (Disks are cheap, right?)
>
> Wrong approach. This is the BFI method favoured by those who
> just can't grasp the concepts.
>
> > 2) Use optimistic locking. (So users who have been typing in for hours
> > get the message: Somebody else has modified this rec while you were
> > working. Please try again).
>
> Same wrong approach.
> >
> > 3) Swich to Informix :-)
>
> I feel tempted to state the same here, but I'd rather ask why
> some of Informix's top programmers have jumped ship and joined
> Orable? (According to what I've read recently in the trade
> papers)
>

M O N E Y

:-)

> -am

The secret is indeed in not taking a lock on the row till you're
absolutely damn sure it's time to perform the update. Simply select
the data into the form, and save off the original information in a save
buffer. Allow the user to edit the form, and allow them to either update
or change their mind and restore the screen.

If the user is camped out on a row, and then decides to do something
else, no lock on the data base whilst they edit the row. If they
change their mind, abort the change, and refresh the screen with
the save-buffer information. The larger the OLTP organization of people
doing the work the greater the factor for people camping out on data.

Should they get around to actually updating the row, check first to
see if it *can* be done (DECLARE CURSOR ... FOR UPDATE). If the declare
comes back clean, by checking the SQLCA, perform the update. Otherwise
alert the user that the row is locked and ask them to wait.
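The camp-out-then-check pattern above can be sketched as follows. This uses sqlite3 purely as a stand-in: `BEGIN IMMEDIATE` plays the role of the Informix DECLARE ... FOR UPDATE / SQLCA check, and the table and names are invented:

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
other = sqlite3.connect(path)
other.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, qty INTEGER)")
other.execute("INSERT INTO item VALUES (1, 5)")
other.commit()

user = sqlite3.connect(path, isolation_level=None)  # manual transactions
save_buffer = user.execute("SELECT qty FROM item WHERE id = 1").fetchone()
# ... the user camps on the form for hours; no database lock is held ...

edited_qty = save_buffer[0] + 1
try:
    user.execute("BEGIN IMMEDIATE")   # the late "can it be done?" check
    user.execute("UPDATE item SET qty = ? WHERE id = 1", (edited_qty,))
    user.execute("COMMIT")            # get in and get out
except sqlite3.OperationalError:
    user.execute("ROLLBACK")          # row busy: alert the user to wait
```

If the user changes their mind, the form is simply refreshed from `save_buffer` and nothing ever touched the server.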

My code generators do this. It's still no excuse to allow page-level
locking. Row-level locking in my not so humble opinion *is*
important. :-) If you have a highly active OLTP environment, you
should not introduce a page-level only data base into that environment
and add risk. I know a Sybase programmer who shared his "trick" for
working with this inherent problem. But it involves a stored procedure
which adds overhead, and I think the above numbering scheme for serial
numbers. More work than is really necessary, and a lot of wasted
space in the data base.

Those that argue against sloppy programming definitely are on the
right track. Programs should include the additional management of
checking the row to see if it can be updated, and only do the update
when it's absolutely time to do so. That should make it a
get-in-and-get-out proposition.

As far as Pablo's comments, I saw bait, and that silly smile with a
balloon on his head.

:-)

Timmy
--
Tim Schaefer \\|//
tsch...@mindspring.com 6 6
-------------------------------oOOo---( )---o00o----------------------
http://www.inxutil.com http://www.informix.com
http://www.iiug.org news://comp.databases.informix
mailto:majo...@iiug.org no subject body: subscribe linux-informix
======================================================================

Pablo Sanchez

Nov 26, 1997

>>>>> "Snorri" == Snorri Bergmann <sno...@strengur.is> writes:
Snorri>
Snorri> Pablo Sanchez wrote:
Snorri> [Stuff deleted]
Snorri>
Snorri> [ Snorri's submission ... ]

I should have said, only one submission. Hell, I got work
to do... although I'm on vacation.... so let me know which
is The Lucky One.

Please mail it to me 'cuz I'll be out this week and only in
for one day next.

thx!

Pablo Sanchez

Nov 26, 1997

>>>>> "Greg" == Greg <gr...@cyberramp.net> writes:
Greg>
Greg>
Greg> Same reason practically noone writes assembler anymore. Works great if
Greg> you futz with it enough. No one can afford to futz with it enough
Greg> though. It's just not economic to do so. Unfortunately for Sybase,
Greg> they're learning this the hard way.
Greg>

... let's make it easy... why are Sybase's TPC-C numbers higher
than Informix's? They're both OLTP. As I said, row-level vs
page-level is hype.

Pablo Sanchez

Nov 26, 1997

>>>>> "Derek" == Derek Meyer <10026...@CompuServe.COM> writes:
Derek>
Derek> Sybase has plans to introduce row level locking in version 12
Derek> because applications actually do need it.
Derek>

It's not because of that... rather because of poorly
designed applications needing it and Sybase finally
knuckling under to The Big Company.

Michael Segel

Nov 26, 1997
to Anthony Mandic


Anthony Mandic wrote:

> > 3) Swich to Informix :-)
>
> I feel tempted to state the same here, but I'd rather ask why
> some of Informix's top programmers have jumped ship and joined
> Orable? (According to what I've read recently in the trade
> papers)
>

> -am

Who said that they were the *top* programmers. :-)
Money is always a factor, so is work location, and sometimes
egos.

If I remember correctly, Informix bought Illustra around that time, if
you catch my drift.

-Mikey


Pablo Sanchez

Nov 26, 1997

>>>>> "Paul" == Paul Brown <pbr...@triplerock.Berkeley.EDU> writes:
Paul>
Paul> To quote from Jim Gray, (_Transaction_Processing_:_Concepts_and_
Paul> Techniques_ pp. 420-21);
Paul>

Paul,

Unless something has changed, aren't you still an Informix
employee (or employed in some fashion by them)? If so, then
of course you're going to post supporting literature...

Anyway, I think the answer lies with the TPC-C. I think
that they prove that row-level vs page-level locking is not
an issue. The -C's are a finely tuned OLTP application.

Jung S. Lee

Nov 26, 1997

Derek Meyer wrote:

> Sybase has plans to introduce row level locking in version 12
> because applications actually do need it.

If application developers really understand how the database works and work
closely with database specialists, then by using page locking you can get
better performance than row-level locking for read and write I/Os.

> --
> Derek Meyer Professional Services Ltd
> 55 Oakwood Road
> London NW11 6RJ
> derek...@compuserve.com

--
Jay Lee Lake Mary, FL

Scott C. Gray

Nov 26, 1997

Derek Meyer wrote:
>
> Sybase has plans to introduce row level locking in version 12
> because applications actually do need it.

Actually, I could be wrong, but I believe the RLL release is
slated to be 11.9 (but, then again, that could be the
Java VM release I'm thinking of).

-scott

Scott C. Gray

Nov 26, 1997

Pablo Sanchez wrote:
>
> >>>>> "Paul" == Paul Brown <pbr...@triplerock.Berkeley.EDU> writes:
> Paul>
> Paul> To quote from Jim Gray, (_Transaction_Processing_:_Concepts_and_
> Paul> Techniques_ pp. 420-21);
> Paul>
>
> Paul,
>
> Unless something has changed, aren't you still an Informix
> employee (or employed in some fashion by them)? If so, then
> of course you're going to post supporting literature...
>
> Anyway, I think the answer lies with the TPC-C. I think
> that they prove that row-level vs page-level locking is not
> an issue. The -C's are a finely tuned OLTP application.

I think we may have drifted into a small tangent here... I think
the real question that many developers are interested in (at least
the conscientious ones) is whether or not page level
locking (PLL) concurrency solutions are a hindrance on the performance
of the application.

It is a given, I believe, that RLL is an *easier* model to work
with--and IMHO, I think for some simple RAD type applications it
is great to have just so you don't need to concern yourself with
concurrency as much, which is why I am pleased to see it being added
to Sybase. But, the real question is, for the remainder of the
applications out there (which are pushing the server to its limits),
is there a situation in which a PLL concurrency solution is less
efficient than the associated RLL solution? (Note, I am not saying
that it is easier to implement, because it usually isn't). Or, better
yet, if you implemented PLL logic in an RLL database, would it
help, hinder, or have no effect on performance?

I cannot think of any application in which a properly designed
PLL solution would perform any worse than its RLL counterpart
(but, as has been pointed out, it may very well take more
design effort to achieve these results). And, in fact, I think
that for a very large sub-set of these applications, performance
can actually improve using PLL.

This is what really gripes me about companies that sell cross-
vendor software that does not work effectively in a PLL
environment...with just a little bit of effort (which these
companies don't seem willing to invest) the application could
easily work across any vendor's environment, and may even
gain a slight performance improvement.

-scott

--
Scott C. Gray gr...@voicenet.com "my keybard is
brken"
Sybase Professional Services scott...@sybase.com
http://www.voicenet.com/~gray/sqsh.html

Paul Brown

Nov 26, 1997

Pablo Sanchez (pa...@sgi.com) wrote:
: >>>>> "Paul" == Paul Brown <pbr...@triplerock.Berkeley.EDU> writes:
: Paul>
: Paul> To quote from Jim Gray, (_Transaction_Processing_:_Concepts_and_
: Paul> Techniques_ pp. 420-21);
: Paul>
:
: Unless something has changed, aren't you still an Informix
: employee (or employed in some fashion by them)? If so, then
: of course you're going to post supporting literature...

Yes. I am an Informix employee. You're the keeper of the
Sybase FAQ. Religion counts for something.

When I'm speaking on behalf of Informix, I'll post from an
Informix account.

When I'm simply trying to contribute to a technical debate, and
don't want my affiliations to obscure my contributions, I
post from UCB. I said *nothing* positive or negative about
anyone. If you like, I can post references to the IBM
research (some of which is also attributed to Jim Gray) which
argues for your case.

The irony here is that I actually agree with you! A well
designed schema should require only page granularity. (Another
important topic unbroached in this thread concerns
lock escalation strategies.) But the practical reality is
that people don't always do 'the right thing'. This is the
'hard lesson' Gray refers to.

We all start out convinced, for the best technical reasons,
that page locking is all you need. Then you go to your first
customer who has a god-awful schema, a profound unwillingness
to change anything, and the demand that you get their hunk-o-junk
to work. (Paging SAP. Please pick up the white courtesy
phone . . )

: Anyway, I think the answer lies with the TPC-C. I think
: that they prove that row-level vs page-level locking is not
: an issue. The -C's are a finely tuned OLTP application.

TPC-C proves it isn't a technical problem. And it isn't. It's
purely an economic one.

Note: I do think that the case may change for extensible DBMSs.
Consider what goes on when you're doing a query that includes an
expensive function (it may take a minute to complete). The case
for row level locking (even value level locking) becomes more
impressive.


Joel Garry

Nov 26, 1997

In article <347BCD...@agd.nsw.gov.au>,

Anthony Mandic <no_s...@agd.nsw.gov.au> wrote:
>Joel Garry wrote:
>
>> Anthony Mandic <no_s...@agd.nsw.gov.au> wrote:
>> >Perry Dillard wrote:
>> >
>> >> I'll admit that page level locking has its place, but row level is
>> >> absolutely essential.
>> >
>> > Since when? The level of locking should be transparent to
>> > well written applications. It only becomes an issue with
>> > poorly written ones.
>>
>> No matter how well written the application is, if the page is locked by
>> someone accessing a different row that happens to be on the page where the
>> row you want is, you won't get it. Telling people to make their records
>> bigger so there is only one on a page is a pretty hard sell. Convincing
>> programmers to write correctly is like herding cats. Good luck.
>
> Its the responsibility of the programmers regardless. If they
> were inept and incompetent they'd still produce crap. And end
> users would be quick to notice. Waiting for a lock is the same
> regardless of granularity. Thats why locks should be fast and
> only within the scope of the real transaction. Putting it
> around a select that may or may not eventually create a
> transaction is just plain dumb.

You'd better brush up on granularity. That is the whole problem with
page locking: you can get locked by something that you should not
care about.

The transaction may need to be long. What you are calling a "real" transaction
may be an artificial construct solely to deal with the page-locking problem.

Joel Garry

Nov 26, 1997

In article <347BD2...@agd.nsw.gov.au>,

Anthony Mandic <no_s...@agd.nsw.gov.au> wrote:
>Joel Garry wrote:
>
>> Anthony Mandic <no_s...@agd.nsw.gov.au> wrote:
>>
>> > They claim they need row level locking because they all stupidly
>> > lock selected data for the duration of a transaction that may or may
>> > not utilmately do an update or insert. The changes to correct this sort
>> > of nonsense are trivial. Why they won't do the change is obvious. These
>> > programs cater to the lowest common denominator in database technology.
>> > Thus they don't have a real grasp of proper client/server methodology.
>>
>> I agree with the lcd problem, but the changes are not trivial. Can you say
>> Repeatable Read? It's like changing the newsgroup comp.databases.oracle to
>> a hierarchy - the change is trivial, but just try to implement it completely.
>> Coming up on a year, and people still post here.
>
> This depends on where you're reading and posting from. :-)

Yes, but the point is you have to deal with where everyone is reading
and posting from.

> The solution that works for all cases (the lcd) is quite simple.
> The change may or may not be trivial depending on the complexity
> of the product. By complexity, I don't mean the schema design
> but the design of the programs that work with it. If they
> were well designed then the changes should be trivial.

Should be ain't is. :0

David Williams

Nov 26, 1997

In article <yuten43...@mew.corp.sgi.com>, Pablo Sanchez
<pa...@sgi.com> writes

>>>>>> "Greg" == Greg <gr...@cyberramp.net> writes:
>Greg>
>Greg>
>Greg> Same reason practically noone writes assembler anymore. Works great if
>Greg> you futz with it enough. No one can afford to futz with it enough
>Greg> though. It's just not economic to do so. Unfortunately for Sybase,
>Greg> they're learning this the hard way.
>Greg>
>
>... let's make it easy... why is Sybase's TPC-C's higher
>than Informix? They're both OLTP? As I said, row-level vs
>page-level is hype.

Page level locks give HIGHER PERFORMANCE, yes, BUT they reduce
concurrency and hence LOWER RELIABILITY. It's a tradeoff, and
most of the time people go for reliability over a little more
performance. 90% of performance tuning is

a) reducing disk I/O by only selecting/updating the minimum
amount of data.
b) database schema / index usage.

anyway.


--
David Williams

David Williams

Nov 26, 1997

In article <347BCB...@agd.nsw.gov.au>, Anthony Mandic
<no_s...@agd.nsw.gov.au> writes

>Snorri Bergmann wrote:
>
>> Ok. Here is an easy one. Only involves 2 tables (master and detail).
>> This is of course very simlified and I have not checked the syntax :-)
>
>[DDL definitions snipped]
>
>> Now what the app does is it allows users to select a record from the
>> master table, get all child records from the detail table and allow the
>> users to add/delete/modify those records as needed. After the user exits
>> it updates the
>> master table by selecting the sum of cost from the detail table. The sql
>> statements involved are something like this: (assume everything is under
>> transaction control)
>>
>> -- We need to lock the master table so nobody else will update it while
>> we are.
>
> Here is the crux of your problem. "We need to lock".
> Its this mindset that doesn't allow you to see a better
> solution, so you let your DB server do it for you.
>
That's the idea, the DB server SHOULD do the locking. See below.

>> SELECT * FROM master WHERE id = ? FOR UPDATE;
>>
>> -- No need to lock the detail recs 'cause the master rec is locked.
>
> Yes, no need to lock. Its how you lock master thats the
> problem. I'll explain below.
>
>> SELECT * FROM detail WHERE id = ? ORDER BY line_no;
>>
>> -- After the user exits we update all detail recs, probably by deleting
>> them first, and then inserting whatever the user has typed in.
>
> What an overhead, delete followed by insert. An in situ
> update is far more efficient. Use it wherever possible.
>
I think he means say 10 detail rows; in the screen array the user
updates 5, deletes 2, inserts 3. What SQL do you generate?

You could try to generate the needed inserts/updates/deletes but it's
too complex and error prone, so just delete and insert.

>> -- Then we'll need to update the master table and release the lock.
>
> You can use a non-server generated lock instead. Then it
> just becomes a matter of updating the master table. The
> action of this update is also the action of releasing the
> lock. No other process is locked out of reading the record
> but others can't update or delete while that record is marked
> as being in use (provided that they observe the rules).
>

Oh, you mean soft locking, where you have a database table that
contains the locks, e.g.

create table lock_table
(
    entity         char(10),  -- effectively the table name
    entity_key     integer,   -- primary key of the locked object
    transaction_id integer    -- used to remove locks for a given
                               -- transaction just before committing
)


OK, what happens when you have committed entries into this table
(which you have to, for other users to see them) and


a) your program SIGSEGVs in the middle of a transaction with open
locks?
b) Someone does a kill -9 on your process when it has open locks?

c) A power failure occurs whilst you have open locks?

A database server will clean up locks if a client dies, or when
restarted if the server dies.

YOUR CLIENT PROGRAM CANNOT GUARANTEE LOCKS ARE RELEASED IF A CLIENT
PROGRAM DIES.

Also, which is faster:

BEGIN WORK
INSERT INTO lock_table ...
COMMIT

(including flushing 'dirty' data to disk and transaction logging)

or the database server putting an entry into an in-memory lock table?

>> If you have row level locking, the first select stmt will lock one row.
>> As the primary key is 4 bytes you can potentially put 4-500 keys on any
>> page (if you have 2K pages). In a page lock system (like Sybase and
>> MSSQL) the probability of a user trying to access one of the ~499 key
>> values another user has locked is pretty high.
>
> Thats the reason why server-generated locks should be fast,
> and not held for an indefinite period. the initial select
> should NOT be part of the transaction. This rule should
> apply to all locking methods, including row level.
>

Agreed, have as little as possible in the transaction.

>> Page lock workarounds could be:
>>
>> 1) Make the primary key >1Kb, so you would only have one key per page.
>> (Disks are cheap, right?)
>
> Wrong approach. This is the BFI method favoured by those who
> just can't grasp the concepts.
>

Agreed.


>> 2) Use optimistic locking. (So users who have been typing in for hours
>> get the message: Somebody else has modified this rec while you were
>> working. Please try again).
>
> Same wrong approach.
>>

>> 3) Swich to Informix :-)
>
> I feel tempted to state the same here, but I'd rather ask why
> some of Informix's top programmers have jumped ship and joined
> Orable? (According to what I've read recently in the trade
> papers)
>
>-am

--
David Williams

Kyle Hachey

Nov 26, 1997

Chris Curvey wrote:
>
> > If you have row level locking, the first select stmt will lock one row.
> > As the primary key is 4 bytes you can potentially put 4-500 keys on any
> > page (if you have 2K pages). In a page lock system (like Sybase and
> > MSSQL) the probability of a user trying to access one of the ~499 key
> > values another user has locked is pretty high.
> >
>
> That depends. If you have a small number of master records,
> then the probability of a lock collision is high, but then you *can*
> extend
> the key to be 1K, since you don't need that much disk. If you have
> a large number of master records, then you can control the probability
> of a lock collision via a clustered index.
>
> > Page lock workarounds could be:
> >
> > 1) Make the primary key >1Kb, so you would only have one key per page.
> > (Disks are cheap, right?)


IMHO wasted space (i.e., filler fields) is more of a problem even if
disks are cheap. The bigger issue is cache management. If every
page/block could hold 4 records and is forced to hold only 1 now, you
need 4 times as much cache space. In addition, records are
significantly less likely to be in cache as a result of somebody else's
SQL statements. Ultimately, you end up with more physical versus
logical I/O, or require more memory to maintain the same level of
performance.
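Kyle's arithmetic, made concrete (the page, row, and cache sizes here are invented for illustration):

```python
PAGE_SIZE = 2048        # a hypothetical 2 KB page
ROW_SIZE = 512          # four rows fit per page
CACHE_PAGES = 1000      # fixed amount of buffer cache

rows_cached_packed = CACHE_PAGES * (PAGE_SIZE // ROW_SIZE)
rows_cached_padded = CACHE_PAGES * 1   # a >1 KB key forces one row per page

assert rows_cached_packed == 4000
assert rows_cached_padded == 1000      # same cache holds 4x fewer rows
```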

Kyle Hachey

Pablo Sanchez

Nov 26, 1997

>>>>> "Scott" == Scott C Gray <ickysp...@voicenet.com> writes:
Scott>
>> Anyway, I think the answer lies with the TPC-C. I think
>> that they prove that row-level vs page-level locking is not
>> an issue. The -C's are a finely tuned OLTP application.
Scott>
Scott> I think we may have drifted into a small tangent here...I think
Scott> the real question that many developers are interested in (at least
Scott> consciencious (sp?) ones) is whether or not the page level
Scott> locking (PLL) concurrency solutions are a hinderance on performance
Scott> of the application.

I agree that the crux of the matter is whether row-level vs
page-level is really an issue. That's why I stated the
-C's. The reason is because (even with its faults) the -C's
allow us to take a normalized 'application' and see how well
the RDBMS engine works. My point is that what we see is
that all the RDBMS engines are doing fine. Even if we take
the current high number for Informix: ~24k tpmC you are
doing *way* more tx's than you'll ever do on your app.
Comparing Sybase's current high of ~34k it is still way more
than you'll ever do... (for completeness, I believe that
Oracle's is around 55k ... I'm too lazy to check...)
Anyway, back to my point: since all the big RDBMSs do well in
the -C's, there really is little difference among
them... what it boils down to is what the CIO decided (ten
years ago) for your shop. The rest is hype.

Pablo Sanchez

Nov 26, 1997

>>>>> "Paul" == Paul Brown <pbr...@triplerock.Berkeley.EDU> writes:
Paul>
Paul> Pablo Sanchez (pa...@sgi.com) wrote:

Paul> : >>>>> "Paul" == Paul Brown <pbr...@triplerock.Berkeley.EDU> writes:
Paul> : Paul>
Paul> : Paul> To quote from Jim Gray, (_Transaction_Processing_:_Concepts_and_
Paul> : Paul> Techniques_ pp. 420-21);
Paul> : Paul>
Paul> :
Paul> : Unless something has changed, aren't you still an Informix
Paul> : employee (or employed in some fashion by them)? If so, then
Paul> : of course you're going to post supporting literature...
Paul>
Paul> Yes. I am an Informix Employee. You're the keeper of the
Paul> Sybase FAQ. Religon counts for something.

I'm the keeper of the Sybase FAQ however I'm also aware
of the benefits in certain areas of Informix and Oracle.
Heck, I'm not blinded by religion. I have posted to
c.d.s. with my gripes regarding Sybase. You can search
dejanews for 'em:

o isolation level 0 implementation - crap
o ...

Anyway, I may be the maintainer, but I try not to live my
life with a pair of rose-tinted glasses.

Ask how Oracle does serializability...

Paul> When I'm simply trying to contribute to a technical debate, and
Paul> don't want my affiliations to obscure my contributions, I
Paul> post from UCB. I said *nothing* positive or negative about
Paul> anyone. If you like, I can post references to the IBM
Paul> research (some of which is also attributed to Jim Gray) which
Paul> argues for your case.

To be fair, that would have complemented your initial post well.

Paul> The irony here is that I actually agree with you!

I know, I know... I was hoping for more static from ya... :-)

Paul> A well
Paul> designed schema should require only page granularity. (Another
Paul> important topic unbroached in this thread concerns
Paul> lock escallation strategies.)

Which Sybase *used to* suck at... *finally* System 11 handles
it right....

Paul> But the practical reality is
Paul> that people don't always do 'the right thing'. This is the
Paul> 'hard lesson' Gray refers to.

Unfortunately what you say is true... this is why Oracle's
versioning works so well... it appeals to the general
masses who don't do things right. I guess therein lies opportunity.

Pablo Sanchez

Nov 26, 1997

>>>>> "David" == David Williams <d...@smooth1.demon.co.uk> writes:
David>
David> page level locks give HIGHER PERFORMANCE yes BUT they reduce
David> concurrency and hence LOWER RELIABILITY.

How do you assert "reduced concurrency" when the unit of
measure is tpmC: transactions per minute of type C's?

Informix... 24,000 transactions per minute of type C
Sybase..... 34,000 transactions per minute of type C

I believe your assertion is wrong.

David> most of the time people go for reliability over a little more
David> performance. 90% of performance tuning is
David>
David> a) reducing disk I/O by only selecting/updating the minimum
David> amount of data.

agreed

David> b) database schema / index usage.

agreed

c) proper hardware for both the client and the
server
d) configuring the server properly:
- log on its own device (sequential writes in
nature)
- data on its own set of devices due to its
mostly random access nature for OLTP
e) proper network bandwidth and proper use of
packages
f) ...

Pablo Sanchez

Nov 26, 1997

>>>>> "Steve" == Steve Cogorno <cog...@netcom.com> writes:
Steve>
Steve> You have to admit that page locks *are* more efficient than row locks. So
Steve> for performance reasons, you may not want to use a row lock all the time.
Steve> "Data page only" is a combination between the two: the server locks entire
Steve> pages, but INDEX pages get row-level locks.
Steve> --

Hence the beauty that Informix and Oracle offer: choice.
Not that I agree that for row-level vs. page-level it's
necessary (for properly tuned app's) but choice is
important.

Since I slammed Sybase on dirty reads I'll slam Informix and
Oracle for not having enough knobs to tune shared memory:

o partitionable data buffer
o binding objects to said partitions

see? I'm an equal opportunity slammer!!!

Gary Kuever

Nov 26, 1997

>Yes, with enough finagling, sharp application programmers (is this an
>oxymoron?...he says...ducking for cover...) can reduce locking
>contention within pages, perhaps to the point of tolerability in a large
>OLTP system.


1 - I've never been involved in an OLTP system that had a problem with page
vs row level. The largest one so far was 25GB OLTP, 250 users. No deadlocks,
no hairy training of programmers, just normal understanding of set theory
and transactions.

2 - On the minus side of row locking, it encourages developers to continue
with a record oriented mentality instead of a set mentality. An example is
Oracle's row_id. I've seen this on every Oracle project, i.e. the shortcut
is taken instead of thinking the set operation through properly.

Gary

Steve Cogorno

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

In article <u2#ftdm#8GA...@nih2naab.prod2.compuserve.com> Derek Meyer <10026...@CompuServe.COM> writes:
>Sybase has plans to introduce row level locking in version 12
>because applications actually do need it.

Ah, but 12 will have more than just row-level locks added. I believe there
is another locking method called "Data page only" or something like that.

You have to admit that page locks *are* more efficient than row locks. So

for performance reasons, you may not want to use a row lock all the time.

"Data page only" is a combination between the two: the server locks entire

pages, but INDEX pages get row-level locks.

--

Steve
cog...@netcom.com

Anthony Mandic

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

Gary Kuever wrote:

> 1 - I've never been involved in an OLTP system that had a problem with page
> vs row level. The largest one so far was 25GB OLTP, 250 users. No deadlocks,
> no hairy training of programmers, just normal understanding of set theory
> and transactions.
>
> 2 - On the minus side of row locking, it encourages developers to continue
> with a record oriented mentality instead of a set mentality. An example is
> Oracle's row_id. I've seen this on every Oracle project, i.e. the shortcut
> is taken instead of thinking the set operation through properly.

Precisely. I see this a lot with products designed for any backend.
The unfortunate thing is that usually the wrong backend is chosen
for development. The end results are well known.

-am

Anthony Mandic

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

Jim Smith wrote:

> I wouldn't go as far as essential but it certainly helps. I would guess that
> most applications do not have serious
> concurrency problems caused by locking, but for those that do the fact that
> a single row update locks one row as against many
> rows must help.

It would help only if the application wasn't developed with a
clear understanding of the issues and processes involved. Locking
should be fast: only lock when you need to, within the scope of
the real transaction, and that type of lock would be transitory.

> I can't speak about Informix, but the Sybase locking model where readers block
> writers is probably more of a constraint
> (compared to Oracle ) than the page/row difference.

Only when the readers hold their lock for longer than it takes to
do the actual read. The notion of holding a read lock open until a
potential insert/update transaction comes along is specious.

> > Since when? The level of locking should be transparent to
> > well written applications. It only becomes an issue with
> > poorly written ones.
>

> This is the techie answer which frequently comes up with Sybase.

Maybe, but it's the correct one.

> An alternative reading is that you have to code round
> limitations. You have to do that with any system of course
> (except for my new application development tool Magic(tm)), but it
> seems to come up more often with Sybase.

Being forced to think of a better design pays off better
in the long run. I suspect that any application developed
on Sybase first would be inherently more portable to other
backends than vice-versa.

> Either way, if you always have to employ top-flight programmers, then
> that increases the cost of ownership.

I don't buy that argument. Having at least one superman in
a bunch of turkeys may cost slightly more up front, but it
works out cheaper in the long run. Think if you had no
top-flighters. How much longer would your project take
with just the turkeys trying to get the work done? What
would you end up with? How many problems would you have
to resolve after deployment because none of the turkeys
had the nous and experience to know better? I've seen it
happen time and time again over more years than I'd care to
remember. How much more does it cost to have to rewrite a
badly designed and written app than to do it properly the
first time? How many projects fail? This has been well
documented by researchers. You NEED people who know
what they're doing.

-am

Anthony Mandic

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

Snorri Bergmann wrote:

> > You can use a non-server generated lock instead. Then it
> > just becomes a matter of updating the master table. The
> > action of this update is also the action of releasing the
> > lock. No other process is locked out of reading the record
> > but others can't update or delete while that record is marked
> > as being in use (provided that they observe the rules).
>

> I have never heard before of "non-server generated locks". Could you
> explain to me what they are and how they work?

Basically it's the application that generates its own lock
by either setting a flag in a key table or making a row
entry in a dedicated table to signal its intention to
others that it intends to alter data. The main drawback is
that all others have to observe the same practice (for
obvious reasons). The advantages are that no server locks
are generated until the real transaction, so others can
still read the data if required. If an app dies this
kind of lock is retained, so it's useful for debugging,
and if a server dies all the details would still be
there (server locks would disappear). You can also
record other useful information in this 'lock table'
row, i.e. who, when, etc.
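To make the scheme concrete, here is a minimal sketch of such an application-level ("soft") lock, written in Python against an in-memory SQLite database purely for illustration — the `app_lock` table, its columns, and the function names are my own invention, not anything from this thread:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE app_lock (
        entity     TEXT,     -- name of the table being 'locked'
        entity_key INTEGER,  -- primary key of the row in use
        user_id    TEXT,     -- who signalled the intent
        locked_at  REAL,     -- when: handy for spotting stale entries
        PRIMARY KEY (entity, entity_key)
    )
""")

def acquire_soft_lock(conn, entity, key, user):
    """Record intent to update; fail if someone else already has."""
    try:
        conn.execute("INSERT INTO app_lock VALUES (?, ?, ?, ?)",
                     (entity, key, user, time.time()))
        conn.commit()
        return True
    except sqlite3.IntegrityError:  # row is already soft-locked
        return False

def release_soft_lock(conn, entity, key, user):
    conn.execute(
        "DELETE FROM app_lock WHERE entity = ? AND entity_key = ?"
        " AND user_id = ?",
        (entity, key, user))
    conn.commit()

# Readers are never blocked; only a second would-be updater is refused.
assert acquire_soft_lock(conn, "customer", 42, "alice")
assert not acquire_soft_lock(conn, "customer", 42, "bob")
release_soft_lock(conn, "customer", 42, "alice")
assert acquire_soft_lock(conn, "customer", 42, "bob")
```

Note the trade-off the poster describes: the insert/delete pair is committed work, so the "lock" survives a client crash (good for debugging, bad if nobody cleans it up), and it only works if every application observes the convention.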

-am

Anthony Mandic

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

Pablo Sanchez wrote:

> Since I slammed Sybase on dirty reads I'll slam Informix and
> Oracle for not having enough knobs to tune shared memory:
>
> o partitionable data buffer
> o binding objects to said partitions
>
> see? I'm equal opportunity slammer!!!

It would have been more effective if you'd crossposted
then :-)

-am

Anthony Mandic

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

Tim Schaefer wrote:

> I wrote:
> > > 3) Swich to Informix :-)
> >
> > I feel tempted to state the same here, but I'd rather ask why
> > some of Informix's top programmers have jumped ship and joined
> > Oracle? (According to what I've read recently in the trade
> > papers)
> >
>
> M O N E Y

It must have been a lot to be so easily seduced. Either that or
Informix weren't paying well. The impression I got was that Informix
are in trouble. Can anyone confirm or deny?



> The secret is indeed: in not performing a lock on the row till you're
> absolutely damn sure it's time to perform the update. Simply select
> the data into the form, and save off the original information in a save
> buffer. Allow the user to edit the form, and allow them to either update
> or change their mind and restore the screen.

[snip]

> Should they get around to actually updating the row, check first to
> see if it *can* be done ( DECLARE CURSOR ... FOR UPDATE ) If the declare
> comes back clean, by checking the SQLCA, perform the update. Otherwise
> alert the user that the row is locked please wait.

They'd still have to signal their intent so that the data
doesn't get changed underneath them in the interim. Using a
server lock for this is the wrong approach. Setting a flag
in a key table or making an entry somewhere else (with no held
lock) is better.
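The "check first to see if it *can* be done" step quoted above can also be expressed as an optimistic update: keep the saved-off copy of the row and put it in the WHERE clause, so the update succeeds only if nobody changed the data in the meantime. A sketch in Python over an in-memory SQLite database — the `customer` table and function name are my own illustration, not the poster's code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'Smith')")
conn.commit()

def optimistic_update(conn, key, old_name, new_name):
    """Update only if the row still holds the value we read earlier;
    return False if someone changed it underneath us."""
    cur = conn.execute(
        "UPDATE customer SET name = ? WHERE id = ? AND name = ?",
        (new_name, key, old_name),
    )
    conn.commit()
    return cur.rowcount == 1  # 0 rows matched means the data moved

# First user read 'Smith' into the form; nobody interfered, so this works.
assert optimistic_update(conn, 1, "Smith", "Jones")
# A second user who also read 'Smith' now loses: alert them to retry.
assert not optimistic_update(conn, 1, "Smith", "Brown")
```

No lock of any kind is held while the user edits the form; the cost is that the losing user has to re-enter their change, which is exactly the "please try again" behaviour debated elsewhere in this thread.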

> My code generators do this. It's still no excuse to allow page-level
> locking. Row-level locking in my not so humble opinion *is*
> important. :-) If you have a highly active OLTP environment, you
> should not introduce a page-level only data base into that environment
> and add risk. I know a Sybase programmer who shared his "trick" for
> working with this inherent problem. But it involves a stored procedure
> which adds overhead, and I think the above numbering scheme for serial
> numbers. More work than is really necessary, and a lot of wasted
> space in the data base.

I had worked with a Sybase setup that used a "lock" table.
The environment was highly active. Absolutely everything
was done via stored procedures (for added performance). Making
one extra call at the start to record a "lock" and one at the
end to delete it was a negligible overhead. Space usage was
negligible too. At the close of each day no space was used
(apart from the extra transactions recorded in the log).

> Those that argue against sloppy programming definitely are on the
> right track. Programs should include the additional management of
> checking the row to see if it can be updated, and only do the update
> when it's absolutely time to do so. That should make it a
> get-in-and-get-out proposition.
>
> As far as Pablo's comments, I saw bait, and that silly smile with a
> balloon on his head.

Yeah, the pretzelhead fools everyone.

-am

Anthony Mandic

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

Michael Segel wrote:

>
> Anthony Mandic wrote:
>
> > I feel tempted to state the same here, but I'd rather ask why
> > some of Informix's top programmers have jumped ship and joined
> > Oracle? (According to what I've read recently in the trade
> > papers)
>
> Who said that they were the *top* programmers. :-)

The trade papers (not that they know anything 99.99% of the time).

> If I remember correctly, Informix bought Illustra around that time, if
> you catch my drift.

Ah, the net closes in.

-am

Johan Andersson

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

In article <yut90ub...@mew.corp.sgi.com>, pa...@sgi.com says...

>
>>>>>> "Paul" == Paul Brown <pbr...@triplerock.Berkeley.EDU> writes:
>
>Paul> But the practical reality is
>Paul> that people don't always do 'the right thing'. This is the
>Paul> 'hard lesson' Gray refers to.
>
>Unfortunately what you say is true... this is why Oracle's
>versioning works so well... it appeals to the general
>masses who don't do things right. I guess therein lies opportunity.
>
A reflection from a member of the 'general masses'...

Why is Oracle's versioning bad?
Being able to get the response I would have gotten had my query been
instantaneous, without needing to fear read locks or other clients changes
during the time my query is running, is something I consider a 'good thing'.
It is the feature highest on my Wish-list for Informix.

With regards to row/page locks.
This all boils down to concurrency: the finer the granularity, the better
the concurrency. If we had value locks we would have the potential for even
better concurrency than today. This all comes at a price of course, but I
would like the option to choose, not be restricted by the technical
shortcomings of the database system.

There is a related issue here, long transactions. I believe Mr Bergmann put
his finger on it when he wrote:

In article <347B239E...@strengur.is>, sno...@strengur.is says...


>
>Page lock workarounds could be:
>

>[....]


>
>2) Use optimistic locking. (So users who have been typing in for hours
>get the message: Somebody else has modified this rec while you were
>working. Please try again).
>

If long transactions are allowed, then the database system can be used to
support a pessimistic locking scheme.

This relates to row level locking because a common solution today is to map
'objects' of some sort into one or more rows in a database(1). When a user
changes an object in an interactive process taking everything from seconds to
hours, the user must be guaranteed that the changes can be made (this is
Snorri's point, I believe). The easiest way to guarantee this is to use
pessimistic locking.

My point here is that in a modern system, the user is all too often a part of
the transaction; we can not ignore that. When making the choice, as an
application developer, of being nice to the user or the database, I know who
comes out on top :-)


/Johan Andersson

(1) Object Oriented modeling and design / James Rumbaugh, ISBN 0-13-630054-4

--
________________________________________________________________________
| >>> The opinions herein are mine and not neccessarily my employers <<< |
| Johan Andersson, Msc CSE j...@carmenta.se |
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


Anthony Mandic

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

David Williams wrote:

>
> Anthony Mandic writes:
>
> >> -- After the user exits we update all detail recs, probably by deleting
> >> them first, and then inserting whatever the user has typed in.
> >
> > What an overhead, delete followed by insert. An in situ
> > update is far more efficient. Use it wherever possible.
> >
> I think he means say 10 detail rows; in the screen array the user
> updates 5, deletes 2, inserts 3. What SQL do you generate?

I remember when I started off years ago I used to do this
(until I learnt better). Well, since it's SQL you're generating,
you should be able to do it a row (and lock) at a time.

> You could try to generate the needed inserts/updates/deleted but it's
> too complex and error prone, just delete and insert.

I disagree. It's not hard, unless you're lazy or incompetent.
From a cost point of view, it's also more cost effective.

> Oh, you mean soft locking where you have a database table that
> contains the locks e.g.

Yep.

> create table lock_table
> (
> entity char(10) (effectively table name),

object_key integer,
user_id ...

You only need to identify the core table's key.

> entity_key integer (primary key)
> transaction_id (used to remove locks for a given transaction just
> before committing).
> )
>
> OK what happens when you have commited entries into this table
> (which you have to for other users to see them) and
>
> a) you program SIGSEV's in the middle of a transaction with open
> locks?

And another user says "Why has this object been locked for
N hours?". It's the same as when someone sets a server lock,
goes off to lunch and locks their PC's screen. At least
you can find the culprit "soft lock" and fix things up.

> b) Someone does a kill -9 on your process when it has open locks?

Same thing. You can clear it. It's not that big a deal.

> c) A power failure occurs whilst you have open locks?

Same again. You can list all of them and generate a report.
If the same happens to a server with server locks, how do you
know who was working on what? In other words, you'd either
have just one big server transaction or start worrying about
data consistency. At least with "soft locks" you have some
hope of tracking them down.



> A database server will cleanup locks if a client dies or when
> restarted if the server dies.

Yes, unfortunately, and you're stuck if you need to track
down who was doing what to what/who.

> YOUR CLIENT PROGRAM CANNOT GUARANTEE LOCKS ARE RELEASED IF A CLIENT
> PROGRAM DIES.

I wouldn't want it to. The 'soft lock' is a useful debugging tool
in this very case. How do you debug your client apps otherwise
if a user says "Oh, it just died.". Most users can't tell you
anything useful, so you have to rely on your wits. Knowing
where they were up to tells you where the app was when it died.
This feature helped me on numerous occasions in fixing a badly
written application.

> Also which is faster a
>
> BEGIN WORK
> INSERT INTO lock_table..
> COMMIT
> (including flushing 'dirty' data to disk and transaction logging
>
> or the database server putting an entry into an in memory lock table?

An extra insert and delete doesn't hurt. I think the overall
advantages outweigh the disadvantages. Sure it's a pain to have
to delete stale ones that got left around after a crash and
don't mark an active transaction, but then users can delete
them themselves with a well-written app.
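The stale-entry report and cleanup described here falls straight out of the same table, since each entry records who took it and when. A hypothetical sketch, again in Python over an in-memory SQLite database with column names of my own invention:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE app_lock
    (entity TEXT, entity_key INTEGER, user_id TEXT, locked_at REAL)""")
now = time.time()
conn.executemany("INSERT INTO app_lock VALUES (?, ?, ?, ?)", [
    ("customer", 1, "alice", now - 7200),  # left behind by a crashed app
    ("customer", 2, "bob", now),           # a live, current soft lock
])
conn.commit()

STALE_AFTER = 3600  # an hour-old soft lock has probably outlived its app

# Report: who was working on what when their app (or the server) died.
stale = conn.execute(
    "SELECT entity, entity_key, user_id FROM app_lock WHERE locked_at < ?",
    (now - STALE_AFTER,),
).fetchall()

# Cleanup once the entries have been investigated.
conn.execute("DELETE FROM app_lock WHERE locked_at < ?",
             (now - STALE_AFTER,))
conn.commit()
```

This is precisely the debugging payoff claimed above: a server's in-memory lock table vanishes with the server, while these committed rows survive to tell you where each dead client got to.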

> > Thats the reason why server-generated locks should be fast,
> > and not held for an indefinite period. the initial select
> > should NOT be part of the transaction. This rule should
> > apply to all locking methods, including row level.
> >
> Agreed, have as little as possible in the transaction.

Do you mean have little data or just keep the transaction to as
short as it should be?

-am

Anthony Mandic

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

Joel Garry wrote:

> Anthony Mandic wrote:
>
> > Its the responsibility of the programmers regardless. If they
> > were inept and incompetent they'd still produce crap. And end
> > users would be quick to notice. Waiting for a lock is the same
> > regardless of granularity. Thats why locks should be fast and
> > only within the scope of the real transaction. Putting it
> > around a select that may or may not eventually create a
> > transaction is just plain dumb.
>
> You'd better brush up on granularity. That is the whole problem with
> page locking, you can get locked by something that you should not
> care about.

Only if that lock was held for longer than it ought to have been.

> The transaction may need to be long. What you are calling a "real" transaction
> may be an artificial construct solely to deal with the page-locking problem.

No, I've been implying that "read for update" or "select with lock"
or whatever it's called is the wrong approach. The read/select
should not be part of the real transaction; it could be held
indefinitely, in theory. The real transaction is the
insert/update/delete. These should be fast. If there are a lot
of them in one hit, I'd consider rethinking the approach used to
design the app that does this. The implication here is that the
app may be far too complex, thus being more vulnerable to
problems. Keeping it simple never hurts.

-am

Jean-Marc van Leerdam

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

First, please excuse me if some of my arguments have already been
covered in this thread, our newsserver was out for a week and I
haven't seen all of it.

Anthony Mandic <no_s...@agd.nsw.gov.au> wrote:
>Joel Garry wrote:
>>

>> That is the whole problem with
>> page locking, you can get locked by something that you should not
>> care about.

> Only if that lock was held for longer than it ought to have been.

Regardless, I think one should NOT be troubled by things being done to
records that are of no concern to that user. With PLL you can (and
will) be troubled in such a way.
I think, even in a set-based thinking environment, it should not be
the database that causes these problems. I strongly support the
availability of RLL (even if sometimes PLL would be a performance
advantage, I still want the option of RLL without cumbersome tricks or
fake identifiers).

>> The transaction may need to be long. What you are calling a "real" transaction
>> may be an artificial construct solely to deal with the page-locking problem.

> No, I've been implying that "read for update" or "select with lock"
> or whatever its called is the wrong approach. The read/select
> should not be part of the real transaction. They could be held
> indefinitely, in theory. The real tansaction is the
>insert/update/delete.

I totally agree. But this is not a valid argument IMHO, it just
reduces the timespan in which locking occurs, it doesn't alter the
'theoretical' premises.

> These should be fast. If there are a lot of them in one hit, I'd
> consider rethinking the approach used to design the app that
> does this. The implication here is that the app may be far too
> complex, thus being more vulnerable to problems. Keeping it
> simple never hurts.

Just 'blaming' the app's complexity is no real solution either; I think we
should see the app's complexity and way of operating as a given.
Just because a RDBMS acts on sets and likes set-like operations, that
doesn't mean the customer does the same or should be trained to do so
(that sets us back some 30 years, when we still wanted to adapt the
users to the programs instead of the other way around).

IMHO a RDBMS should let the user operate on rows, completely
independent of the status and concurrent use of 'nearby' rows by other
users.
That implies RLL in my opinion.

Also, in a set-based thinking environment: why is some element
'closer' to another element than another element, and why does
modifying one element influence the success of modifications to SOME
other elements but not to ALL other elements?
Elements in a set should IMO be independent objects.

I look at PLL as if the operating system would not let me change
foo.txt in directory c:\tmp, just because someone else is currently
updating foo1.txt in c:\tmp. That's absurd.

I think RDBMS suppliers should work hard to make RLL or even VLL as
efficient as the 'easy' PLL schemes that currently outperform them.
Just because the suppliers do it doesn't make it right!

Jean-Marc.
+------------------------------------------------------------+
|Jean-Marc.van.Leerdam@| All opinions expressed are just ... |
|ingbank.com | opinions (and my personal ones!). |
+-- (AntiSpam:note the xxremovexx in the reply-to address) --+


Pablo Sanchez

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

>>>>> "Gary" == Gary Kuever <gku...@ix.netcom_remove_this_.com> writes:
Gary>
Gary> 2 - On the minus side of row locking, it encourages developers to continue
Gary> with a record oriented mentality instead of a set mentality. An example is
Gary> Oracle's row_id. I've seen this on every Oracle project, i.e. the shortcut
Gary> is taken instead of thinking the set operation through properly.
Gary>

... and of course you end up with crappier performance with
the row at a time vs set based approach.

I was doing a bench for a customer and they were doing some
nightly processing... looking at their code, I saw that one
section was taking 2.5 hours. I rewrote it and that same
section took a couple of minutes.

What was the magic? Converted their row at a time logic to
set based logic.
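The kind of conversion Pablo describes can be shown on a toy example — this is not his customer's code (the 2.5-hour job isn't reproduced here), just the two styles side by side in Python over an in-memory SQLite database, with a made-up `orders` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, taxed REAL)"
)
conn.executemany(
    "INSERT INTO orders (id, amount) VALUES (?, ?)",
    [(i, 100.0) for i in range(1000)],
)
conn.commit()

# Row-at-a-time: fetch every row, then issue one UPDATE per row.
rows = conn.execute("SELECT id, amount FROM orders").fetchall()
for row_id, amount in rows:
    conn.execute("UPDATE orders SET taxed = ? WHERE id = ?",
                 (amount * 1.1, row_id))
conn.commit()

# Set-based: one statement over the whole set; the server does the loop.
conn.execute("UPDATE orders SET taxed = amount * 1.1")
conn.commit()

# Both forms produce the same result; the set-based one is a single
# parse and plan with no per-row round trips.
assert conn.execute(
    "SELECT COUNT(*) FROM orders WHERE taxed IS NULL"
).fetchone()[0] == 0
```

Against a real client/server RDBMS the per-row version also pays a network round trip per statement, which is where hours-to-minutes speedups like the one described tend to come from.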

Pablo Sanchez

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

>>>>> "Anthony" == Anthony Mandic <no_s...@agd.nsw.gov.au> writes:
Anthony>
>> As far as Pablo's comments, I saw bait, and that silly smile with a
>> balloon on his head.
Anthony>
Anthony> Yeah, the pretzelhead fools everyone.
Anthony>

It's always the last thing they see before BOOM! :-)

Pablo Sanchez

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

>>>>> "Johan" == Johan Andersson <j...@carmenta.se> writes:
Johan>
Johan> In article <yut90ub...@mew.corp.sgi.com>, pa...@sgi.com says...

>>
>>>>>>> "Paul" == Paul Brown <pbr...@triplerock.Berkeley.EDU> writes:
>>
Paul> But the practical reality is
Paul> that people don't always do 'the right thing'. This is the
Paul> 'hard lesson' Gray refers to.
>>
>> Unfortunately what you say is true... this is why Oracle's
>> versioning works so well... it appeals to the general
>> masses who don't do things right. I guess therein lies opportunity.
>>
Johan> A reflection from a member of the 'general masses'...

:-)

Johan> Why is Oracle's versioning bad?

I didn't say it was bad, what I'm saying is that this allows
for sloppy programming. When applications are done right,
there's no need to do the versioning.

Johan> Being able to get the response I would have gotten
Johan> had my query been instantaneous, without needing to
Johan> fear read locks or other clients changes during the
Johan> time my query is running, is something I consider a
Johan> 'good thing'.

I understand what you're saying... look at it in the
following light:

If the total application (not just your part) was
done well, there wouldn't be a need for the
versioning. Also, in that case, you could use
Informix/Oracle/Sybase to do the work with no
issues. Instead, IMHO, versioning allows sloppy
programming. It allows weenies like me to stay in
business because when the performance problems come,
I know where to look. I guess that's okay... but
some times I like to be altruistic and educate
folks to help themselves. My approach must suck
because I have a knack of making folks rabid.

Johan> With regards to row/page locks. This all boils down
Johan> to concurrency, the finer granularity the better
Johan> concurrency.

As I've mentioned before, if the application is finely
tuned, then it's a non-issue.

Johan> If we had value locks we would have the potential for even better
Johan> concurrency than today. This all comes at a price of course, but I would like
Johan> the option to choose, not be restricted by the technical shortcomings of the
Johan> database system.

It's not the RDBMS that has the shortcoming but rather
sloppy application writers... it's been proven with the
normalized application (TPC-C's) that row-level vs
page-level is not an issue.

David Williams

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

In article <yut67pf...@mew.corp.sgi.com>, Pablo Sanchez
<pa...@sgi.com> writes

>>>>>> "David" == David Williams <d...@smooth1.demon.co.uk> writes:
>David>
>David> page level locks give HIGHER PERFORMANCE yes BUT they reduce
>David> concurrency and hence LOWER RELIABILITY.
>
>How do you assert "reduced concurrency" when the unit of
>measure is tpmC: transactions per minute of type C's?
>
> Informix... 24,000 transactions per minute of type C
> Sybase..... 34,000 transactions per minute of type C
>
These can't have been from clients both updating rows on the same page
at the same time..

>I believe your assertion is wrong.
>
>David> most of the time people go for reliability over a little more
>David> performance. 90% of performance tuning is
>David>
>David> a) reducing disk I/O by only selecting/updating the minimum
>David> amount of data.
>
>agreed
>
>David> b) database schema / index usage.
>
>agreed
>
> c) proper hardware for both the client and the
> server
> d) configuring the server properly:
> - log on its own device (sequential writes in
> nature)
> - data on its own set of devices due to its
> mostly random access nature for OLTP
> e) proper network bandwidth and proper use of
> packages
> f) ...

--
David Williams

David Williams

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

In article <347D44...@agd.nsw.gov.au>, Anthony Mandic
<no_s...@agd.nsw.gov.au> writes

>David Williams wrote:
>>
>> Anthony Mandic writes:
>>
>> >> -- After the user exits we update all detail recs, probably by deleting
>> >> them first, and then inserting whatever the user has typed in.
>> >
>> > What an overhead, delete followed by insert. An in situ
>> > update is far more efficient. Use it wherever possible.
>> >
>> It think he means say 10 detail rows, in the screen array the user
>> updates 5 , deletes 2, inserts 3. What SQL do you generate?
>
> I remember when I started of years ago and I used to do this
> (until I learnt better). Well, since its SQL you're generating
> you should be able to do it a row (and lock) at a time.
>
But what if the user wants to be able to undo all the changes they
made since they entered the screen?

A user goes into a screen, deletes 20 rows and then wants to undo
everything. If you commit after each delete you cannot undo it;
if you don't commit you hold locks...

onstat -k, lists locks, tables, rowids, sessionids.

>> b) Someone does a kill -9 on your process when it has open locks?
>
> Same thing. You can clear it. Its not that big a deal.
>

It is.. I've had users complain the locks are held and call the system
CRAP. The users demand a tool to clear the locks. Then they

a) say "why can't the application clear the locks automatically? If
   Informix does, I don't see why your application can't."

b) ask how DBAs identify stale locks.

c) DBAs delete the wrong user's locks... then things get really
   interesting when two users get the ability to update the same rows.
   They both hit the "DO IT" key at different times and one user
   "loses" their updates. Then you get support calls when a user
   complains of bugs in the application as it "failed to update the
   data and gives no warning". You can't reproduce the bug and users
   say they can't reproduce it on demand as "it only happens
   occasionally".

d) DBAs say "why do we have to bother clearing out locks each time
   we reboot Online?" (This from a site with one crash per day for 6
   months before Informix supplied a bug fix.)

>> c) A power failure occurs whilst you have open locks?
>
> Same again. You can list all of them and generate a report.
> If the same happens to a server with server locks, how do you
> know who was working on what? In other words, you'd either
> have just one big server transaction or start worrying about
> data consistency. At leat with "soft locks" you have some
> hope of tracking them down.
>

onstat -k list locks.

>
>> A database server will cleanup locks if a client dies or when
>> restarted if the server dies.
>
> Yes, unfortunately, and you're stuck if you need to track
> down who was doing what to what/who.
>
>> YOUR CLIENT PROGRAM CANNOT GUARANTEE LOCKS ARE RELEASED IF A CLIENT
>> PROGRAM DIES.
>
> I wouldn't want it to. The 'soft lock' is a useful debugging tool
> in this very case. How do you debug your client apps otherwise
> if a user says "Oh, it just died.". Most users can't tell you
> anything useful, so you have to rely on your wits. Knowing
> where they were up to tells you where the app was when it died.

So does Informix 4GL: with the startlog() function, all errors are
trapped by the Informix runtime library and logged.

Or you can trap them yourself with WHENEVER ERROR CALL do_panic()
and a do_panic function that calls the Informix errorlog() function
to log errors.

If the error is bad enough that your application SEGVs, of course
you get a core dump.

> This feature helped me on numerous occasions in fixing a badly
> written application.
>
>> Also which is faster a
>>
>> BEGIN WORK
>> INSERT INTO lock_table..
>> COMMIT
>> (including flushing 'dirty' data to disk and transaction logging
>>
>> or the database server putting an entry into an in memory lock table?
>
> An extra insert and delete doesn't hurt. I think the overall
> advantages outweight the disadvantages. Sure its pain to have
> to delete stale ones that got left around after a crash and
> don't mark an active transaction, but then users can delete
> them themselves with a well-written app.
>
>> > Thats the reason why server-generated locks should be fast,
>> > and not held for an indefinite period. the initial select
>> > should NOT be part of the transaction. This rule should
>> > apply to all locking methods, including row level.
>> >
>> Agreed, have a little as possibly in the transaction.
>
> Do you mean have little data or just keep the transaction to as
> short as it should be?

Both, update as few rows and columns as possible and avoid selects
within a transaction.
>
>-am

--
David Williams

David Williams

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

In article <65jj2c$jh...@news.ing.nl>, Jean-Marc van Leerdam <Jean-
Marc.van...@xxremovexx.ingbank.com> writes

>
>I think, even in a set-based thinking environment, it should not be
>the database that causes these problems. I strongly support the

Correct. Sets contain tuples (rows). Sets do not contain
'pages' of tuples!!

Set based = tuple based = row based.

If you involve pages you involve the physical database design,
something the relational model is trying to abstract away.

Otherwise we'd all still be handling disk sectors and pointers
from one disk structure to another...

RELATIONAL THEORY IS TUPLE (i.e. row) BASED!!

>Jean-Marc.
>+------------------------------------------------------------+
>|Jean-Marc.van.Leerdam@| All opinions expressed are just ... |
>|ingbank.com | opinions (and my personal ones!). |
>+-- (AntiSpam:note the xxremovexx in the reply-to address) --+
>

--
David Williams

David Williams

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

In article <yut90ub...@mew.corp.sgi.com>, Pablo Sanchez
<pa...@sgi.com> writes

>>>>>> "Paul" == Paul Brown <pbr...@triplerock.Berkeley.EDU> writes:
>Paul>
>Paul> Pablo Sanchez (pa...@sgi.com) wrote:
>Paul> : >>>>> "Paul" == Paul Brown <pbr...@triplerock.Berkeley.EDU> writes:
>Paul> : Paul>
>Paul> : Paul> To quote from Jim Gray, (_Transaction_Processing_:_Concepts_and_
>Paul> : Paul> Techniques_ pp. 420-21);
>Paul> : Paul>
>Paul> :
>Paul> : Unless something has changed, aren't you still an Informix
>Paul> : employee (or employed in some fashion by them)? If so, then
>Paul> : of course you're going to post supporting literature...
>Paul>
>Paul> Yes. I am an Informix Employee. You're the keeper of the
>Paul> Sybase FAQ. Religon counts for something.
>
>I'm the keeper of the Sybase FAQ however I'm also aware
>of the benefits in certain areas of Informix and Oracle.

I'm the new keeper of the Informix FAQ and I agree with almost
all of your points (although I hate Oracle). Sybase is a close no. 2
in my book!

>Heck, I'm not blinded by religion. I have posted to
>c.d.s. with my gripes regarding Sybase. You can search
>dejanews for 'em:
>
> o isolation level 0 implementation - crap
> o ...
>
>Anyway, I may be the maintainer but I try not to live my
>life with a pair of rosey glasses.
>
>Ask how Oracle does serializability...
>

Yeach!

>Paul> When I'm simply trying to contribute to a technical debate, and
>Paul> don't want my affiliations to obscure my contributions, I
>Paul> post from UCB. I said *nothing* positive or negative about
>Paul> anyone. If you like, I can post references to the IBM
>Paul> research (some of which is also attributed to Jim Gray) which
>Paul> argues for your case.
>
>To be fair, that would have complemented your initial post well.
>
>Paul> The irony here is that I actually agree with you!
>
>I know, I know... I was hoping for more static from ya... :-)
>
>Paul> A well
>Paul> designed schema should require only page granularity. (Another
>Paul> important topic unbroached in this thread concerns
>Paul> lock escallation strategies.)
>
>Which Sybase *used* to suck at... *finally* System 11 handles
>it right....
>

>Paul> But the practical reality is
>Paul> that people don't always do 'the right thing'. This is the
>Paul> 'hard lesson' Gray refers to.
>
>Unfortunately what you say is true... this is why Oracle's
>versionings works so well... they appeal to the general
>masses who don't do things right. I guess therein lies opportunity.

Agreed, I think we both dislike Oracle's approach!

PS As a reader of c.d.s perhaps we should pool our OS knowledge -
after all, Informix/Sybase both demand similar things from an OS....

--
David Williams

Pablo Sanchez

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to d...@smooth1.demon.co.uk

>>>>> "David" == David Williams <d...@smooth1.demon.co.uk> writes:
David>
>> Informix... 24,000 transactions per minute of type C
>> Sybase..... 34,000 transactions per minute of type C
>>
David> These can't have been from clients both updating rows
David> on the same page at the same time..

Of course they are... see my point?

Now mind you, I believe that Informix will publish numbers
greater than those above at some point and so will Sybase
and so will Oracle... see what I'm getting to?

Pablo Sanchez

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

>>>>> "David" == David Williams <d...@smooth1.demon.co.uk> writes:
David>
David> RELATIONAL THEORY IS TUPLE (i.e. row) BASED!!
David>

Well, sorta... that is neither here nor there though when it
comes to lock management.

Pablo Sanchez

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

>>>>> "David" == David Williams <d...@smooth1.demon.co.uk> writes:
David>
David> I'm the new keeper of the Informix FAQ and I agree with almost

Welcome... you should consider joining the FAQ-maintainers
list. Let me know if you need this information (pa...@sgi.com).

David> all of your points (altough I hate Oracle).

I don't hate Oracle. I do call it The Beast. I do not like
their marketing... it's really cut throat. I cannot have an
opinion on one RDBMS over another... we're a hardware
company and if your shop wants Informix & SGI, I'm there
with ya... if your shop wants Sybase ... anyway, you get my
drift.

>>
>> Ask how Oracle does serializability...
>>

David> Yeach!

Yeach is right! Can you say *table locks*?

David> PS As a reader of c.d.s perhaps we should pool our OS knowledge
David> after all, Informix/Sybase both demand similar things from an OS....

Certainly. I want to learn more about Informix. I would be
a fool to claim that I know it well. Please feel free to
contact me when you get a chance... I'm on vaca this week
(can't ya tell? :-)) and next week is hell (back in Mtn
View) and the week after next is (sh!) s'more Oracle
training.

Pablo Sanchez

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

>>>>> "Anthony" == Anthony Mandic <no_s...@agd.nsw.gov.au> writes:
Anthony>
Anthony> Those soiled nappies can make a real mess,
Anthony> beats a wet towel anyday ;-)
Anthony>

ha ha ha ha!!!

Pablo Sanchez

unread,
Nov 27, 1997, 3:00:00 AM11/27/97
to

>>>>> "Anthony" == Anthony Mandic <no_s...@agd.nsw.gov.au> writes:
Anthony>
Anthony> Except in the case where one user has one row locked with RLL
Anthony> and another user wants a summary of all rows including the
Anthony> locked one. Then you have to introduce further mechanisms to
Anthony> overcome this. But no lock, implies no problem and no further
Anthony> complexity.

I believe that this is the case when one asks for
serializability. The set should not be interrupted. An
interesting question for folks to ponder (as well as myself)
is what Informix does when you mix a user requesting
serializability with say a user requesting consistent
reads. What happens with lock conflicts?

Anthony Mandic

unread,
Nov 28, 1997, 3:00:00 AM11/28/97
to

Pablo Sanchez wrote:

> ... and of course you end up with crappier performance with
> the row at a time vs set based approach.

Yeah, tell me about it (unless you're Jeff Wong, in
which case shut the hell up for once!).

> I was doing a bench for a customer and they were doing some
> nightly processing... looking at their code, I saw that one
> section was taking 2.5 hours. I rewrote it and that same
> section took a couple of minutes.
>
> What was the magic? Converted their row at a time logic to
> set based logic.

I have to put up with a 3rd party product on one of
the servers I administer that manages the department
employees. Its payroll processing takes over 7 hours
now just to process 4000 employees. Guess what it's
doing? Yep, row at a time between a client and the
server. They're trying to tell me that the network
is too slow etc. etc. etc. Part of the problem is
that it's I/O bound on its updates (the payroll
transaction table has 11 indexes on it!!! Can anyone
say NORMALISATION?). I'm trying to get them to
review their schema design and rewrite their code
as a stored procedure that runs on the server. It
should run in well under an hour. Of course this
means that it's no longer compatible with the other
databases they support. But what the hell. The other
strange thing it does is selects for update. Why?
I don't know, it runs in single user mode. No real
need to generate 60,000 redundant locks.
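The row-at-a-time vs set-based difference can be sketched with sqlite3 against a toy 4000-row payroll table (schema and the 0.7 "net" factor are invented): both approaches compute the same result, but the second is one statement instead of 4000 round trips.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (emp_id INTEGER PRIMARY KEY, gross REAL,"
             " net_row REAL, net_set REAL)")
conn.executemany("INSERT INTO payroll (emp_id, gross) VALUES (?, ?)",
                 [(i, 1000.0 + i) for i in range(4000)])
conn.commit()

# Row-at-a-time: one fetch plus one UPDATE per employee -- 4000 statements.
for emp_id, gross in conn.execute("SELECT emp_id, gross FROM payroll").fetchall():
    conn.execute("UPDATE payroll SET net_row = ? WHERE emp_id = ?",
                 (gross * 0.7, emp_id))

# Set-based: the same calculation as a single statement the server can optimise.
conn.execute("UPDATE payroll SET net_set = gross * 0.7")
conn.commit()

mismatches = conn.execute(
    "SELECT count(*) FROM payroll WHERE net_row <> net_set").fetchone()[0]
print(mismatches)  # the two approaches agree row for row
```

On a real client/server setup the first loop also pays a network round trip per row, which is where multi-hour batch runs come from.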

-am

Anthony Mandic

unread,
Nov 28, 1997, 3:00:00 AM11/28/97
to

Pablo Sanchez wrote:

>
> > Anthony Mandic writes:
> Anthony>
> >> As far as Pablo's comments, I saw bait, and that silly smile with a
> >> baloon on his head.
> Anthony>
> Anthony> Yeah, the pretzelhead fools everyone.
>
> It's always the last thing they see before BOOM! :-)

Those soiled nappies can make a real mess,


beats a wet towel anyday ;-)

-am

Anthony Mandic

unread,
Nov 28, 1997, 3:00:00 AM11/28/97
to

David Williams wrote:
>
> Anthony Mandic writes:
> >David Williams wrote:
> >
> >> I think he means say 10 detail rows, in the screen array the user
> >> updates 5 , deletes 2, inserts 3. What SQL do you generate?
> >
> > I remember when I started off years ago and I used to do this
> > (until I learnt better). Well, since it's SQL you're generating
> > you should be able to do it a row (and lock) at a time.
> >
> But what if the user wants to be able to undo all the changes they
> made since they entered the screen?

Er? Perhaps I didn't make myself clear enough. I'm not advocating
that you do the changes immediately. I should have said after the
user commits, you process the changes. Doing it concurrently is a
nightmare. Of course, there's no easy way to roll back after the
commit.

> Users goes into screen, deletes 20 rows and then want to undo
> everything. If you commit after each delete you cannot undo it,
>> if you don't commit you hold locks...

[snip]

> >> b) Someone does a kill -9 on your process when it has open locks?
> >
> > Same thing. You can clear it. Its not that big a deal.
> >
> It is.. I've had users complain the locks are held and call the system
> CRAP. The users demand a tool to clear the locks. Then they
>
> a) say why can't the application clear the locks automatically. If
> informix does I don't see why your application can't.

My last application did.

>> b) how do DBAs identify stale locks.

Check against existing user connections. I had a stored procedure
to do this. If the user responsible for the lock isn't connected
or has a different process id (or whatever identifier you'd use)
then the lock is stale. My app would have done the checking if the
> user reconnected, otherwise privileged users had another app that
> would manage it. None of this is really that hard to do and there
wasn't much need for DBA intervention. Although I'll admit I used
to monitor the number of locks from time to time to check how busy
the users really were, etc.

> c) DBAs delete the wrong users locks...then things get really
> interesting when two users get the ability to update the same rows.
> They both hit the "DO IT" key at different times and one user
> "loses" their updates. Then you get support calls when a user
> complains of bugs in the application as it "failed to update the
> data and gives no warning". You can't reproduce the bug and users
> say they can't reproduce it on demand as "it only happens
> occasionally".

Tisk, tisk. This would happen if it were just deleted manually.
The way I used to do it was thru a stored procedure that did all
the correct checking. (Either that or I'd run the lock checking
app myself.)

> d) DBA's say "why do we have to bother clearing out locks each time
> we reboot Online? (This from a site with one crash per day for 6
> months before Informix supplied a bug fix).

Data consistency/integrity. End users could pick up where they
left off by finding their last lock. The app also stored current
info on the client side (via files). This minimised data loss
(critical when it's a paperless office. Data came in over the
phone).

> >> YOUR CLIENT PROGRAM CANNOT GUARANTEE LOCKS ARE RELEASED IF A CLIENT
> >> PROGRAM DIES.
> >
> > I wouldn't want it to. The 'soft lock' is a useful debugging tool
> > in this very case. How do you debug your client apps otherwise
> > if a user says "Oh, it just died.". Most users can't tell you
> > anything useful, so you have to rely on your wits. Knowing
> > where they were up to tells you where the app was when it died.
>
> So does Informix 4GL, with startlog() function; all errors are
> trapped by the Informix runtime library and logged.

Good, that's the correct approach. Except that it only relates
to the server side of things. You'd have to rely on core dumps
(if on UNIX) and/or a debugger to debug a client app. Note that
there are two basic scenarios, the app failing or the app working
but doing the wrong thing. Which is harder to resolve?

> >> Agreed, have a little as possibly in the transaction.
> >
> > Do you mean have little data or just keep the transaction to as
> > short as it should be?
> Both, update as few rows and columns as possible and avoid selects
> within a transaction.

Yes to the latter. I think the former really depends on the business
model. Sometimes it just can't be helped (or can it?).

-am

David Williams

unread,
Nov 28, 1997, 3:00:00 AM11/28/97
to

In article <347E2D...@agd.nsw.gov.au>, Anthony Mandic
<no_s...@agd.nsw.gov.au> writes
>David Williams wrote:
>>
>> Anthony Mandic writes:
>> >David Williams wrote:
>> >
>> >> I think he means say 10 detail rows, in the screen array the user
>> >> updates 5 , deletes 2, inserts 3. What SQL do you generate?
>> >
>> > I remember when I started off years ago and I used to do this
>> > (until I learnt better). Well, since it's SQL you're generating
>> > you should be able to do it a row (and lock) at a time.
>> >
>> But what if the user wants to be able to undo all the changes they
>> made since they entered the screen?
>
> Er? Perhaps I didn't make myself clear enough. I'm not advocating
> that you do the changes immediately. I should have said after the
> user commits, you process the changes. Doing it concurrently is a
> nightmare. Of course, there's no easy way to roll back after the
> commit.
>
So if the user deletes 1000 rows, where do you store them? Not in the
screen array because they are visible....

what if the user deletes, then inserts a new row in the same entry in
the array?


>> Users goes into screen, deletes 20 rows and then want to undo
>> everything. If you commit after each delete you cannot undo it,
>> if you don't commit you hold locks...
>

>[snip]


>
>> >> b) Someone does a kill -9 on your process when it has open locks?
>> >
>> > Same thing. You can clear it. Its not that big a deal.
>> >
>> It is.. I've had users complain the locks are held and call the system
>> CRAP. The users demand a tool to clear the locks. Then they
>>
>> a) say why can't the application clear the locks automatically. If
>> informix does I don't see why your application can't.
>

> My last application did.


>
>> b) how do DBAs identify stale locks.
>

> Check against existing user connections. I had a stored procedure

How? Session ids can wrap and be reused...


> to do this. If the user responsible for the lock isn't connected
> or has a different process id (or whatever identifier you'd use)
> then the lock is stale. My app would have done the checking if the

Process ids wrap. User ids are not reliable since users can have
>1 connection.... Only the server can decide for certain if a lock is
stale...

> user reconnected, otherwise privileged users had another app that
> would manage it. None of this is really that hard to do and there
> wasn't much need for DBA intervention. Although I'll admit I used
> to monitor the number of locks from time to time to check how busy
> the users really were, etc.
>

>> c) DBAs delete the wrong users locks...then things get really
>> interesting when two users get the ability to update the same rows.
>> They both hit the "DO IT" key at different times and one user
>> "loses" their updates. Then you get support calls when a user
>> complains of bugs in the application as it "failed to update the
>> data and gives no warning". You can't reproduce the bug and users
>> say they can't reproduce it on demand as "it only happens
>> occasionally".
>

> Tisk, tisk. This would happen if it were just deleted manually.
> The way I used to do it was thru a stored procedure that did all
> the correct checking. (Either that or I'd run the lock checking
> app myself.)
>

>> d) DBA's say "why do we have to bother clearing out locks each time
>> we reboot Online? (This from a site with one crash per day for 6
>> months before Informix supplied a bug fix).
>

> Data consistency/integrity. End users could pick up where they
> left off by finding their last lock. The app also stored current
> info on the client side (via files). This minimised data loss
> (critical when its a paperless office. Data came in over the
> phone).

the client cannot guarantee to remember a stale lock. What happens
if the power supply fails on the client (or the cleaner pulls the
plug to plug in her Hoover - it has happened)?


>
>> >> YOUR CLIENT PROGRAM CANNOT GUARANTEE LOCKS ARE RELEASED IF A CLIENT
>> >> PROGRAM DIES.
>> >
>> > I wouldn't want it to. The 'soft lock' is a useful debugging tool
>> > in this very case. How do you debug your client apps otherwise
>> > if a user says "Oh, it just died.". Most users can't tell you
>> > anything useful, so you have to rely on your wits. Knowing
>> > where they were up to tells you where the app was when it died.
>>

Debug logs,screen messages.

>> So does Informix 4GL, with startlog() function; all errors are
>> trapped by the Informix runtime library and logged.
>

> Good, thats the correct approach. Except that it only relates
> to the server side of things. You'd have to rely on core dumps
> (if on UNIX) and/or a debugger to debug a client app. Note that
> there are two basic scenarios, the app failing or the app working
> but doing the wrong thing. Which is harder to resolve?
>

Either is easy with debug logs. Switch on, get a trace, debug it.
Most debugging can be narrowed down to one screen operation by the
user; plaster it with debug and that's it.

>> >> Agreed, have a little as possibly in the transaction.
>> >
>> > Do you mean have little data or just keep the transaction to as
>> > short as it should be?
>> Both, update as few rows and columns as possible and avoid selects
>> within a transaction.
>

> Yes to the latter. I think the former really depends on the business
> model. Sometimes it just can't be helped (or can it?).

Rows can, columns can (prepare updates) but I usually do tables rather
than columns.
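A sketch of the "prepare updates for just the changed columns" idea (sqlite3; the customer table is invented). The column names come from application code, never from user input, which is what makes building the statement text safe; the values still go through bind parameters.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer"
             " (id INTEGER PRIMARY KEY, name TEXT, phone TEXT, fax TEXT)")
conn.execute("INSERT INTO customer VALUES (1, 'Acme', '555-0001', '555-0002')")

def update_changed(conn, cust_id, changes):
    """Build an UPDATE naming only the columns the user actually changed."""
    cols = sorted(changes)  # column names are app-controlled, not user input
    sql = ("UPDATE customer SET " +
           ", ".join("%s = ?" % c for c in cols) +
           " WHERE id = ?")
    conn.execute(sql, [changes[c] for c in cols] + [cust_id])

update_changed(conn, 1, {"phone": "555-9999"})
row = conn.execute("SELECT name, phone, fax FROM customer WHERE id = 1").fetchone()
print(row)
```

Touching fewer columns keeps the log records smaller and avoids needlessly disturbing indexes on unchanged columns.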

>
>-am

--
David Williams

Anthony Mandic

unread,
Nov 28, 1997, 3:00:00 AM11/28/97
to

Jean-Marc van Leerdam wrote:

> Anthony Mandic wrote:
>
> > Only if that lock was held for longer than it ought to have been.
>
> Regardless, I think one should NOT be troubled by things being done to
> records that are of no concern to that user. With PLL you can (and
> will) be troubled in such a way.

Except in the case where one user has one row locked with RLL
and another user wants a summary of all rows including the
locked one. Then you have to introduce further mechanisms to
overcome this. But no lock implies no problem and no further
complexity.

> I think, even in a set-based thinking environment, it should not be
> the database that causes these problems. I strongly support the

> availability of RLL (even if sometimes PLL would be a performance
> advantage, I still want the option of RLL without cumbersome tricks or
> fake identifiers).

I agree.

> > These should be fast. If there are a lot of them in one hit, I'd
> > consider rethinking the approach used to design the app that
> > does this. The implication here is that the app may be far too
> > complex, thus being more vulnerable to problems. Keeping it
> > simple never hurts.
>
> Just 'blaming' the app's complexity is no real solution either; I think we
> should see the app's complexity and way of operating as a given.
> Just because a RDBMS acts on sets and likes set-like operations, that
> doesn't mean the customer does the same or should be trained to do so
> (that sets us back some 30 years, when we still wanted to adapt the
> users to the programs instead of the other way around).

I think at this point I'd start to think of the meaning of
relational. You could select one row initially and then
sets of related rows.

> IMHO a RDBMS should let the user operate on rows, completely
> independent of the status and concurrent use of 'nearby' rows by other
> users.
> That implies RLL in my opinion.
>
> Also, in a set-based thinking environment: why is some element
> 'closer' to another element than another element, and why does
> modifying one element influence the success of modifications to SOME
> other elements but not to ALL other elements?
> Elements in a set should IMO be independent objects.

I'd think relational again. Thus elements in a set are
independent but related.

-am

Anthony Mandic

unread,
Nov 28, 1997, 3:00:00 AM11/28/97
to

David Williams wrote:
>
> Anthony Mandic writes
> >David Williams wrote:
> >
> >> But what if the user wants to be able to undo all the changes they
> >> made since they entered the screen?
> >
> > Er? Perhaps I didn't make myself clear enough. I'm not advocating
> > that you do the changes immediately. I should have said after the
> > user commits, you process the changes. Doing it concurrently is a
> > nightmare. Of course, there's no easy way to roll back after the
> > commit.
> >
> So if the user deletes 1000 rows, where do you store them. Not in the
> screen array because they are visible....
>
> what if the user deletes, then inserts a new row in the same entry in
> the array?

Well, now we're entering the territory of screen/form display
design. You'd have to decide whether you'd want to display the
deleted rows by having them marked as deleted or not. Then the user
would be at liberty to undelete or alter etc. A delete/insert
is really an update - however this depends on the design of
the key(s) and the business rules. In an audited system you may
not be allowed to physically delete a row once it's been
added; you can only mark it as inactive (or whatever) and
record details like who and when.
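One way to sketch the "journal the screen changes, apply them at commit" design (sqlite3, invented table names): edits accumulate in a client-side list, undo is just clearing the list, and only the DO IT key turns the journal into one short transaction, so no server locks are held while the user edits.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE detail (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO detail VALUES (?, ?)", [(1, 'a'), (2, 'b'), (3, 'c')])
conn.commit()

pending = []                       # the screen array's change journal
pending.append(("DELETE FROM detail WHERE id = ?", (2,)))
pending.append(("UPDATE detail SET val = ? WHERE id = ?", ('A', 1)))

pending.clear()                    # user hits "undo": nothing touched the server
pending.append(("INSERT INTO detail VALUES (?, ?)", (4, 'd')))

# "DO IT": apply the journal in one short transaction.
with conn:                         # commits on success, rolls back on error
    for sql, args in pending:
        conn.execute(sql, args)

print(conn.execute("SELECT count(*) FROM detail").fetchone()[0])
```

The trade-off David raises still applies: between the read and the commit another user may have changed the same rows, so the journal's UPDATEs need re-check clauses or version columns in a multi-user system.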

> >> b) how do DBA indentify stale locks.
> >
> > Check against existing user connections. I had a stored procedure
> How? Session ids can wrap and be reused...

True, but you'd have to have a very active system for
them to wrap so quickly. But you have more than one id to
help - the OS's process id and the server's user process id.
What's the likelihood of them both being the same ones again for
the same user? I'd never seen the problem come up. But this isn't
to say that it can't happen.

> > Data consistency/integrity. End users could pick up where they
> > left off by finding their last lock. The app also stored current
> > info on the client side (via files). This minimised data loss
> > (critical when its a paperless office. Data came in over the
> > phone).
> the client cannot guarantee to remember a stale lock. What happens
> if the power supply fails on the client (or the cleaner pulls the
> plug to plug in her Hoover - it has happened)?

Yep, that's what was being guarded against. The app could have
been termed overly retentive for this reason. It had to be
as failsafe as possible. If a connection failed the app was
smart enough to reconnect when possible and
continue. If it died it could recover from its client-side
created files.
this didn't mean that we became complacent.

> Debug logs,screen messages.

Yep, anything and everything.

> >> So does Informix 4GL, with startlog() function; all errors are
> >> trapped by the Informix runtime library and logged.
> >
> > Good, that's the correct approach. Except that it only relates
> > to the server side of things. You'd have to rely on core dumps
> > (if on UNIX) and/or a debugger to debug a client app. Note that
> > there are two basic scenarios, the app failing or the app working
> > but doing the wrong thing. Which is harder to resolve?
> >
> Either is easy with debug logs. Switch on, get a trace, debug it.
> Most debugging can be narrowed down to one screen operation by the
> user, plaster with debug and that 's it.

This is a point-in-time issue. But you have to work back thru
the app to work out how it got into that state and how the
data became incorrect. Oftentimes it was the result of
some hairy code in a module written by a programmer
who either didn't have a proper grasp of what was required or
didn't care.

Of course, we've drifted off topic now. But while we're here,
how many others have designed apps to guard against problems
and what measures did they take?

-am

Jean-Marc van Leerdam

unread,
Nov 28, 1997, 3:00:00 AM11/28/97
to

Pablo Sanchez <pa...@sgi.com> wrote:

>>>>>> "Gary" == Gary Kuever <gku...@ix.netcom_remove_this_.com> writes:
>Gary>
>Gary> 2 - On the minus side of row locking, it encourages developers to continue
>Gary> with a record oriented mentality instead of a set mentality. An example is
>Gary> Oracle's row_id. I've seen this on every Oracle project, i.e. the shortcut
>Gary> is taken instead of thinking the set operation through properly.
>Gary>

>... and of course you end up with crappier performance with
>the row at a time vs set based approach.
>
>I was doing a bench for a customer and they were doing some
>nightly processing... looking at their code, I saw that one
>section was taking 2.5 hours. I rewrote it and that same
>section took a couple of minutes.
>
>What was the magic? Converted their row at a time logic to
>set based logic.

Sure, that is the way to go. But in this set based logic it still
would be nice if just the involved members of the set get locked when
updating the set, and not all other members in the same table that
just happen to be in the neighbourhood.
That is the point RLL fans try to make, not that they want to process
10,000 rows in a 1,000,000 row table one at a time.
If I update all rows with an odd key, I do not want to block others
that are only interested in even keys (for example).
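The odd/even point can be illustrated with a toy lock-footprint calculation, assuming a made-up layout of 8 rows per page: a row-level lock covers exactly the rows touched, while a page-level lock drags in every neighbour sharing a page.

```python
ROWS_PER_PAGE = 8                          # invented layout for illustration
keys = range(1, 1001)                      # a 1,000-row table
updated = [k for k in keys if k % 2 == 1]  # update every odd key

rll_locked = set(updated)                  # row locks: exactly the rows touched

touched_pages = {(u - 1) // ROWS_PER_PAGE for u in updated}
pll_locked = {k for k in keys              # page locks: all rows on touched pages
              if (k - 1) // ROWS_PER_PAGE in touched_pages}

# Even-keyed rows blocked purely because they share a page with an odd key:
even_blocked = [k for k in pll_locked - rll_locked if k % 2 == 0]
print(len(rll_locked), len(pll_locked), len(even_blocked))
```

Since every page here holds both odd and even keys, the page-lock footprint is the whole table, which is exactly Jean-Marc's complaint about "neighbourhood" rows.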

I think we actually don't disagree; it's just that some people
want RLL because of their way of thinking (and it's that mindset that
the PLL defenders are arguing against in this and the previous
thread), while others see valid arguments in favour of RLL in a well
designed application (which IMO are not refuted by the PLL defenders
in the current threads; better yet, they are agreed on).

In my opinion, to summarize the current status quo:

1. We all are against a row-at-a-time approach to application design
2. We all see PLL schemes perform better than RLL schemes
(not going into the reasons why...)
3. We all see RLL advantages in that it gives a better relational/
   setwise granularity with fewer unintended dependencies between rows
   in a table.

That will conclude my contribution to this thread (unless ... ;-)

Tim Eyres

unread,
Nov 28, 1997, 3:00:00 AM11/28/97
to

Pablo Sanchez wrote:
>
> >>>>> "David" == David Williams <d...@smooth1.demon.co.uk> writes:
> David>
> >> Informix... 24,000 transactions per minute of type C
> >> Sybase..... 34,000 transactions per minute of type C
> >>
> David> These can't have been from clients both updating rows
> David> on the same page at the same time..
>
> Of course they are... see my point?
>
> Now mind you, I believe that Informix will publish numbers
> greater than those above at some point and so will Sybase
> and so will Oracle... see what I'm getting to?

That it's all a load of marketing hype? To put it politely....

Tim Eyres

luc van der veurst

unread,
Nov 28, 1997, 3:00:00 AM11/28/97
to

Scott C. Gray <ickysp...@voicenet.com> wrote:
: Derek Meyer wrote:
: >
: > Sybase has plans to introduce row level locking in version 12
: > because applications actually do need it.

: Actually, I could be wrong, but I believe the RLL release is
: slated to be 11.9 (but, then again, that could be the
: Java VM release I'm thinking of).

Release 11.9 contains RLL and was given to a limited number of
software developers. That's what I heard at the ISUG/Powersoft
conference in Vienna. It wasn't certain yet what the number of the
release will be once it becomes GA, so that could be 12. Of
course the number doesn't matter, as long as the RLL feature that
some people are waiting for is there :-).

I think that the final release is scheduled for the first
quarter (or half, don't remember) of 1998.

It will be interesting to do a survey after x number of months
to see how many current sybase-sql-server/ase users have
started to use the feature.

Since the server's internal page structure has to be changed to
use RLL, the data has to be restructured. This means that we
probably will only implement RLL where we think it will benefit
our applications.


Luc.

______________________________________________________________________
Luc Van der Veurst ISUG Enhancements Co-Chair
Academic Hospital, VUB http://www.isug.com
Laarbeeklaan 101, 1090 Brussels, Belgium
32 - 2 477 69 80 Next ISUG Conference :
lu...@az.vub.ac.be Sydney, Australia March 1998

David Williams

unread,
Nov 28, 1997, 3:00:00 AM11/28/97
to

In article <yut4t4z...@mew.corp.sgi.com>, Pablo Sanchez
<pa...@sgi.com> writes
>>>>>> "Steve" == Steve Cogorno <cog...@netcom.com> writes:
>Steve>
>Steve> You have to admit that page locks *are* more efficient than row locks. So
>Steve> for performance reasons, you may not want to use a row lock all the time.
>Steve> "Data page only" is a combination between the two: the server locks
>entire
>Steve> pages, but INDEX pages get row-level locks.
>Steve> --
>
>Hence the beauty that Informix and Oracle offer: choice.
>Not that I agree that for row-level vs. page-level it's
>necessary (for properly tuned app's) but choice is
>important.
>
>Since I slammed Sybase on dirty reads I'll slam Informix and
>Oracle for not having enough knobs to tune shared memory:
>
> o partitionable data buffer
> o binding objects to said partitions
>
>see? I'm equal opportunity slammer!!!

Online 7.3

> * Memory resident tables


but I still question why it is needed; surely LRU is best.

I can pin a table/index but remember

a) most database accesses go to large tables
b) pinning a table/index will mean most of the memory is wasted
as generally only a small percentage of rows are accessed within
a given time period and the memory is better used caching pages
that will be reused.

--
David Williams

ro...@candle.pha.pa.us

unread,
Nov 28, 1997, 3:00:00 AM11/28/97
to

In comp.databases.informix Anthony Mandic <no_s...@agd.nsw.gov.au> wrote:
: Joel Garry wrote:
: No, I've been implying that "read for update" or "select with lock"

: or whatever its called is the wrong approach. The read/select
: should not be part of the real transaction. They could be held
: indefinitely, in theory. The real transaction is the
: insert/update/delete.
: These should be fast. If there are a lot of them in one hit, I'd

: consider rethinking the approach used to design the app that
: does this. The implication here is that the app may be far too
: complex, thus being more vulnerable to problems. Keeping it
: simple never hurts.

Let's suppose an order-entry app, with a customer table that has
row-level locking, and an order table with page-level locking.

Why can't the app do a SELECT FOR UPDATE on the customer table for the
requested customer, do a non-locking SELECT on the order table, then
UPDATES on the order table, then release the lock on the customer table?

This would seem to be the best of both worlds, with row-level locking
overhead only on the table that needs it, and it is kept while the user
is browsing the order table.
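The sequence Bruce describes can be sketched in Python with sqlite3. One loud caveat: sqlite3 has no SELECT ... FOR UPDATE and only locks the whole database, so BEGIN IMMEDIATE merely stands in for the row lock on the customer table; the schema and ids are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage txns by hand
conn.executescript("""
CREATE TABLE customer (id INTEGER PRIMARY KEY, balance REAL);
CREATE TABLE orders   (id INTEGER PRIMARY KEY, cust_id INTEGER, amount REAL);
INSERT INTO customer VALUES (42, 0.0);
INSERT INTO orders   VALUES (1, 42, 10.0);
""")

# 1. Lock the one customer row.  A server with RLL would run
#    "SELECT ... FROM customer WHERE id = 42 FOR UPDATE" here; sqlite
#    only locks whole databases, so BEGIN IMMEDIATE stands in for it.
conn.execute("BEGIN IMMEDIATE")

# 2. Non-locking browse of the order table (page-locked on a real server).
rows = conn.execute("SELECT id, amount FROM orders WHERE cust_id = 42").fetchall()

# 3. Short burst of updates, then the commit releases the customer lock.
conn.execute("UPDATE orders SET amount = amount + 1 WHERE cust_id = 42")
conn.execute("UPDATE customer SET balance = 11.0 WHERE id = 42")
conn.execute("COMMIT")

print(conn.execute("SELECT balance FROM customer WHERE id = 42").fetchone()[0])
```

The design point is Bruce's: only the customer table pays the per-row locking overhead, and the single customer lock serialises concurrent order entry for that one customer without touching anyone else's.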

Can't do this without the capability of row-level locking, and the nice
thing is you can do RLL only on the tables that need it.

How are people locking the rows while people are browsing if they use
PLL? My guess is they are using the external lock mechanisms mentioned,
like lock table with lock entries.


--
Bruce Momjian | 830 Blythe Avenue
ro...@candle.pha.pa.us | Drexel Hill, Pennsylvania 19026
+ If your life is a hard drive, | (610) 353-9879(w)
+ Christ can be your backup. | (610) 853-3000(h)

Thomas Kyte

unread,
Nov 28, 1997, 3:00:00 AM11/28/97
to

On Thu, 27 Nov 1997 19:15:40 +1000, Anthony Mandic <no_s...@agd.nsw.gov.au>
wrote:

>Pablo Sanchez wrote:
>
>> Since I slammed Sybase on dirty reads I'll slam Informix and
>> Oracle for not having enough knobs to tune shared memory:
>>
>> o partitionable data buffer
>> o binding objects to said partitions
>>
>> see? I'm equal opportunity slammer!!!

Oracle8 added named buffer caches and the ability to bind objects
(tables/indexes/partitions/clusters/etc) to them.

Because schema objects are referenced with varying usage patterns their cache
behavior may be quite different. Multiple buffer pools enable you to address
these differences. A “keep” buffer pool can be used to maintain an object in the
buffer cache, and a “recycle” buffer pool can be used to prevent an object from
taking up unnecessary space in the cache. If an object is allocated to a cache
then all blocks from that object are placed in that cache. A default cache is
always maintained for objects which have not been assigned to one of the
buffer pools.

Each buffer pool in Oracle8 is comprised of a number of working sets. A
different number of sets can be allocated for each buffer pool. All sets use the
same LRU replacement policy.
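As a rough model of the behaviour described: each object is bound to at most one named pool, and anything never assigned falls back to the default cache. The pool names follow the text; the binding API below is invented for illustration and is not Oracle's actual syntax.

```python
# Toy model of Oracle8-style named buffer pools (KEEP / RECYCLE / DEFAULT).

POOLS = {"KEEP": set(), "RECYCLE": set(), "DEFAULT": set()}

def bind(obj, pool="DEFAULT"):
    """Assign an object to exactly one pool (re-binding moves it)."""
    for members in POOLS.values():
        members.discard(obj)
    POOLS[pool].add(obj)

def pool_of(obj):
    """All blocks of an object go to its pool; unassigned -> DEFAULT."""
    for name, members in POOLS.items():
        if obj in members:
            return name
    return "DEFAULT"

bind("hot_lookup_table", "KEEP")     # keep frequently-reused blocks cached
bind("big_scan_table", "RECYCLE")    # evict one-pass scan blocks quickly
print(pool_of("orders"))             # never bound -> DEFAULT
```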

>
> It would have been more effective if you'd crossposted
> then :-)
>
>-am


Thomas Kyte
tk...@us.oracle.com
Oracle Government
Bethesda MD

http://govt.us.oracle.com/ -- downloadable utilities

----------------------------------------------------------------------------
Opinions are mine and do not necessarily reflect those of Oracle Corporation

Anti-Anti Spam Msg: if you want an answer emailed to you,
you have to make it easy to get email to you. Any bounced
email will be treated the same way i treat SPAM-- I delete it.

David Williams

Nov 28, 1997, 3:00:00 AM11/28/97
to

In article <347E6F...@agd.nsw.gov.au>, Anthony Mandic

<no_s...@agd.nsw.gov.au> writes
>David Williams wrote:
> Yep, thats what was being guarded against. The app could have
> been termed overly retentive for this reason. It had to be
> as failsafe as possible. If a connection failed the app was
> smart enough to be able to reconnect when possible and
> continue. If it died it could recover from its client-side
> created files. It didn't give up without a fight. But of course,
> this didn't mean that we became complacent.
>
And this is the CHEAPEST application that will perform the task?
Remember, most business customers want cheap systems... (every supplier
says that their system will work!!)


>> Debug logs,screen messages.
>
> Yep, anything and everything.
>
>> >> So does Informix 4GL: with the startlog() function, all errors are
>> >> trapped by the Informix runtime library and logged.
>> >
>> > Good, thats the correct approach. Except that it only relates
>> > to the server side of things. You'd have to rely on core dumps
>> > (if on UNIX) and/or a debugger to debug a client app. Note that
>> > there are two basic scenarios, the app failing or the app working
>> > but doing the wrong thing. Which is harder to resolve?
>> >
>> Either is easy with debug logs. Switch on, get a trace, debug it.
>> Most debugging can be narrowed down to one screen operation by the
>> user, plaster with debug and that 's it.
>
> This is a point in time issue. But you have to work back thru
> the app to work out how it got into that state and how the
> data became incorrect. Often times it was as the result of
> some hairily written code in a module made by a programmer
> who either didn't have a proper grasp of what was required or
> didn't care.
>
Which you have to do anyway, what's your point?

> Of course, we've drifted off topic now. But while we're here,
> how many others have designed apps to guard against problems
> and what measures did they take?
>

Not the app but the database.

- unique indexes
- not null
- we would have constraints if I had my way..

>-am

--
David Williams

Pablo Sanchez

Nov 28, 1997, 3:00:00 AM11/28/97
to

>>>>> "Tim" == Tim Eyres <tim....@genedata.com> writes:
Tim>
Tim> That its all a load of marketing hype? to put it politely....
Tim>

Well, in short, yes.

Pablo Sanchez

Nov 28, 1997, 3:00:00 AM11/28/97
to

>>>>> "David" == David Williams <d...@smooth1.demon.co.uk> writes:
David>
David>
David> Online 7.3

I thought my class was on 7.3 and it wasn't available
there... are ya sure? You have a web page I can read about
for configuration and so forth... like to see what you do in
your config file.

btw, Thomas of Oracle mailed/posted(?) that Oracle 8
will/has this.

>> * Memory resident tables
David>
David>
David> but I still question why it is needed, surely LRU is best.

Well, like always, it depends... if the object is only to be
used once, sure, LRU is definitely best. But if the object
is frequently accessed then pinning it would be the key.

David> I can pin a table/index but remember
David>
David> a) most database accesses go to large tables
David> b) pinning a table/index will mean most of the memory is wasted
David> as generally only a small percentage of rows are accessed within
David> a given time period and the memory is better used caching pages
David> that will be reused.

again it really depends on the nature of your app.
Obviously you don't want to pin all your tables...

Pablo Sanchez

Nov 28, 1997, 3:00:00 AM11/28/97
to

>>>>> "Thomas" == Thomas Kyte <tk...@us.oracle.com> writes:
Thomas>
Thomas> Oracle8 added named buffer caches and the ability to bind objects
Thomas> (tables/indexes/partitions/clusters/etc) to them.

Excellent.

Pablo Sanchez

Nov 28, 1997, 3:00:00 AM11/28/97
to

>>>>> "Jean-Marc" == Jean-Marc van Leerdam <Jean-Marc....@xxremovexx.ingbank.com> writes:
Jean-Marc>
Jean-Marc> In my opinion, to summarize the current status quo:
Jean-Marc>
Jean-Marc> 1. We all are against a row-at-a-time approach to application design
Jean-Marc> 2. We all see PLL schemes perform better than RLL schemes
Jean-Marc> (not going into the reasons why...)
Jean-Marc> 3. We all see RLL advantages in that it gives a better relational/
Jean-Marc> setwise granularity with less unintended dependencies between rows
Jean-Marc> in a table.
Jean-Marc>

I concur with your points one and two (and of course) not three.
That is, the type of granularity that you are describing
isn't fine enough such that RLL would give you a benefit.
If we were talking about something like database locking vs
PLL/RLL then I'd concur with item number three.

Now, to restate my position, I have *never* said that RLL is
not good, what I have said is that RLL vs. PLL isn't worth
all the hype that the RDBMS vendors would like you to
believe. That's my point... I argue this position using a
normalized OLTP application: TPC-C's. With the TPC-C's,
it's shown that an RDBMS doing page-level locking can
outperform an RDBMS doing row-level locking (at this point
in time*).

When it comes to performance, what I look at are two issues:

Bandwidth vs. Latency

Bandwidth is pretty easy to solve: faster network, wise use
of stored procedures, scalable hardware, appropriate
distribution of data (be it 'data' and/or 'log) on
disk... anyway, you can Throw Hardware At It [tm] for this
type of solution.

Latency is the tougher one to deal with. Whether it's how
the RDBMS implements its own mutex'ing or the application
holding locks too long or row-at-a-time processing or the
O/S not scaling with a system call's invocation ... you get
my drift... it's a tough cookie to solve because it requires
code changes: app, O/S, RDBMS

(For the record, our group is responsible for the porting of
Informix/Oracle/Sybase to our platform... I get to see the
problems involved with all three RDBMS... hence, why I can
have no opinion on which is "best").

If we have a problem where the application is holding locks
for too long, it's not going to be solved with RLL. It's
going to be solved by educating the app developers and
re-writing those transactions. Because as those who are
familiar with performance and tuning, it's a process of
refinement.

Live with this motto and you'll live a long and wonderful life:

For an OLTP app, short transactions are a Good Thing [tm].

* - TPC-C's are somewhat worthless, IMHO, because they
really are just snapshots in time and they aren't too
realistic:

o using only about 300MB out of a 4gig drive
o a >20 *minute* checkpoint

Take a look at some executive summaries in www.tpc.org

Scott C. Gray

Nov 30, 1997, 3:00:00 AM11/30/97
to

David Williams wrote:
>
> In article <yuten43...@mew.corp.sgi.com>, Pablo Sanchez
> <pa...@sgi.com> writes
> >>>>>> "Greg" == Greg <gr...@cyberramp.net> writes:
> >Greg>
> >Greg>
> >Greg> Same reason practically noone writes assembler anymore. Works great if
> >Greg> you futz with it enough. No one can afford to futz with it enough
> >Greg> though. It's just not economic to do so. Unfortunately for Sybase,
> >Greg> they're learning this the hard way.
> >Greg>
> >
> >... let's make it easy... why is Sybase's TPC-C's higher
> >than Informix? They're both OLTP? As I said, row-level vs
> >page-level is hype.
>
> page level locks give HIGHER PERFORMANCE yes BUT they reduce
> concurrency and hence LOWER RELIABILITY.

Be careful with your wording...locking granularity has nothing at
all to do with reliability (indeed, table locking vs. column locking
has the same reliability when properly implemented). It is all
simply a matter of concurrency and efficiency.

> It's a tradeoff and
> most of the time people go for reliability over a little more
> performance. 90% of performance tuning is
>
> a) reducing disk I/O by only selecting/updating the minimum
> amount of data.
> b) database schema / index usage.

Of course, it could be argued that b) is a subset of a). But
I digress :)

-scott

--
Scott C. Gray gr...@voicenet.com "my keybard is
brken"
Sybase Professional Services scott...@sybase.com
http://www.voicenet.com/~gray/sqsh.html

David Williams

Dec 1, 1997, 3:00:00 AM12/1/97
to

In article <3481D92B...@voicenet.com>, "Scott C. Gray"
<ickysp...@voicenet.com> writes

>> page level locks give HIGHER PERFORMANCE yes BUT they reduce
>> concurrency and hence LOWER RELIABILITY.
>
>Be careful with your wording...locking granularity has nothing at
>all to do with reliability (indeed, table locking vs. column locking
>has the same reliability when properly implemented). It is all
>simply a matter of concurrency and efficiency.

Most applications produce an error when a lock error occurs.

1. Queries fails
2. Database modifications produce an error and the transaction is
rolled back (few applications retry).

Hence APPLICATION RELIABILITY is reduced.

>-scott
>

--
David Williams

Scott C. Gray

Dec 1, 1997, 3:00:00 AM12/1/97
to

David Williams wrote:
>
> In article <3481D92B...@voicenet.com>, "Scott C. Gray"
> <ickysp...@voicenet.com> writes
> >> page level locks give HIGHER PERFORMANCE yes BUT they reduce
> >> concurrency and hence LOWER RELIABILITY.
> >
> >Be careful with your wording...locking granularity has nothing at
> >all to do with reliability (indeed, table locking vs. column locking
> >has the same reliability when properly implemented). It is all
> >simply a matter of concurrency and efficiency.
>
> Most applications produce an error when a lock error occurs.
>

I assume that you are referring to deadlocks. While it is true that
deadlocks may be reduced in a row vs. page locking environment, they
can still occur (don't forget we are only talking about locking...
not versioning--which can also be done in a page locked environment).

It is foolish for any application developer to fail to check for such
conditions (although, in my experience, almost everyone does fail to).
Thus, for a properly coded application, page locking does not
reduce application reliability one bit...it just increases the
likelihood that the proper error detection/recovery code will
be executed (although, in a properly developed environment, it
is possible to eliminate deadlocks anyway).
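The retry logic being described can be sketched briefly, assuming a driver that signals deadlock with a distinct exception class; DeadlockError and the attempt count below are illustrative stand-ins, not any real driver's API.

```python
# Retry a transaction instead of surfacing a deadlock error to the user.

class DeadlockError(Exception):
    """Hypothetical stand-in for a driver's deadlock/lock-conflict error."""
    pass

def run_transaction(txn, attempts=3):
    """Run txn(), retrying when the server reports a deadlock."""
    for attempt in range(attempts):
        try:
            return txn()
        except DeadlockError:
            if attempt == attempts - 1:
                raise                    # give up after the last attempt

calls = []
def flaky_txn():
    calls.append(1)
    if len(calls) < 2:
        raise DeadlockError()            # first attempt deadlocks
    return "committed"

print(run_transaction(flaky_txn))        # succeeds on the second attempt
```

A few lines of this kind are the "proper error detection/recovery code" in question; without them, every deadlock becomes a user-visible failure.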

> 1. Queries fails
> 2. Database modifications produce an error and the transaction is
> rolled back (few applications retry).
>
> Hence APPLICATION RELIABILITY is reduced.

Once again...be careful about your wording. It seems that you
are producing a false analogy along the lines of:

1. Some conditions cause the database to generate an
error.
2. Many applications don't check for these errors

Therefore, the database is not working properly.

Do not equate sloppy application development with deficiencies
in the tools being used to develop them.

Anthony Mandic

Dec 1, 1997, 3:00:00 AM12/1/97
to

David Williams wrote:

> And this is the CHEAPEST application that will perform the task?
> Remember, most business customers want cheap systems... (every supplier
> says that their system will work!!)

Well, you do get what you pay for. The app I was describing
was an inhouse app. You seldom get the source to commercial
apps or access to the developers and immediate fixes.

> >> Either is easy with debug logs. Switch on, get a trace, debug it.
> >> Most debugging can be narrowed down to one screen operation by the
> >> user, plaster with debug and that 's it.
> >

> > This is a point in time issue. But you have to work back thru
> > the app to work out how it got into that state and how the
> > data became incorrect. Often times it was as the result of
> > some hairily written code in a module made by a programmer
> > who either didn't have a proper grasp of what was required or
> > didn't care.
> >
> Which you have to do anyway, what's your point?

That's right. My point was that debugging isn't always as
trivial as checking for a dereferenced pointer with an if
statement.

> > Of course, we've drifted off topic now. But while we're here,
> > how many others have designed apps to guard against problems
> > and what measures did they take?
> >
> No the app but the database.
>
> - unique indexes
> - not null
> - we would have constriants if I had my way..

Funny you should mention this. I've just been involved
with the setting up of another 3rd party system. Had a
quick look at the schema. Almost every field allows null.
The table creation script was probably churned out auto-
matically, judging by the formatting. I'll have to have
a word to the developers tomorrow.

-am

Anthony Mandic

Dec 1, 1997, 3:00:00 AM12/1/97
to

ro...@candle.pha.pa.us wrote:

> Let's suppose an order-entry app, with a customer table that has
> row-level locking, and an order table with page-level locking.
>
> Why can't the app do a SELECT FOR UPDATE on the customer table for the
> requested customer, do a non-locking SELECT on the order table, then
> UPDATES on the order table, then release the lock on the customer table?
>
> This would seem to be the best of both worlds, with row-level locking
> overhead only on the table that needs it, and it is kept while the user
> is browsing the order table.
>
> Can't do this without the capability of row-level locking, and the nice
> thing is you can do RLL only on the tables that need it.
>
> How are people locking the rows while people are browsing if they use
> PLL? My guess is they are using the external lock mechanisms mentioned,
> like lock table with lock entries.

Yes, any approach you take is possible. How effectively it
works needs to be measured. I'd been thinking about the
issues raised in this thread over the weekend, and came
up with a new server-side locking model. Basically, it
takes the idea of the "soft lock" to create a "hierarchical
object lock". All elements participating in this lock would
be defined either thru the table creation statement or an
explicit statement to identify which column participates
(in other words, how its usually defined). Only one
lock need be generated, that being for the top element
of the hierarchy (in the above example, on the customer).
All customer-related data with a matching relational
key automatically participates in the lock. The lock
placement syntax would be the same, e.g. "select for update".
The lock would need to be explicitly released, however.
The advantage here is that only one lock is ever required.
This does away with physical row and page locks, but adds
an overhead in that the server will need to check keys
select for locking against existing locked keys (in other
words, about the same overhead as existing lock tests).
The lock granularity can be enhanced by specifying which
fields are likely to be affected for a table that participates
in this locking model. This means that you could do column
locking (someone mentioned finer locking than row level in
one post in this thread).
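The single-lock-per-hierarchy idea above can be modelled in a few lines. This is only a toy illustration of the proposal, not a claim about any real server's implementation; the class and key names are invented.

```python
# Toy model of the "hierarchical object lock": one lock on the top-level
# key implicitly covers every dependent row sharing that key, so the
# server holds exactly one lock entry per locked hierarchy.

class HierLockManager:
    def __init__(self):
        self._held = set()               # locked top-level keys

    def lock(self, key):
        """Lock the whole hierarchy rooted at key; fail if already held."""
        if key in self._held:
            return False
        self._held.add(key)
        return True

    def covers(self, key):
        """Dependent rows with a matching key inherit the lock."""
        return key in self._held

    def unlock(self, key):
        """Locks must be released explicitly in this model."""
        self._held.discard(key)

mgr = HierLockManager()
assert mgr.lock("cust-42")               # lock the customer...
assert mgr.covers("cust-42")             # ...its orders are covered too
assert not mgr.lock("cust-42")           # a second locker is refused
mgr.unlock("cust-42")
```

The trade-off is visible even in the toy: one lock entry instead of many row/page locks, but every access must test its key against the set of held hierarchy locks.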

What does everyone else think? Is this feasible?

-am

Keith Willis

Dec 1, 1997, 3:00:00 AM12/1/97
to

On Thu, 27 Nov 1997 19:31:31 +1000, Anthony Mandic
<no_s...@agd.nsw.gov.au> wrote:

>Michael Segel wrote:
>>
>> Anthony Mandic wrote:
>>
>> > I feel tempted to state the same here, but I'd rather ask why
>> > some of Informix's top programmers have jumped ship and joined
>> > Oracle? (According to what I've read recently in the trade
>> > papers)
>>
>> Who said that they were the *top* programmers. :-)
>
> The trade papers (not that they know anything 99.99% of the time).
>
>> If I remember correctly, Informix bought Illustra around that time, if
>> you catch my drift.
>
> Ah, the net closes in.

Quoting from "computing" magazine in the UK:

"In a humiliating double blow, Informix has admitted overestimating
revenues by $236 million over a three year period...

...The company has now admitted that in 1996 its turnover was over
$200 million less than the $939 million originally claimed.

In fact, Informix made a loss of $73.6 million rather than a profit of
$97.8 million in 1996...."

Sounds like they are up the proverbial creek...

----------------------------------------------------------------------
The above message reflects my own views, not those of Hewlett Packard.
When emailing me, please note that there is no '.junk' in my address.

Michael Segel

Dec 1, 1997, 3:00:00 AM12/1/97
to


Keith Willis wrote:

> Quoting from "computing" magazine in the UK:
>

Ahh, you're on the wrong side of the pond ... ;-)

> Sounds like they are up the proverbial creek...
>

Not!
You have to understand that there is about 1 billion USD in seed capital from
angels available to hi-tech start ups.

Unlike Sybase, Informix does have some leading/bleeding edge technologies.
(The buyout of Illustra helped cause that loss for the earlier periods.)
This technology is not in question. It's the delivery and support of the
technology that is. These problems are easier to fix than technical ones.

BTW, the whole page locks vs row level locking is really a moot point.
Locking granularity is important when building applications. The finer the
granularity, the less likelihood of problems (deadlocks and such). However,
finer granularity has its costs, hence it is not the default method of
locking.

But to defend Sybase by saying that row level locking isn't important, is
pretty damn silly. Admit it and move on. Look, I had a friend who worked for
Informix, then went to Sybase. He had no problems admitting that there were
some features that Informix had which Sybase didn't, so why can't you?

-Mike


Pablo Sanchez

Dec 1, 1997, 3:00:00 AM12/1/97
to

>>>>> "Michael" == Michael Segel <Mi...@NOSPAM.King.of.MyDomain.NOSPAM.Segel.com> writes:
Michael>
Michael> But to defend Sybase by saying that row level
Michael> locking isn't important, is pretty damn
Michael> silly.

I have yet to see you refute any of my points regarding row
level vs page level locking. For a well written
application, and isn't that what you strive for?, it simply
doesn't matter.

To simply lob the above and not back it up with any
technical merit is pretty damn silly.

Michael Segel

Dec 1, 1997, 3:00:00 AM12/1/97
to Pablo Sanchez


Pablo Sanchez wrote:

>
>
> I have yet to see you refute any of my points regarding row
> level vs page level locking. For a well written
> application, and isn't that what you strive for?, it simply
> doesn't matter.
>

Gee, well, I guess you talked me in to this.

Try writing a hotel reservation system for a major national hotel chain.
Read: Hyatt or something like that. ;-)

Try booking a reservation in a major city like New York.
(Multiple properties, multiple nights and multiple rates.)
Oh, and lets not forget the multiple DSRs (Dumb Shit Receptionists) who are
trying to access this system..... ;-)

Now I am sure that you can do a workaround to COMPENSATE for page
locking, but why? Row level locking is a much simpler solution. Of course
it would be nice to allow for locking within a nested transaction, and even
allow for nesting of transactions. But hey, that's beyond the immediate
question.

Lets also look at other applications where row level locking is important.

Hmmm, OK, how about in the financial industry?
In a mortgage generation application ?

Or how about in a travel industry application like a flight reservation system?
Or in the telecommunication industry?
Or in a realtime inventory control / POS system?

True, you can write these with page level locks, however you won't get the performance,
and you will have to write extra code to compensate.

Sorry Pablo, defend Sybase on another point. Surely there are things that Sybase does that
Informix can't right? I mean, take Oracle for instance. They have some neat indexing
techniques that Informix lacks.

-Just food for thought.

Pablo Sanchez

Dec 1, 1997, 3:00:00 AM12/1/97
to

>>>>> "Gary" == Gary L Burnore <gburnore> writes:
Gary>
Gary> Tell the airlines that row-level locking isn't
Gary> important. If the entire seating table for a flight
Gary> was locked for one attendant to assign you a seat,
Gary> it'd be chaos.

I doubt very much that SABRE places a logical lock on a
seat. It more than likely updates the row with a "lock
out".

Gary> Row level locking is quite important in some
Gary> instances.

As I've said, if you look at the TPC-C you can see that for
a well behaved application there isn't a problem. How do
you refute that Sybase currently has a higher tpmC value
than Informix? After all, TPC-C's are OLTP and Informix has
row level locking. Answer that for me...

Gary> I also agree with his statement that to
Gary> defend Sybase by saying that YOU don't think row level
Gary> locking is important is silly.

That's cool, you can agree that the earth is flat too. How
about just answering the TPC-C question above for me.

Pablo Sanchez

Dec 1, 1997, 3:00:00 AM12/1/97
to Mi...@segal.com

>>>>> "Michael" == Michael Segel <Mi...@NOSPAM.King.of.MyDomain.NOSPAM.Segel.com> writes:
Michael>
Michael> [ examples deleted ]
Michael>
Michael> True, you can write these with page level locks,
Michael> however you won't get the performance, and you will
Michael> have to write extra code to compensate.

What do you base your assertion that "you won't get the
performance" on? The applications you talked about are all
applications that require a highly tuned application in
order to get excellent performance.

If you look at the TPC-C's (as I've mentioned, before)
you'll see another highly tuned (OLTP) application. If I
compare the *currently* submitted numbers for Sybase
vs Informix I see the following:

Sybase..... 39,469 tpmC
Informix... 24,309 tpmC

That's a 62% performance increase by using Sybase. Now I
grant you that I believe that we'll see Informix
Inc. publishing even higher numbers but that's not my
point. My point is:

For a finely tuned application, row level locking
does *not* matter.

The numbers don't support it.
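The 62% figure follows directly from the two quoted tpmC numbers:

```python
# Relative throughput gain implied by the quoted TPC-C results.
sybase_tpmc = 39469
informix_tpmc = 24309
gain_pct = (sybase_tpmc - informix_tpmc) / informix_tpmc * 100
print(round(gain_pct))   # -> 62
```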

I don't see you producing any factual data to prove your
point. I'd be more than happy to look at any factual and
objective data you produce.

As for writing "extra code to compensate", this is a
subjective remark. I think the code difference between the
two is minor enough...

Michael> Sorry Pablo, defend Sybase on another point. Surely
Michael> there are things that Sybase does that Informix
Michael> can't right? I mean, take Oracle for instance. They
Michael> have some neat indexing techniques that Informix
Michael> lacks.

Honestly, this has nothing to do with Sybase. It has to do
with marketing hype as I've said all along. You've
swallowed the row-level is better than page-level pill. It
sorta makes sense but unfortunately the facts don't support
it. It's marketing hype.

* TPC information can be gleaned from www.tpc.org

Pablo Sanchez

Dec 1, 1997, 3:00:00 AM12/1/97
to

>>>>> "Gary" == Gary L Burnore <gburnore> writes:
Gary>
Gary> You DOUBT? Meaning you don't know?

Nope, I don't know SABRE nor "The Other" application well
enough to say. If the system crashes when I'm booking a
flight, though, I don't lose my seat therefore I believe my
assertion is correct.

Gary> Figures.

[ ignored ]

Gary> :
Gary> :Gary> Row level locking is quite important in some
Gary> :Gary> instances.
Gary> :
Gary> :As I've said, if you look at the TPC-C you can see that for
Gary> :a well behaved application there isn't a problem. How do
Gary> :you refute that Sybase currently has a higher tpmC value
Gary> :than Informix? After all, TPC-C's are OLTP and Informix has
Gary> :row level locking. Answer that for me...
Gary>
Gary> I'm not refuting it. I'm saying it's not based on
Gary> Sybases lack of row-level locking.

I can't parse the above point. Would you elaborate?

Gary> Look, why not just admit that you like Sybase better
Gary> and that's that.

Sybase is fine and so is Informix and as a matter of fact, I
like Oracle's architecture too. They all have problems and
good points. As a developer it's important to exploit the
good points rather than the bad ones. But I stray...

Gary> It's obvious by your posts that's what's true here.
Gary> You're not being unbiased about it at _ALL_.

What is obvious is that row level locking doesn't buy you
what you *think* it's buying you when you have a finely
tuned application. All the examples pointed out to me by
folks have been cases of finely tuned apps. See my post to
Mike Segal on finely tuned app's.

Facts don't show bias. I'm willing to listen to any facts
that you have to support your claims. As of yet, I've seen
none.

Pablo Sanchez

Dec 1, 1997, 3:00:00 AM12/1/97
to

>>>>> "Gary" == Gary L Burnore <gburnore> writes:
Gary>
Gary> On 01 Dec 1997 10:31:26 -0700, Pablo Sanchez <pa...@sgi.com> wrote:
Gary> :
Gary> : Sybase..... 39,469 tpmC
Gary> : Informix... 24,309 tpmC
Gary> :
Gary> :That's a 62% performance increase by using Sybase. Now I
Gary> :grant you that I believe that we'll see Informix
Gary> :Inc. publishing even higher numbers but that's not my
Gary> :point.
Gary>
Gary> Of course it's not your point. Your point is obvious.

For some reason you think I have some hidden agenda. I
can't control what you think. I'm only posting some facts
and I've yet to see any facts from you besides your
blatherings. How about some real data to support your
points rather than your silly sophomoric attacks?

Don't go the way of Boris...

Michael Segel

Dec 1, 1997, 3:00:00 AM12/1/97
to


Pablo Sanchez wrote:

> Nope, I don't know SABRE nor "The Other" application well
> enough to say. If the system crashes when I'm booking a
> flight, though, I don't lose my seat therefore I believe my
> assertion is correct.
>

No, you don't have your seat until you get a confirmation number. It's that
simple. (Confirmation numbers occur after the transaction is completed.)
It's obvious you haven't written a hotel reservation system.

My whole point is that this thread is a waste of breath.
Conceptually, the finer the granularity of locks achieved, the better the
application will behave and the easier it is to implement an OLTP
application.

Now, as to TPC benchmarks, why don't you print the configurations used. :-)
(Yes Virginia, I have done benchmarking and I know that everyone cheats:-)

> Gary> I'm not refuting it. I'm saying it's not based on
> Gary> Sybases lack of row-level locking.
>
> I can't parse the above point. Would you elaborate?
>

He's saying that there are other things that Sybase does well, in spite of
not having row level locking. Would you care to elaborate? It was the same
point I was trying to make. Only you thought to say SNIP! :-)

> Sybase is fine and so is Informix and as a matter of fact, I
> like Oracle's architecture too. They all have problems and
> good points. As a developer it's important to exploit the
> good points rather than the bad one's. But I stray...
>

No, this whole thread is a stray. That's the point I was trying to make,
since it's silly to say that page level locking is better than row level
locking.


> What is obvious is that row level locking doesn't buy you
> want you *think* it's buying you when you have a finely
> tuned application. All the examples pointed out to me by
> folks have been cases of finely tuned apps. See my post to
> Mike Segal on finely tuned app's.
>

Uhmm, well no. Scale your application up. Increase the number of users. Then
let's talk about performance. Please understand that IMHO, the best place to tune a
system is at the app level, not the db engine. Of course, a well tuned engine is
important.


> Facts don't show bias. I'm willing to listen to any facts
> that you have to support your claims. As of yet, I've seen
> none.

Well, how many retail customers use Sybase? Reservation systems? (SABRE is
still mainframe AFAIK.)

That's the real test.


> --
> Pablo Sanchez | Ph # (650) 933.3812 Fax # (650) 933.2821
> pa...@sgi.com | Pg # (800) 930.5635 -or- pab...@pager.sgi.com
> -------------------------------------------------------------------------------
> I am accountable for my actions. http://reality.sgi.com/pablo [ /Sybase_FAQ ]

-Mikey


Pablo Sanchez

Dec 1, 1997, 3:00:00 AM12/1/97
to

>>>>> "Michael" == Michael Segel <Mi...@NOSPAM.King.of.MyDomain.NOSPAM.Segel.com> writes:
Michael>
Michael> Conceptually the finer the granularity of locks
Michael> achieved, the better the application will behave
Michael> and the easier it is to implement an OLTP
Michael> application.

and you ignore the fact that there's added overhead with row
level locking. When you consider the *whole* picture, you
soon see that it doesn't make a difference.

Michael> Now, as to TPC benchmarks, why don't you print the configurations used. :-)

They're lengthy... but I've told people where to look for
them (and I'll do it again: www.tpc.org). But if you
insist on the other number, for the Sybase number that
beats Informix, we have $94.18/tpmC vs $139.04/tpmC for
Informix. It's cheaper to use Sybase and you get 60+%
better performance.

Michael> (Yes Virginia, I have done benchmarking and I know
Michael> that everyone cheats:-)

There are FDR's for the TPC-C's. Tell me on these two
benches where people "cheated".

I grant you that TPC-C's aren't true real world but it's the
best that we have for a normalized application. Not
something silly like "well my hotel reservation system ran
great using product X and sucked on product Y" All that
proves is that someone may not have known how to write an
application. Big deal.

Michael> He's saying that there are other things that Sybase
Michael> does well, inspite of not havingrow level
Michael> locking. Would you care to elaborate. It was the
Michael> same point I was trying to make. Only you thought
Michael> to say SNIP! :-)

The issue has never been whether Sybase does row level or
page level locking. I'm just talking about the issue
itself on row level vs page level locking. You and he keep
wanting to try and drag Sybase into this whole thread. I
could care less about Sybase at this point. I only bring in
Sybase because it's the only RDBMS at this point in time
doing page level locking. And it's proven with the -C's
that it's not an issue. But you seem to conveniently ignore
the fact that Sybase currently beats the pants off of
Informix.

Assuming that there was some cheating, are you saying that
there was sufficient cheating to warrant a 60+% performance
increase? I don't think so.

Michael> No, this whole thread is a stray. That's the point I was trying to make since
Michael> it's silly to say that page level locking is better than row level locking.

No it's not silly and you have yet to refute why it is so.
You have "said" that it is so, but you don't have any
*facts* to support your assertion. You can keep saying it
but until you can provide some facts to support your
assertion it's hard to give any credibility to your statement.

I have provided you with TPC-C's that prove that it's not an
issue. That's my point that it's not an issue. You have
done nothing to show contrary. As you say, show me.

Michael> Uhmm, well no. Scale your application up. Increase
Michael> the number of users.

Okay, the TPC-C for Sybase has 32,000+ users. How many
do you want? Of course they're mux'd using Tuxedo but
nonetheless the engine is supporting 32,000 users.

Michael> Then lets talk about performance.

Okay, 39,469 tpmC. Is that good enough for you? That's 60+%
greater than Informix's current number. Yup, let's talk
about performance now.

Michael> Please understand that IMHO, the best place to tune a
Michael> system is at the app level, not the db engine. Of course, a well tuned engine is
Michael> important.

I disagree. The best place to tune a system is in its
entirety. You have to consider:

o The application
o The RDBMS
o The hardware

What good is it to tune your application when you are CPU
bound, when you are disk bound, when you haven't set
PDQPRIORITY... you have to look at *everything* not just a
single component. It's a waste of time otherwise.

Michael> Well, how many retail customers use Sybase?

FEDEX, United Airlines, ....

Michael> Reservation Systems?(SABRE is still
Michael> mainframe AFAIK)

I don't believe it's so. I think that they are using
Oracle. United Airline's ticketing uses a mainframe and
they are looking at getting off of it. NDA prevents me from
further discussion on this topic.

David Williams

unread,
Dec 1, 1997, 3:00:00 AM12/1/97
to

In article <yutra7x...@mew.corp.sgi.com>, Pablo Sanchez
<pa...@sgi.com> writes

>>>>>> "Michael" == Michael Segel <Mi...@NOSPAM.King.of.MyDomain.NOSPAM.Segel.com
>> writes:
>Michael>
>Michael> [ examples deleted ]
>Michael>
>Michael> True, you can write these with page level locks,
>Michael> however you won't get the performance, and you will
>Michael> have to write extra code to compensate.
>
>What do you base your assertion that "you won't get the
>performance"? The applications you talked about are all
>applications that require a highly tuned application in
>order to get excellent performance.
>

Imagine 1 page with 2 rows on it.

1. Page level locking

User A updates row 1.
At the same time user B updates row 2. User B has to wait for
User A to commit.

2. With row level locking
User A updates row 1
User B updates row 2 (with no waiting).

How can 1 be faster than 2 if user B has to wait?
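The two cases above can be sketched in a few lines. This is a toy model, not Informix or Sybase internals: lock keys and the `waits` helper are made up for illustration, with page locks keyed by page number and row locks keyed by (page, row).

```python
def waits(requests, granularity):
    """requests: list of (txn, page, row) in arrival order.
    Returns the txns that must wait because another txn
    already holds the lock they need."""
    held = {}       # lock key -> first txn to take it
    blocked = []
    for txn, page, row in requests:
        # page locking locks the whole page; row locking just the row
        key = page if granularity == "page" else (page, row)
        owner = held.setdefault(key, txn)
        if owner != txn:
            blocked.append(txn)
    return blocked

# User A updates row 1, user B updates row 2 -- same page 1.
reqs = [("A", 1, 1), ("B", 1, 2)]
print(waits(reqs, "page"))   # B blocks behind A's page lock
print(waits(reqs, "row"))    # nobody waits
```

Under page locking B queues behind A even though they touch different rows; under row locking both proceed.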

>If you look at the TPC-C's (as I've mentioned, before)
>you'll see another highly tuned (OLTP) application. If I
>compare the *currently* submitted numbers for Sybase
>vs Informix I see the following:
>
> Sybase..... 39,469 tpmC
> Informix... 24,309 tpmC
>

>That's a 62% performance increase by using Sybase. Now I
>grant you that I believe that we'll see Informix
>Inc. publishing even higher numbers but that's not my
>point. My point is:
>
I checked the results; these figures come from different machines.
So Sybase runs their TPC benchmarks on larger hardware... so?

> For a finely tuned application, row level locking
> does *not* matter.

Lock granularity is still a problem. Why do UNIX kernels lock
individual files and data structures rather than having one large
kernel lock? Try reading

UNIX Systems for Modern Architectures
Symmetric Multiprocessing and Caching for Kernel Programmers
by Curt Schimmel
Addison-Wesley
ISBN 0-201-63338-8

"If the processes in the application job mix use separate kernel
resources, each of whose data structures are protected by separate
locks, then these processes will not contend for the same locks and
will be able to run simultaneously on different CPUs, whether they
are in user or kernel mode"

Translate to databases and internal data structures.
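Schimmel's point translates directly: give each shared structure its own lock so threads touching different structures never contend. A minimal sketch of that idea ("lock striping", nothing database-specific; the class and stripe count are invented for illustration):

```python
import threading

class StripedCounter:
    """One lock per stripe instead of one big lock for the whole table."""
    def __init__(self, stripes=8):
        self.locks = [threading.Lock() for _ in range(stripes)]
        self.counts = [0] * stripes

    def incr(self, key):
        i = hash(key) % len(self.locks)
        with self.locks[i]:          # only this stripe is serialized
            self.counts[i] += 1

    def total(self):
        return sum(self.counts)

c = StripedCounter()
threads = [threading.Thread(target=lambda: [c.incr(k) for k in range(100)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.total())   # 400
```

Threads hitting different stripes proceed in parallel; a single global lock would serialize every increment, just as one big kernel (or page) lock serializes unrelated work.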


>
>The numbers don't support it.
>
>I don't see you producing any factual data to prove your
>point. I'd be more than happy to look at any factual and
>objective data you produce.
>
>As for writing "extra code to compensate", this is a
>subjective remark. I think the code difference between the
>two is minor enough...
>
>Michael> Sorry Pablo, defend Sybase on another point. Surely
>Michael> there are things that Sybase does that Informix
>Michael> can't right? I mean, take Oracle for instance. They
>Michael> have some neat indexing techniques that Informix
>Michael> lacks.
>
>Honestly, this has nothing to do with Sybase. It has to do
>with marketing hype as I've said all along. You've
>swallowed the row-level is better than page-level pill. It
>sorta makes sense but unfortunately the facts don't support
>it. It's marketing hype.
>
>* TPC information can be gleaned from www.tpc.org

>--
>Pablo Sanchez | Ph # (650) 933.3812 Fax # (650) 933.2821
>pa...@sgi.com | Pg # (800) 930.5635 -or- pab...@pager.sgi.com
>-------------------------------------------------------------------------------
>I am accountable for my actions. http://reality.sgi.com/pablo [ /Sybase_FAQ ]

--
David Williams

David Williams

unread,
Dec 1, 1997, 3:00:00 AM12/1/97
to

In article <3482318D...@voicenet.com>, "Scott C. Gray"
<ickysp...@voicenet.com> writes
>David Williams wrote:
>>
>> In article <3481D92B...@voicenet.com>, "Scott C. Gray"
>> <ickysp...@voicenet.com> writes
>> >> page level locks give HIGHER PERFORMANCE yes BUT they reduce
>> >> concurrency and hence LOWER RELIABILITY.
>> >
>> >Be careful with your wording...locking granularity has nothing at
>> >all to do with reliability (indeed, table locking vs. column locking
>> >has the same reliability when properly implemented). It is all
>> >simply a matter of concurrency and efficiency.
>>
>> Most applications produce an error when a lock error occurs.
>>
>
>I assume that you are referring to deadlocks. While it is true that
>deadlocks may be reduced in a row vs. page locking environment, they
>can still occur (don't forget we are only talking about locking...
>not versioning--which can also be done in a page locked environment).
>
>It is foolish for any application developer to fail to check for such
>conditions (although, in my experience, almost everyone does).
>Thus, for a properly coded application, page locking does not
>reduce application reliability one bit...it just increases the
>likelihood that the proper error detection/recovery code will
>be executed (although, in a properly developed environment, it
>is possible to eliminate deadlocks anyway).
>
Correct, the user will receive an error message saying this item
cannot be updated, and hence thinks "the application failed with an
error, it is not as reliable as other applications".

>> 1. Queries fails
>> 2. Database modifications produce an error and the transaction is
>> rolled back (few applications retry).
>>
>> Hence APPLICATION RELIABILITY is reduced.
>
>Once again...be careful about your wording. It seems that you
>are producing a false analogy along the lines of:
>
> 1. Some conditions cause the database to generate an
> error.
> 2. Many applications don't check for these errors
>
> The database is not working properly.
>

No, but I'm saying the user sees more error messages and updates
which fail to complete.
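As noted upthread, "few applications retry". A retry wrapper of the sort those applications are missing is short to write; this is a hedged sketch, with `DeadlockError` and `flaky_update` as hypothetical stand-ins for your driver's lock error and your transaction body:

```python
import time

class DeadlockError(Exception):
    """Stand-in for whatever lock/deadlock error your DB driver raises."""
    pass

def with_retry(txn, attempts=3, backoff=0.01):
    """Run txn(), retrying on DeadlockError with exponential backoff."""
    for i in range(attempts):
        try:
            return txn()
        except DeadlockError:
            if i == attempts - 1:
                raise                     # out of attempts, surface it
            time.sleep(backoff * (2 ** i))

calls = {"n": 0}
def flaky_update():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError()             # simulate two lock conflicts
    return "committed"

print(with_retry(flaky_update))           # committed on the third try
```

With a wrapper like this the user never sees the first two lock errors, which is exactly the reliability gap being argued about.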

--
David Williams
