
Stored procedure structure in RDBMS using Lock Manager for transaction isolation


Daniel Loth

Jun 25, 2021, 9:58:38 AM
Hi everyone,

There have been a few interesting discussions lately concerning MVCC.

Out of those discussions came an interesting conversation concerning the ideal approach to structuring a procedure in systems that use a lock manager as their means of guaranteeing the Isolation in ACID.

Derek in particular described his approach and in doing so expressed two principles:

1. Only acquire exclusive (write) locks if the transaction will potentially succeed.

2. Never acquire exclusive locks if embarking on the transaction is futile (or put another way, don't start what you absolutely cannot finish).

Consistent with those principles, the ideal was described as:

1. A validate block - Where we 'look before we leap' (to borrow that expression).
In this block we are reading only. We can use the READ COMMITTED isolation level, thereby acquiring and releasing shared locks in quick succession.
No shared locks are held after the conclusion of the statement (i.e., the select query in this case).

2. A transaction block - Where we check again and, if the conditions are still right (since other users might have changed the data in the meantime), we can proceed to do our work and then commit the transaction.
In this block we acquire write locks, and these locks are held for the remainder of the transaction (until we commit or rollback).

While checking again, we simultaneously acquire an update lock.
In the code I'm sharing below, I use the 'with (updlock)' hint as described in the other discussions I refer to above.

---

My request:

I've written such a stored procedure, and am hoping for feedback as to how closely it aligns with that description.

The code I share below is designed to run on SQL Server. I've tested it on SQL Server 2016, but I've kept the example simple and it should run on most versions.

It was written to run in a database configured such that 'snapshot isolation' is disabled and 'read committed snapshot isolation' is disabled.

That is to say that this code has been written for use with an RDBMS that relies on a Lock Manager to guarantee isolation of transactions, not MVCC.
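For anyone wanting to verify that configuration before running the code, SQL Server exposes both settings in the sys.databases catalog view (the database name below is a placeholder):

```sql
-- Both settings should report disabled for the Lock Manager
-- configuration described above.
select name,
       snapshot_isolation_state_desc,  -- 'OFF' when snapshot isolation is disabled
       is_read_committed_snapshot_on   -- 0 when RCSI is disabled
from sys.databases
where name = N'MyDatabase';  -- placeholder name
```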

Finally, I accept that there are a number of views on matters such as MVCC, its advocates, and so on.
But it'd be great if the discussion remained clean and free of vitriol. If for no other reason than keeping it safe for viewing at work.

Cheers,
Daniel

---

A link to view the code on GitHub: https://gist.github.com/DanielLoth/6a8777dd978b8d00dbe5d6fa880fed59

---

The code:

create procedure dbo.AddAttendance_tr
@OrganisationId int,
@PersonId int,
@AttendanceDate date
as

set nocount on;

------------------------------------------------------------
-- Validation block
--
-- Determine whether or not a row already exists
-- for the key {OrganisationId, PersonId, AttendanceDate}.
------------------------------------------------------------
set transaction isolation level read committed;

declare @RowExists bit = 0;

select @RowExists = 1
from dbo.Attendance
where OrganisationId = @OrganisationId
and PersonId = @PersonId
and AttendanceDate = @AttendanceDate;


if (@RowExists = 1) return 0;


------------------------------------------------------------
-- Transaction block
--
-- At this point we know there's a chance that we will
-- succeed in inserting a row.
-- However we might conceivably be racing another session
-- that is calling this procedure for the same key
-- {OrganisationId, PersonId, AttendanceDate}.
--
-- Within a transaction:
-- 1. Re-execute the validation query, but this time acquire
-- an update lock while doing so (using hint 'with (updlock)').
-- 2. If the row is still absent, we can now safely insert
-- our new row.
-- This is due to the selection of an appropriate transaction
-- isolation level - in this case serializable - that
-- guarantees no other transaction can insert this row
-- while we hold our update lock.
-- With the isolation level set to serializable, the session
-- that we are running our query in (using hint 'with (updlock)')
-- has acquired a KEY lock from the lock manager.
-- This KEY lock ensures that no other transactions running
-- within other sessions can insert a row with this primary
-- key (the Isolation in ACID).
------------------------------------------------------------
set transaction isolation level serializable;

begin transaction;

select @RowExists = 1
from dbo.Attendance
with (updlock)
where OrganisationId = @OrganisationId
and PersonId = @PersonId
and AttendanceDate = @AttendanceDate;


if (@RowExists = 0)
begin
insert into dbo.Attendance (OrganisationId, PersonId, AttendanceDate)
values (@OrganisationId, @PersonId, @AttendanceDate);

commit;
end
else
begin
rollback;
end

return 0;

Derek Ignatius Asirvadem

Jun 26, 2021, 6:09:42 PM
> On Friday, 25 June 2021 at 23:58:38 UTC+10, daniel wrote:
>
> My request

Just a bit of context. This is a “theory-only” forum that is dominated by “theoreticians”, none of whom have progressed the science since Codd, 1970. For many years, I fought an ongoing battle, nay, a war, and they would argue like freaks, hair-splitting and other dishonest methods (anti-science). Over time, they have all fallen by the wayside, and thankfully the insane arguments have stopped.

Nicola is the single academic who is crossing the Great Dividing Range, the chasm between the contrived world of academia, and the real world. I stand for Codd; the /Relational Model/; science; theory+practice (not theory alone; not practice alone). Point is, I have 10 years history with Nicola, and a willingness to answer his questions, to help him cross that chasm.

What you are trying to do is excellent, to come up with a sp template for OLTP Transactions. Of course we have that, it has not changed since 1993, and it is commercial. We do not sell it stand-alone, it is given only with education, to SQL Developers. Usually heavy OO/ORM/OOP types who have already written filth that locks itself silly, or suffers data integrity problems (more visible in MV-non-CC systems), and who are now [after I rewrite their database, producing V2] forced to comply with OLTP Standards, that we supply, so that they produce V2 sps.

Therefore, the best engagement is for Xero to engage us, and obtain the full complement in the usual way. As you may be aware, per Australian law, what we sell to customers, cannot be provided free to others (it would make the sale price fraudulent).

Therefore, please understand, I was answering questions for an academic (who cannot use it commercially), way more than I am obliged to, with the focus on understanding the real Ordinary Locking system (instead of the Straw Man description that the Stonebraker cult declare it to be) vs the real MV-non-CC (instead of the hysterical myth that the cult declare it to be). The goal here is objective truth, not any opinion. Science, not politics. Academia is stuck in politics (market and promote 1960’s theory) and mythology, devoid of science. The market is stuck in a different set of imperatives (no standards; fast delivery; use imbeciles; cheap cheap). Academia in turn writes for that ignorant market. Both sides are FORTY YEARS behind the technology; the actual platforms.

Please understand, I can answer questions; I can argue the science (as distinct from the filth that passes for “science”), but I can’t give away the shop.

With that intro and caveat in mind ...

> Finally, I accept that there are a number of views on matters such as MVCC, its advocates, and so on.

Politics.

Science does not need advocates, science is not opinion, science is not consensus or citations. I stand for pre-Modern Science. Science is knowledge; certainty. The Four Laws of Thought, Causality; Composition.
__ https://www.softwaregems.com.au/Documents/Defence%20of%20the%20Church/Modernism/A%20Aristotle%20A_1.pdf
__ https://www.softwaregems.com.au/Documents/Defence%20of%20the%20Church/Modernism/G%20Glossary%20p161.pdf

> But it'd be great if the discussion remained clean and free of vitriol. If for no other reason than keeping it safe for viewing at work.

This forum existed before you were born. It is still one of the few fora that is not moderated; not censored: free speech rules. It is not open for a newcomer to dictate terms. The concept of a “safe space” to be safely insane is not provided here.

If statements on a forum, whether it be science or “science” or opinion, scares you, then by that very fact, you cannot practice science.

> It was written to run in a database configured such that 'snapshot isolation' is disabled and 'read committed snapshot isolation' is disabled.

That means the entire set of resources, that are required to host MV-non-CC (“snapshot isolation”) on the database, have been ADDED. That is the massive resources to host the MV part, because the Ordinary Locking that pre-existed in the server, that provides the CC part, continues to be used. But this connection has disabled its participation in the massive additional MV resources.

> That is to say that this code has been written for use with an RDBMS that relies on a Lock Manager to guarantee isolation of transactions, not MVCC.

Not just “guarantee isolation of transactions”, which is one requirement of ACID, but the server provides all four requirements for ACID. But but but ACID is not merely a server-side concept, there are requirements on the client side, which is what is being played out here, explained here.

It does not consist of walking up to the db and inserting one row correctly. The whole system has to be ACID, ie. comply with ACID requirements. At best that means the Open Architecture/OLTP Standard. Unchanged from the 1960’s (truth does not change). At worst it means complying with ACID partially and no OLTP. (Not complying with ACID deems the system outside this class of discussion).

> I've written such a stored procedure, and am hoping for feedback as to how closely it aligns with that description.

Ok. You have done a good job of reading some of my answers, and figuring out what the sp template should be, at least for a start. The problem is, you have picked up fragments, not the whole (which is understandable if you understand my intro).

You said RDBMS. That means it has to be Relational. The “keys” are hopeless, non-Relational, but let’s overlook that.

Make the following changes and re-submit:
1. Attendance is dependent on Person, and Person is dependent on organisation.
__ Create table Organisation, PK ( OrganisationId )
__ Create table Person, PK ( OrganisationId, PersonId )

> insert into dbo.Attendance (OrganisationId, PersonId, AttendanceDate)

2. It is an error, to attempt insertion of an Attendance row, for a Person that does not exist.

3. It is an error, to attempt insertion of an Attendance row, for an Organisation that does not exist.

> The code I share below is designed to run on SQL Server. I've tested it on SQL Server 2016, but I've kept the example simple and it should run on most versions.

IOW, it is a bit too simple to expose the issues that are relevant, and thus a possible solution.

> Finally, I accept that there are a number of views on matters such as MVCC, its advocates, and so on.

Yes. A mountain of filth that never gets resolved.

==========

> -- At this point we know there's a chance that we will
> -- succeed in inserting a row.

You know no such thing.
Knowledge is certainty, not chance; not probability.
A chance cannot be known, something known is not chance.

If and when you:
__ ensure that Organisation[ OrganisationId ] exists,
__ and Person[ OrganisationId, PersonId ] exists,
__ and Attendance[ OrganisationId, PersonId, AttendanceDate ] does not exist
then, and only then, there is a /good/ chance that the insert will succeed.

> -- Within a transaction:
> -- 1. Re-execute the validation query, but this time acquire

“Re-execute” and “again” are false.
When you get to contemplating the client side, there will be a VALIDATION structure. That too is not a “Re-execute” or “again” or a “triplicate”. If your car has been impounded, you phone the various pounds first, to find out which pound it is held in. You do not go to all the pounds. You must not attempt something that will fail. The only way to ensure that an attempt [within an Xact] will not fail, is to check first, outside the Xact. Therefore they are both demanded, one is not a duplicate; not an “again”; not a “re-“ of the other. When you go to the one pound that has the car, it is not “re-going” to the pound.

> -- Within a transaction:
> -- 1. Re-execute the validation query, but this time acquire
> -- an update lock while doing so

Further, the [“this time” do something else that you did not do the first time], proves that the two blocks are not “duplicate”; “again”; “Re-”.

> -- This KEY lock ensures that no other transactions running

Whether it is a Key lock; or Update Lock; or Intent Lock, is a platform-specific thing (other platforms will do the same thing, but have different locks, or different names for similar locks). In the Sybase & MS case, yes, a Key Lock is distinctly more efficient, more “thinking ahead”. So for our purpose, understanding and writing code for an ACID RDBMS, do not concern yourself too much about the platform-specific type of lock, do concern yourself THAT it is locked and that it is some form of Exclusive Lock.

Generically, it is an Intent lock, that you have found, and after the insert it is an Exclusive lock. Platform-specifically, it means SQL Server is ready and waiting to insert another row with that or similar Key, hence Key lock.

In Sybase it does yet another clever thing.
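For what it's worth, on SQL Server the locks a transaction holds can be observed from a second session, via the sys.dm_tran_locks DMV, while the transaction above remains open (a sketch; the exact lock modes seen will vary by platform and version):

```sql
-- Run from a second session while the transaction is open.
-- Expect KEY locks (mode U after the updlock select, X after the
-- insert) plus IX intent locks on the page and object above them.
select request_session_id,
       resource_type,   -- OBJECT, PAGE, KEY, ...
       request_mode,    -- U, X, IX, RangeS-U, ...
       request_status
from sys.dm_tran_locks
where resource_database_id = db_id();
```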

> -- This is due to the selection of an appropriate transaction
> -- isolation level - in this case serializable

No. After BEGIN TRAN, your ISOLATION LEVEL is set to SERIALIZABLE, the only one that is “suitable” for Transactions. You have no choice, there is no “selection”.

I will clarify the ACID definition in another post.

> guaranteeing the Isolation in ACID.

Again, that is the MV-non-CC mindset. Get rid of it. “Guaranteeing isolation” is not relevant, it is an idiotic concept, and only relevant for an idiot that first thinks he has a version of the database. Take that first idiocy away (it is a shared database, with just One Version of the Truth, One Version of Any Row), and the need for the second idiocy disappears. Stop thinking about a version of something that is removed from the reality of its existence (insanity), and start thinking about the reality of its existence (sanity).

Transaction Isolation is provided by an ACID compliant server, yes. It does not need a guarantee, and you should not rely on such a guarantee, you should rely on Transaction Isolation.

> -- Validation block

Change the SELECT to:

IF EXISTS (
____SELECT 1
________FROM dbo.Attendance
________WHERE OrganisationId = @OrganisationId
____________AND PersonId = @PersonId
____________AND AttendanceDate = @AttendanceDate
____) RETURN 0

Further, a proper Error Message (“user defined”) should be raised. If you do not use RAISERROR and such messages, then you must have at least a pre-ordained set of RETURN_VALUES for all stored procs (and Functions) that indicate success/failure to the caller. For this demonstration purpose, let’s use these RETURN_VALUES:
0 = Succeeded
7 = Row exists (that should not exist)
8 = Row does not exist (that should exist)
9 = Other failure

Therefore, in the SELECT above:
____) RETURN 7
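A caller would then branch on the returned value. A minimal sketch, assuming the RETURN_VALUES above (parameter values are illustrative only):

```sql
declare @rc int;

-- Attempt the Transaction, capturing its return value.
exec @rc = dbo.Attendance_Add_tr
    @OrganisationId = 1,
    @PersonId = 1,
    @AttendanceDate = '20210626';

if @rc = 7
    print N'Attendance on that date is already recorded.';
else if @rc = 8
    print N'Person does not exist.';
else if @rc <> 0
    print N'Other failure.';
```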

> -- Transaction block

It is named the Execute block. The whole sp is the Transaction, and should be named as such: Attendance_Add_tr.

Cheers
Derek

Derek Ignatius Asirvadem

Jun 26, 2021, 6:25:08 PM
> On Sunday, 27 June 2021 at 08:09:42 UTC+10, Derek Ignatius Asirvadem wrote:
> > On Friday, 25 June 2021 at 23:58:38 UTC+10, daniel wrote:

> > -- This KEY lock ensures that no other transactions running
> Whether it is a Key lock; or Update Lock; or Intent Lock, is a platform-specific thing (other platforms will do the same thing, but have different locks, or different names for similar locks). In the Sybase & MS case, yes, a Key Lock is distinctly more efficient, more “thinking ahead”. So for our purpose, understanding and writing code for an ACID RDBMS, do not concern yourself too much about the platform-specific type of lock, do concern yourself THAT it is locked and that it is some form of Exclusive Lock.
>
> Generically, it is an Intent lock, that you have found, and after the insert it is an Exclusive lock. Platform-specifically, it means SQL Server is ready and waiting to insert another row with that or similar Key, hence Key lock.

The point is this. You are thinking bottom-up. That is plain wrong in any kind of software development exercise. But you are forced into thinking bottom-up in the MV-non-CC context, it is consequent to that insanity. Give up the MV-non-CC insanity, and the bottom-up thinking that is demanded can then be released, so that you can start thinking as a human being, as a s/w developer, about reality. Which means top down; hierarchy; composition.

Thus Key Lock or Intent Lock does not matter, what matters is that YOU have observed the reality of the database (as distinct from hysterically pretending that the database and the hundreds of concurrent users does not exist, that the only thing that exists is your version), and YOU have locked the resources that YOU plan to rely upon, top-down.

Cheers
Derek

Daniel Loth

Jun 26, 2021, 11:16:42 PM
Hi Derek,

Thanks for responding.

---

> Therefore, the best engagement is for Xero to engage us, and obtain the full complement in the usual way. As you may be aware, per Australian law, what we sell to customers, cannot be provided free to others (it would make the sale price fraudulent).

I should clarify that I'm here in my own capacity, and not in my capacity as an employee.
Realistically, the only way I could obtain a thorough understanding of your methods as an individual is if they were published in a book.

The company I work for pursues agile software practices, emphasising ideals such as fast time-to-market, as do many of our competitors and just about any company with a Silicon Valley-like ethos.

I don't think the pursuit of agile software practices would be compatible with the conditions necessary to implement your ideal.
But I certainly think, in my capacity as an individual, that the knowledge that you've disclosed is valuable.

Having said that, I realise that a table named 'Attendance' might lead you to think 'Time and Attendance', a concept very reasonably associated with Xero's product offering.

Actually, in this case, attendance is a shooter being present at a recognised sporting shooters club on a given date. NSW AU legislation and regulations for shooting clubs mandate a register of attendees be kept. This is merely a list of people who attend club premises on a given day.

At present, this is not something that I've commercialised. I would be open to commercialising a club management application in the future, but honestly to date I haven't found the time.

---

> Please understand, I can answer questions; I can argue the science (as distinct from the filth that passes for “science”), but I can’t give away the shop.

Absolutely. That's why I was quite happy to put together the SQL to make the answer a 'Yes, this is broadly consistent' or 'No, this is not consistent' proposition.

---

> This forum existed before you were born. It is still one of the few fora that is not moderated; not censored: free speech rules. It is not open for a newcomer to dictate terms. The concept of a “safe space” to be safely insane is not provided here.

Fair enough. I was a little dishonest anyway when couching the reasons for this request.
In practice I'm an early-thirties Aussie, so there's very little that's going to offend my sensibilities. I was more concerned with deterring others from participating in the discussion.

---

> That means the entire set of resources, that are required to host MV-non-CC (“snapshot isolation”) on the database, have been ADDED. That is the massive resources to host the MV part, because the Ordinary Locking that pre-existed in the server, that provides the CC part, continues to be used. But this connection has disabled its participation in the massive additional MV resources.

I'm afraid that can't be helped for my part. Microsoft added snapshot isolation years ago, but it is an opt-in feature.

I can say that resource utilisation is markedly different when MVCC is turned on. Specifically, MVCC results in a marked uptick in TempDB activity.

Can we work on the premise that when MVCC is not enabled in SQL Server then there is no penalty imposed?

---

> Not just “guarantee isolation of transactions”, which is one requirement of ACID, but the server provides all four requirements for ACID. But but but ACID is not merely a server-side concept, there are requirements on the client side, which is what is being played out here, explained here.

> It does not consist of walking up to the db and inserting one row correctly. The whole system has to be ACID, ie. comply with ACID requirements. At best that means the Open Architecture/OLTP Standard. Unchanged from the 1960’s (truth does not change). At worst it means complying with ACID partially and no OLTP. (Not complying with ACID deems the system outside this class of discussion).

Agreed. I've looked at your document about Open Architecture, and accept the assertion that encapsulating all transactions within the database as stored procedures is the only way to guarantee that all constraints hold true at all times.

In sharing the example code above I was hoping to keep this discussion quite focused.

I agree that the procedure initially provided would not even be called if another procedure, perhaps named 'Attendance_Get_tr', indicated that the attendance on a given date had already been recorded.

---

> Ok. You have done a good job of reading some of my answers, and figuring out what the sp template should be, at least for a start. The problem is, you have picked up fragments, not the whole (which is understandable if you understand my intro).

> You said RDBMS. That means it has to be Relational. The “keys” are hopeless, non-Relational, but let’s overlook that.

Hopefully the explanation above - that this is actually unrelated to 'Time and Attendance', but instead related to a requirement for sport shooting clubs to maintain a register of club attendees - changes your judgement concerning the key.

The full DDL for the Attendance table is:

create table dbo.Attendance (
OrganisationId int not null,
PersonId int not null,
AttendanceDate date not null,

constraint UC_Attendance_PK
primary key clustered (OrganisationId, PersonId, AttendanceDate),

constraint Person_Indicates_Attendance_FK
foreign key (OrganisationId, PersonId)
references dbo.Person (OrganisationId, PersonId)
);

---

> Make the following changes and re-submit:
> 1. Attendance is dependent on Person, and Person is dependent on organisation.
> __ Create table Organisation, PK ( OrganisationId )
> __ Create table Person, PK ( OrganisationId, PersonId )

Certainly. I'll share the additional DDL.

For ease of viewing, I've published all of the code below on GitHub too:
https://gist.github.com/DanielLoth/76d241515655e76cadddef6ed2d373aa

---

The full DDL for the Person table is:

create table dbo.Person (
OrganisationId int not null,
PersonId int not null,
GivenNames nvarchar(50) not null,
FamilyName nvarchar(50) not null,
CreatedDtm datetimeoffset(0) not null,
UpdatedDtm datetimeoffset(0) not null,

constraint UC_Person_PK
primary key clustered (OrganisationId, PersonId),

constraint Organisation_Hosts_Person_FK
foreign key (OrganisationId)
references dbo.Organisation (OrganisationId)
);

---

The full DDL for the Organisation table is:

create table dbo.Organisation (
OrganisationId int not null,
OrganisationName nvarchar(100) not null,
CreatedDtm datetimeoffset (0) not null,
UpdatedDtm datetimeoffset (0) not null,

constraint UC_Organisation_PK
primary key clustered (OrganisationId)
);

---

This personal project was actually designed in an effort to understand and incorporate your methodology. So you'll probably recognise a number of characteristics that you've touched on in various PDF documents shared on StackOverflow.

Now that I've shared these additional elements, does that change your feedback in any way?

---

Next, the stored procedures. This time I've included two: 'Attendance_Add_tr' and 'Attendance_Get_tr'.

---

create procedure dbo.Attendance_Get_tr
@OrganisationId int,
@PersonId int,
@AttendanceDate date
as

set nocount on;
set transaction isolation level read committed;

select OrganisationId, PersonId, AttendanceDate
from dbo.Attendance
where OrganisationId = @OrganisationId
and PersonId = @PersonId
and AttendanceDate = @AttendanceDate;

return 0;


---

create procedure dbo.Attendance_Add_tr
@OrganisationId int,
@PersonId int,
@AttendanceDate date
as

set nocount on;

------------------------------------------------------------
-- Validation block
------------------------------------------------------------
set transaction isolation level read committed;

if not exists (
select 1
from dbo.Person
where OrganisationId = @OrganisationId
and PersonId = @PersonId
)
begin
return 8; -- Person does not exist
end

if exists (
select 1
from dbo.Attendance
where OrganisationId = @OrganisationId
and PersonId = @PersonId
and AttendanceDate = @AttendanceDate
)
begin
return 7; -- Attendance on the given date has already been recorded.
end

------------------------------------------------------------
-- Execute block
------------------------------------------------------------
set transaction isolation level serializable;

begin transaction;

insert into dbo.Attendance (OrganisationId, PersonId, AttendanceDate)
select @OrganisationId, @PersonId, @AttendanceDate
where not exists (
select 1
from dbo.Attendance
with (updlock)
where OrganisationId = @OrganisationId
and PersonId = @PersonId
and AttendanceDate = @AttendanceDate
);

if @@ROWCOUNT <> 0
begin
commit;
end
else
begin
rollback;
end;

return 0;

---

Hopefully the code provided above fills in a few blanks and is closer to the mark.

Cheers,
Daniel

Derek Ignatius Asirvadem

Jun 27, 2021, 5:11:07 AM
Daniel

> On Sunday, 27 June 2021 at 13:16:42 UTC+10, daniel wrote:

> Actually, in this case, attendance is a shooter being present at a recognised sporting shooters club on a given date. NSW AU legislation and regulations for shooting clubs mandate a register of attendees be kept. This is merely a list of people who attend club premises on a given day.

I am a shooter. NSW B grade. I resist A grade, you know where that goes.

> > That means the entire set of resources, that are required to host MV-non-CC (“snapshot isolation”) on the database, have been ADDED. That is the massive resources to host the MV part, because the Ordinary Locking that pre-existed in the server, that provides the CC part, continues to be used. But this connection has disabled its participation in the massive additional MV resources.
> I'm afraid that can't be helped for my part. Microsoft added snapshot isolation years ago, but it is an opt-in feature.
>
> I can say that resource utilisation is markedly different when MVCC is turned on. Specifically, MVCC results in a marked uptick in TempDB activity.
>
> Can we work on the premise that when MVCC is not enabled in SQL Server then there is no penalty imposed?

Yeah, sure. Depends on whether you will benchmark this or not.
I was making the distinction:
1. Server level.
If MV-non-CC is implemented, a whole pile of resources need to be *ADDED*, allocated and configured. Hence the “uptick in tempdb usage”; CPU usage; etc. It can’t be turned off, but it can be removed.

2. Connection level.
If [1] has been done, then within those limits, one can enable snapshot isolation.

> ---
> > Not just “guarantee isolation of transactions”, which is one requirement of ACID, but the server provides all four requirements for ACID. But but but ACID is not merely a server-side concept, there are requirements on the client side, which is what is being played out here, explained here.
>
> > It does not consist of walking up to the db and inserting one row correctly. The whole system has to be ACID, ie. comply with ACID requirements. At best that means the Open Architecture/OLTP Standard. Unchanged from the 1960’s (truth does not change). At worst it means complying with ACID partially and no OLTP. (Not complying with ACID deems the system outside this class of discussion).
> Agreed. I've looked at your document about Open Architecture, and accept the assertion that encapsulating all transactions within the database as stored procedures is the only way to guarantee that all constraints hold true at all times.

AND do not GRANT INSERT/UPDATE/DELETE to anyone
AND it provides the best processing speed; least contention; best concurrency
(Yes, of course there is more, such as the db design.)
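That permission model can be sketched directly in DDL (the role name here is a placeholder; this assumes the usual ownership chaining between the procedures and the tables they touch):

```sql
-- Application logins are members of this role. They receive no
-- table permissions at all, only EXECUTE on the Transaction procs.
create role AppUser;

grant execute on dbo.Attendance_Get_tr to AppUser;
grant execute on dbo.Attendance_Add_tr to AppUser;

-- No grant of SELECT/INSERT/UPDATE/DELETE on the tables themselves;
-- ownership chaining lets the procedures access them.
```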

> I agree that the procedure initially provided would not even be called if another procedure, perhaps named 'Attendance_Get_tr', indicated that the attendance on a given date had already been recorded.

That is excellent. That is what the GUI would be calling (as an example). Even a simple SELECT, running as compiled and QT-ed code, runs faster than “dynamic SQL” in the client. Not to mention, the security hole is closed.

The Transaction suffixes are:
_tr - straight, self-contained transaction
_utr - utility [Modular Code block] called from a _tr
_btr - batch transaction (as detailed in the other two threads with Nicola)
- not a transaction

You can remove the suffix.

> ---
> > Ok. You have done a good job of reading some of my answers, and figuring out what the sp template should be, at least for a start. The problem is, you have picked up fragments, not the whole (which is understandable if you understand my intro).
>
> > You said RDBMS. That means it has to be Relational. The “keys” are hopeless, non-Relational, but let’s overlook that.
> Hopefully the explanation above - that this is actually unrelated to 'Time and Attendance', but instead related to a requirement for sport shooting clubs to maintain a register of club attendees - changes your judgement concerning the key.

The judgement cannot be changed, because the crime is real. The point is, we can overlook it for this purpose.

The NSW Firearm Licence requirement is 10 attendances per annum. The purpose is to ensure the licence and the guns are actually being used. The club has to record attendance and actual range practice (as opposed to signing the register, and having a yarn with the good old boys). We can skip that, and leave it as you have modelled.

> ---
> > Make the following changes and re-submit:
> > 1. Attendance is dependent on Person, and Person is dependent on organisation.
> > __ Create table Organisation, PK ( OrganisationId )
> > __ Create table Person, PK ( OrganisationId, PersonId )
>
> Certainly. I'll share the additional DDL.

Great work.

> For ease of viewing, I've published all of the code below on GitHub too:
> https://gist.github.com/DanielLoth/76d241515655e76cadddef6ed2d373aa

Since you are sharing it via a link to github, you can skip posting it here.

> ---
>
> This personal project was actually designed in an effort to understand and incorporate your methodology. So you'll probably recognise a number of characteristics that you've touched on in various PDF documents shared on StackOverflow.

Indeed. You are a good and faithful follower, you have picked up quite a lot.

> Now that I've shared these additional elements, does that change your feedback in any way?

Yes, great progress.

The thing is this. And both Nicola and you have this problem. Not because you are stupid, you are not, but due to the programming that passes for “education” that has been imposed on you. You have been trained to (a) think in terms of fragments, not integration, not atoms. And that means (b) bottom-up, instead of top-down. So you have noticed and copied things correctly, but the overall need, and the integration within the Atom, is not complete.

> ---
>
> create procedure dbo.Attendance_Get_tr
> @OrganisationId int,
> @PersonId int,
> @AttendanceDate date
> as
>
> set nocount on;
> set transaction isolation level read committed;
> select OrganisationId, PersonId, AttendanceDate
> from dbo.Attendance
> where OrganisationId = @OrganisationId
> and PersonId = @PersonId
> and AttendanceDate = @AttendanceDate;

IF @@ROWCOUNT != 1
____ RETURN 1 -- common not_exists
ELSE
____ RETURN 0
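Folding that ReturnValue handling into the quoted proc, a minimal sketch (MS/SQL syntax, table and column names as per Daniel's DDL) would be:

```sql
-- Sketch only: Attendance_Get_tr with the ReturnValue check folded in.
create procedure dbo.Attendance_Get_tr
    @OrganisationId int,
    @PersonId int,
    @AttendanceDate date
as
set nocount on;
set transaction isolation level read committed;

select OrganisationId, PersonId, AttendanceDate
    from dbo.Attendance
    where OrganisationId = @OrganisationId
    and PersonId = @PersonId
    and AttendanceDate = @AttendanceDate;

if @@ROWCOUNT != 1
    return 1;   -- common not_exists
return 0;
```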

> ---
>
> create procedure dbo.Attendance_Add_tr
> @OrganisationId int,
> @PersonId int,
> @AttendanceDate date
> as
>
> set nocount on;
>
> ------------------------------------------------------------
> -- Validation block
> ------------------------------------------------------------
> set transaction isolation level read committed;

(No locks held.)

IF NOT EXISTS (
___ SELECT 1
_______ FROM Organisation
_______ WHERE OrganisationId = @OrganisationId
___ )
___ RETURN 9 -- Organisation does not exist

> if not exists (
> select 1
> from dbo.Person
> where OrganisationId = @OrganisationId
> and PersonId = @PersonId
> )
> begin
> return 8; -- Person does not exist
> end
>
> if exists (
> select 1
> from dbo.Attendance
> where OrganisationId = @OrganisationId
> and PersonId = @PersonId
> and AttendanceDate = @AttendanceDate
> )
> begin
> return 7; -- Attendance on the given date has already been recorded.
> end
>
> ------------------------------------------------------------
> -- Execute block
> ------------------------------------------------------------
> set transaction isolation level serializable;

(Redundant, as explained, but harmless.)

> begin transaction;

The Validate block did not hold locks.
Now in the Execute block, you have to obtain Intent locks (HOLDLOCK), in the prescribed order, the hierarchy of Organisation; Person. This is the critical issue wrt eliminating various problems (that CAN be eliminated). Do:
__1 If not exists Organisation[ OrganisationId ] return 9
____ HOLDLOCK

__2 If not exists Person[ OrganisationId, PersonId ] return 8
____ HOLDLOCK

__3 If exists Attendance[ OrganisationId, PersonId, AttendanceDate ] return 7
____ HOLDLOCK

This is not a repeat of the Validate Block because you are now holding locks, on purpose. Forget Disneyland, we have a Lock Manager, we know its components, and we are engaging it, consciously.

The reason for this is, if there is a conflict with any other active Transaction, it will be blocked at the highest level in the tree, and NOT proceed further (eg. to the next statement). This is by design, desired. If you do not go AHA!, if you do not appreciate the brilliance of this, please contemplate, and then ask /specific/ questions.
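In MS/SQL terms, the prescribed sequence inside the Execute block might be sketched as follows (the `with (holdlock)` hint is my reading of Derek's HOLDLOCK; the rollbacks assume the transaction has just been opened):

```sql
-- Sketch: Execute block, Intent locks acquired top-down in the hierarchy,
-- each held to end of transaction.
set transaction isolation level serializable;
begin transaction;

-- [1] Organisation: highest level in the tree, locked first
if not exists (
    select 1 from dbo.Organisation with (holdlock)
        where OrganisationId = @OrganisationId )
begin
    rollback;
    return 9;   -- Organisation does not exist
end;

-- [2] Person: next level down
if not exists (
    select 1 from dbo.Person with (holdlock)
        where OrganisationId = @OrganisationId
        and PersonId = @PersonId )
begin
    rollback;
    return 8;   -- Person does not exist
end;

-- [3] Attendance: the row to be inserted must not already exist
if exists (
    select 1 from dbo.Attendance with (holdlock)
        where OrganisationId = @OrganisationId
        and PersonId = @PersonId
        and AttendanceDate = @AttendanceDate )
begin
    rollback;
    return 7;   -- Attendance already recorded
end;
```

Any conflicting transaction then blocks at the Organisation check and proceeds no further down the tree.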

Remember, dismiss the notion of “isolating the transaction” which is a consequence of the insane Stonebraker mantra. Love the world you are in. Embrace the Lock Manager, he is your friend. The Transaction is real, the rows are real, the changes are real, they cannot be isolated from reality. What we want to do here is deal with that, in the fastest way possible (minimum lock duration), and in the most sociable manner (least conflict, but if conflict encountered, block at the highest level, thereby holding the fewest locks, and affording the antagonist to do the same).

The mantra is hysterically stupid, I can’t believe that other academics accepted it. In the real world of real rows that maintain one version of the truth, that have real changes, writers occasionally block readers, and readers occasionally block writers. Nothing to be scared of.

Nicola, note how that Batch Xact fits here. It needs to lock only the highest row in the hierarchy, and that will prevent any other OLTP Xact from getting into the tree.

(Now you have a slightly different version of [3], with the SELECT & the existence check combined. If you have [3], of course, you don’t need the existence check here.)

> insert into dbo.Attendance (OrganisationId, PersonId, AttendanceDate)
> select @OrganisationId, @PersonId, @AttendanceDate
> where not exists (
> select 1
> from dbo.Attendance
> with (updlock)
> where OrganisationId = @OrganisationId
> and PersonId = @PersonId
> and AttendanceDate = @AttendanceDate
> );
>
> if @@ROWCOUNT <> 0

IF @@ROWCOUNT = 1 -- use a positive check; limited scope, in an IF

> begin
> commit;

RETURN 0

> end
> else
> begin
> rollback;

RETURN 6 -- insert failed
> end;

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Jun 27, 2021, 10:24:27 PM6/27/21
to
Daniel

In the /No CASCADE in Commercial SQL .../ thread,
> On Monday, 28 June 2021 at 11:26:56 UTC+10, Derek Ignatius Asirvadem wrote:
>
> -- Transaction Processing --
> -- Transaction • ACID Properties --
> -- Transaction • ACID • Consistency --
> -----------------------------------------------------
> -- Transaction • ACID • Isolation --
> -----------------------------------------------------
>
> //The intermediate state of a transaction is invisible to [or isolated from] other transactions.
> As a result, transactions that run concurrently appear to be serialized.//
>
> ----------
>
> If you understand this, you will understand that there is only one level of depth in Transactions, either one Transaction or all Transactions in a system. The notion of “nested transactions” is hysterically absurd.

Therefore, for the next increment of the OLTP/Transaction/ACID sproc template, as the first item in the Validate Block, add:

IF @@TRANCOUNT > 0 OR @@TRANCHAINED
___ -- if xact is opened by caller (this xact would be nested), or if every verb is SET to be an xact
___ RETURN 99
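In MS/SQL terms (an assumption on my part: @@TRANCHAINED is Sybase-only), chained mode is the IMPLICIT_TRANSACTIONS setting, exposed as bit 2 of @@OPTIONS, so the same guard might be sketched as:

```sql
-- Sketch: MS/SQL equivalent of the nested-transaction guard.
-- @@TRANCOUNT > 0 : the caller already has a transaction open.
-- @@OPTIONS & 2   : IMPLICIT_TRANSACTIONS is SET ON (chained mode).
if @@TRANCOUNT > 0 or (@@OPTIONS & 2) = 2
    return 99;  -- would render this xact a nested non-transaction
```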

Of course, we use “user defined” error messages, and RAISERROR, so that the client side is alarmed and wakes up properly. And a fixed set of ReturnValues. No need for that here, in the simple template context, where ReturnValues as indicators are enough, but be sure to put that into your server, if you want the full OLTP/Transaction/ACID context. Our Error Messages for this area are as follows. %1 is the sproc name:

>>>> Sybase Transact-SQL, the original, convert to MS Transact-SQL
EXEC sp_addmessage 20003, '%1!: is a transaction, which is Atomic. It has been called from within an open transaction, which would render it a non-transaction. That is not allowed.', "us_english", FALSE, "REPLACE"

EXEC sp_addmessage 20004, '%1!: A transaction has been opened by the caller (but is declared NOT to be open).', "us_english", FALSE, "REPLACE"

EXEC sp_addmessage 20005, '%1!: A transaction has NOT been opened by the caller (but is declared to be open).', "us_english", FALSE, "REPLACE"

EXEC sp_addmessage 20006, '%1!: is an utility transaction, it must be called from within a open transaction.', "us_english", FALSE, "REPLACE"
<<<<

That last one is of course for any [_utr], which checks that @@TRANCOUNT = 1, and which has no BEGIN TRAN. [_utr] code blocks exist for the purpose of Normalising code, aka Code Modularisation, aka DRY. This is not relevant to your template, but in the full OLTP/Transaction/ACID context, it must be known, and handled properly.
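For completeness, the guard at the top of such a [_utr] block might be sketched as below (MS/SQL; the sproc name is hypothetical, and in MS the Sybase `%1!` placeholder becomes `%s` when the message is added):

```sql
-- Sketch: a [_utr] block contains no BEGIN TRAN, so it verifies that
-- the caller has exactly one transaction open before proceeding.
if @@TRANCOUNT != 1
begin
    raiserror (20006, 16, 1, 'MyProc_utr');  -- msg 20006 as defined above
    return 99;
end;
```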

Cheers
Derek

Daniel Loth

unread,
Jun 29, 2021, 8:59:49 AM6/29/21
to
Hi Derek,

I've posted some updated code here incorporating your feedback: https://gist.github.com/DanielLoth/a272a601e38c9c6f2c002956b08d534e

Microsoft recommend using the 'throw' operation instead of the older 'raiserror'. So I've incorporated the notion of an error, but it's in its own function.
Transaction chaining - implicit transactions - is also a little different in MSSQL. So I've incorporated it as a 'set option' clause.

---

> The reason for this is, if there is a conflict with any other active Transaction, it will be blocked at the highest level in the tree, and NOT proceed further (eg. to the next statement). This is by design, desired. If you do not go AHA!, if you do not appreciate the brilliance of this, please contemplate, and then ask /specific/ questions.

It makes sense. It would remove the possibility of deadlock by ensuring that all locks are consistently acquired in the same order.

Also, even if for a moment there is contention (say, on dbo.Person in my example) it'll keep progressing immediately afterwards with a relatively good prospect of still succeeding.

Or if the circumstances have changed (e.g.: the transaction that blocked it actually removed the Person row), it'll quickly detect that the row is not there once unblocked, rollback, and then return control to the caller.

---

Question: What context are these two used in?

> EXEC sp_addmessage 20004, '%1!: A transaction has been opened by the caller (but is declared NOT to be open).', "us_english", FALSE, "REPLACE"
> EXEC sp_addmessage 20005, '%1!: A transaction has NOT been opened by the caller (but is declared to be open).', "us_english", FALSE, "REPLACE"

The other two made a lot of sense. But I'm not sure what you mean by 'declared to be'.

---

Cheers,
Daniel

Derek Ignatius Asirvadem

unread,
Jun 29, 2021, 11:11:07 AM6/29/21
to
Danno

> On Tuesday, 29 June 2021 at 22:59:49 UTC+10, Daniel Loth wrote:
> Hi Derek,
>
> I've posted some updated code here incorporating your feedback: https://gist.github.com/DanielLoth/a272a601e38c9c6f2c002956b08d534e

Good work.

> Microsoft recommend using the 'throw' operation instead of the older 'raiserror'. So I've incorporated the notion of an error, but it's in its own function.

Horrible. MS has always had a special way of doing things, which is always more work; more code that can break; more things to go wrong; more things to administer. Have they taken away “user defined” error messages ? What happened to good old backward-compatibility.

> Transaction chaining - implicit transactions - is also a little different in MSSQL. So I've incorporated it as a 'set option' clause.

Perfect.

> ---
> > The reason for this is, if there is a conflict with any other active Transaction, it will be blocked at the highest level in the tree, and NOT proceed further (eg. to the next statement). This is by design, desired. If you do not go AHA!, if you do not appreciate the brilliance of this, please contemplate, and then ask /specific/ questions.

> It makes sense. It would remove the possibility of deadlock by ensuring that all locks are consistently acquired in the same order.

Yes. That is the only way to eliminate deadlocks. An overall Access Sequence, which has to be defined at the data modelling level. Here, it is a simple stepping through the hierarchy.

>>>>
That is at the code level.
80% of preventing deadlocks is in the data model. The /RM/ requires the data to be arranged in trees (Directed Acyclic Graphs); no circular references. But too many binary relations cause problems as well. Again, the need for an overall Access Sequence.
With the 1960’s Record Filing System that the pig poop eaters keep marketing and teaching as “relational”, and their forced circular references, the “database” ends up like a spiders web. You might be familiar with that. Guaranteed deadlocks.
<<<<

> Also, even if for a moment there is contention (say, on dbo.Person in my example) it'll keep progressing immediately afterwards with a relatively good prospect of still succeeding.

The prospects are always good, given the Validate block in both the client and the sproc.

Blocking is not the problem. Again, forget the hysterical mantra. In a restaurant with 50 seats and two toilets (washrooms to my American friends), we want the doors to have exclusive locks, not no-locks, not shared-locks. (I am not interested in evaluating marginal cases.) Blocks are the civilised way to avoid conflict, by letting one connection through and blocking the others.

The problem is not blocking (blocking works perfectly), but (a) connections that are blocked (blockers and blockees) while holding a large number of locks, and (b) block duration, wherein we want millisecs (whereas lock duration is microsecs). Two or three of those guys, and you have railway trains running on surface streets. Prevention is simply limiting transaction size, which means destroying the GUI designers' notion of a fat and “clever” GUI. Some developers scream like stuck pigs when they hear that implicit transactions are banned, others have seizures when the myopic “persistence” is cancelled (what, no CRUD ?!??!). Again, more affected by the data model than the code.

Being blocked means a connection level context switch in the engine, which is the most expensive operation.

The behaviour you describe is intended.

You can elevate that code by setting a maximum wait time for locks; rolling back; etc.
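In MS/SQL, for instance, the maximum wait can be set per connection (a sketch; the default of -1 means wait forever):

```sql
-- Sketch: cap lock waits at 5 seconds for this connection.
-- A blocked statement then fails with error 1222 (lock request time
-- out period exceeded), which the caller can catch, rollback, and retry.
set lock_timeout 5000;
```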

> Or if the circumstances have changed (e.g.: the transaction that blocked it actually removed the Person row), it'll quickly detect that the row is not there once unblocked, rollback, and then return control to the caller.

Exactly.

The behaviour you describe is intended.

There is more to that, which we will get into shortly.

I trust you appreciate, in serious subjects, learning has to be in increments.

> ---
>
> Question: What context are these two used in?
> > EXEC sp_addmessage 20004, '%1!: A transaction has been opened by the caller (but is declared NOT to be open).', "us_english", FALSE, "REPLACE"
> > EXEC sp_addmessage 20005, '%1!: A transaction has NOT been opened by the caller (but is declared to be open).', "us_english", FALSE, "REPLACE"
> The other two made a lot of sense. But I'm not sure what you mean by 'declared to be'.

Mistake, copy-paste error. I should have excluded that. You can ignore it.

But since it has been exposed, I will explain what I can without giving away the shop. Basically, I have a deeper level of Transaction control. If one implements such, it has to be immune to the nuances of Transaction control provided in the platform. Sometimes I have to work on a system in which the developers are hostile: they have implicit transactions and all manner of non-white filth, it will take six months to get them to replace it according to Standards. So the Transactions that my guys write have to operate in that mixed environment for a while, sometimes sabotaged by the calling code doing nasty things. My template catches all that, and these are the messages. The setting (declaration) is in connection memory, not in @variables or @@variables.

At the banks, they call it “hardening” the server, making it immune to mischief or hack. SG complies with various finance industry standards, all the way down to the code.

That is probably enough for a guy like you to roll your own.

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Jun 29, 2021, 9:39:26 PM6/29/21
to
Danno

This covers generic SQL, the context of this thread.

Specific SQL (eg. your MS/SQL) is covered in your github thread.

Please take my comments as coming from a hard-core [nothing breaks] Rdb supplier, with rigid standards ... either existing standards plus our Extensions that are required to make them complete (eg. IDEF1X), or where there are no standards, our proprietary SG Standards.
__ Eg. the Open Architecture Standard has existed since Britton-Lee (inventor of the Client/Server Architecture); subsequently Sybase published it in the early 1980's, but it is not promoted; not well known, so we have maintained it and made it more precise. It has not changed since 1993.
__ Eg. the OLTP Standard (within the Open Architecture Standard) is pretty much SG alone, AFAIK. No doubt there are a few other high-end suppliers who have something similar, but it is not published. Much of it is considered proprietary.

We work in financial markets, thus we have additional Standards and Methods, again with SG Extensions (eg. Security & Auditing in the server), which are not included here, as this is a generic, not vertical-specific, exchange.

SQL
We work in Sybase, now SAP/Sybase, only, and deliver code in that flavour of SQL only ... but with a generic SQL intent, which means the older definitions of the SQL Standard, specifically dismissing the late additions because they are insane (promoted and imposed by the pig sucker brigade [Date; Darwen; Fagin; MV-non-CC groupies; etc] ). The developer staff at the target system are free to translate that delivery into their particular flavour of SQL.

Obviously that means commercial SQL Platforms, it excludes the pretend "SQLs"; the freeware; etc, and Oracle (it has no ACID, its declaration of SQL compliance is false, one is forced to use non-set processing [refer my posts re the Sybase vs Oracle benchmark] ).

Yes, we know MS/SQL very well (it is the bastard son of Sybase). No, we do not keep up with the changes.

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Jun 30, 2021, 6:14:30 AM6/30/21
to
Danno

The high-end market have always had, and we have never left, Ordinary Locking.

The MV-non-CC market, that has never worked, but is heavily propagandised by the academics, (all of whom live in the asylum by insistent choice, safely removed from reality), has caused the high-end SQL Platform suppliers to **ADD ON** Snapshot Isolation. (Minus present company, of course.)

It is not a binary choice, because the freaky thing cannot work, thus it is an Add-On to the thing that does work.

/
Although that is a more honest label than the false "MVCC" because "MVCC" does not have any Concurrency Control, my label /Offline Version Support/ is more accurate still. I don't accept the notion of /Isolation/ because it is schizophrenic: Sybase and MS erect the /Isolation/ fantasy for the poor souls that can't handle the reality of a shared database, at additional resource cost, because nothing is free, not even in the asylum ... but at COMMIT time, the fantasy has to be committed to reality ... at which point, oopsey doopsey poopsey that fantasy Offline Version gets thrown out for being too stale. Hint: the real OLTP Transactions blew the doors off the snapshot while the developers were adoring their snapshot; the train left the station; the ship has sailed; the carriage changed back into a pumpkin.
/

The real problem, the core issue, is not MV-non-CC. It is the complete lack of understanding of Transactions; ACID Transactions; what the frog **ONLINE** means; what Online Transaction Processing means. Such is the state of academics, and has been for FIFTY YEARS. Such are the developers who code for MV-non-CC. Enabled by the pig poop eating academics and teachers. Empowered schizophrenia. Those developers, with their steadfast refusal to engage with reality, whose fantasy of safe isolation is their existential reality. That is the problem.

I think you understand some of that, which is why you are here. But not all of it.

If you have an interest in the MV-non-CC issues as it relates to Ordinary Locking, please read the /MVCC Advantages & Disadvantages thread/
__ https://groups.google.com/g/comp.databases.theory/c/f474bCuvZ_A/m/tPseucr6AQAJ

Cheers
Derek

Daniel Loth

unread,
Jun 30, 2021, 9:27:55 AM6/30/21
to
Hi Derek,

Lots of stuff here. I probably won't be able to reply to all of it tonight.

---

> Horrible. MS has always had a special way of doing things, which is always more work; more code that can break; more things to go wrong; more things to administer. Have they taken away “user defined” error messages ? What happened to good old backward-compatibility.

Having given it some thought, I don't think there's any need for XACT_ABORT in the code I've written. I've explicitly handled rollback. So it's probably quite safe for me to use RAISERROR and add the messages to sys.messages as originally proposed. Something for tomorrow.

---

> Yes. That is the only way to eliminate deadlocks. An overall Access Sequence, which has to be defined at the data modelling level. Here, it is a simple stepping through the hierarchy.

I've actually put an image of my data model diagram in that GitHub gist comment section. It's a work-in-progress, but it's an indication of what I'm going for with it. So far I think it has reasonable hierarchy to it. There are no cycles or anything like that.

I can't say for sure if I've modelled the 'Location' stuff right, and I'm kinda unsure about the 'Participation' stuff too.

'Participation' in this context being a lawfully valid participation. In NSW, that would be one of: Officiated a match OR participated in an endorsed competition at another club OR firearms safety training OR target practice.

Also, the keys - I'm not sure how correct they are at this point.

So yeah, work in progress only. Not finished, probably not entirely correct either in terms of what I've already got there.

The vision, in a nutshell, is club management software with support for:
1. Scheduling / Timetabling (events on ranges, and the participants)
2. Competition planning and scoring (squadding, range duties, etc)
3. Membership management
4. Compliance reporting based on an arbitrary Reporting Year (Financial Year / Calendar Year / Club Reporting Year)
5. General reporting (e.g.: in support of forward planning, the number of matches shot in a Quarter might be a proxy for how many new targets and target patches are required for next Quarter).

And further to that, I'd like the compliance elements expressed as rows of data rather than elements of the schema (i.e., I don't want to have columns specific to AU-NSW and AU-SA).

So far I've found it relatively tricky to model it in a way that it's all nicely integrated.

---

> But since it has been exposed, I will explain what I can without giving away the shop. Basically, I have a deeper level of Transaction control. If one implements such, it has to be immune to the nuances of Transaction control provided in the platform. Sometimes I have to work on a system in which the developers are hostile: they have implicit transactions and all manner of non-white filth, it will take six months to get them to replace it according to Standards. So the Transactions that my guys write have to operate in that mixed environment for a while, sometimes sabotaged by the calling code doing nasty things. My template catches all that, and these are the messages. The setting (declaration) is in connection memory, not in @variables or @@variables.

> At the banks, they call it “hardening” the server, making it immune to mischief or hack. SG complies with various finance industry standards, all the way down to the code.

That makes sense now. I hadn't considered hostility or saboteurs when looking at them before.

---

> The real problem, the core issue, is not MV-non-CC. It is the complete lack of understanding of Transactions; ACID Transactions; what the frog **ONLINE** means; what Online Transaction Processing means. Such is the state of academics, and has been for FIFTY YEARS. Such are the developers who code for MV-non-CC. Enabled by the pig poop eating academics and teachers. Empowered schizophrenia. Those developers, with their steadfast refusal to engage with reality, whose fantasy of safe isolation is their existential reality. That is the problem.

I agree that there's a dearth of good content out there. For example, I'm not aware of any book that adequately covers the subject. A lot of the mainstream books wind up going down the same path: Poor practice, little to no consideration of methodical design, and ultimately yielding a so-called database with little to no integrity.

MVCC is perceived as an advantage because it allows tools like ORMs to get away with murder (e.g.: deleting 50,000 rows, or aggregating 200,000 rows in the context of a browser's HTTP request, or just about anything else that a developer tests with just 10 rows of data locally before shipping it to production).

If development practice were markedly better then I think people would recognise it for all of its disadvantages and reconsider their decision to use it.

---

Regarding Sybase: I've never worked with it personally. If I'm honest, I find SQL Server pretty good because the associated tooling is pretty good. For example, I'm pretty fond of DacFx and DACPACs and the declarative approach that they afford you when working with databases. Declarative in the sense that you tell it you want your table to have these columns / constraints / keys / indexes, and it figures out what it needs to do to bring a target database into alignment with it.

---

Regarding the Open Architecture standard: I'll have to look up the original sometime. I've only really looked at your PDF (this one: http://www.softwaregems.com.au/Documents/Article/Application%20Architecture/Open%20Architecture.pdf)

---

Cheers,
Daniel

Nicola

unread,
Jul 2, 2021, 4:55:38 PM7/2/21
to
On 2021-06-27, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> For ease of viewing, I've published all of the code below on GitHub too:
>> https://gist.github.com/DanielLoth/76d241515655e76cadddef6ed2d373aa

Thanks. That's my attempt at porting to PostgreSQL:

https://gist.github.com/nv-uniud/dd0008c8684d9ff50fbc814466d09ff9

I see that you have continued your discussion on Github, but I haven't
kept up.

> __3 If exists Attendance[ OrganisationId, PersonId, AttendanceDate ] return 7
> ____ HOLDLOCK

Why does this need holdlock, too? Locks on Organisation and Person have
already been acquired at this point:

if exists (
select 1
from dbo.Attendance
with (holdlock)
where OrganisationId = @OrganisationId
and PersonId = @PersonId
and AttendanceDate = @AttendanceDate
)
begin
rollback;
return 7; -- Attendance on the given date has already been recorded.
end

> Nicola, note how that Batch Xact fits here. It needs to lock only the
> highest row in the hierarchy, and that will prevent any other OLTP
> Xact from getting into the tree.

I have to review our previous discussion. I'll come back to you if
I need futher clarifications.

Certainly, with these new threads I better understand your criticisms.

Nicola

Derek Ignatius Asirvadem

unread,
Jul 3, 2021, 4:45:44 AM7/3/21
to
> On Saturday, 3 July 2021 at 06:55:38 UTC+10, Nicola wrote:
> > On 2021-06-27, Derek Ignatius Asirvadem wrote:

> >> For ease of viewing, I've published all of the code below on GitHub too:
> >> https://gist.github.com/DanielLoth/76d241515655e76cadddef6ed2d373aa

> Thanks. That's my attempt at porting to PostgreSQL:
>
> https://gist.github.com/nv-uniud/dd0008c8684d9ff50fbc814466d09ff9

Great. I can see a lot of value in such a port, no argument, but the purpose is unclear. If I knew the purpose, my responses would be more specific.

> I see that you have continued your discussion on Github, but I haven't
> kept up.

Well, re this thread & subject, I think we are keeping the discussion here, the code (for you and Daniel) there. There is a separate discussion re Dan’s DM there (the interface is better.)

Dan, if it is ok with you, please do the same.

Nicola, no time now to look at your GitHub entry, I will get back to you with comments/questions. Comments on the code only there, comments on structure and requirement here.

> > __3 If exists Attendance[ OrganisationId, PersonId, AttendanceDate ] return 7
> > ____ HOLDLOCK
>
> Why does this need holdlock, too? Locks on Organisation and Person have
> already been acquired at this point:
> if exists (
> select 1
> from dbo.Attendance
> with (holdlock)
> where OrganisationId = @OrganisationId
> and PersonId = @PersonId
> and AttendanceDate = @AttendanceDate
> )
> begin
> rollback;
> return 7; -- Attendance on the given date has already been recorded.
> end

In chronological order.
- In Sybase at IL[3], it is an automatic HoldLock (hold page until end of Xact)
--- therefore if the Attendance row exists, the lock on the page is held until end of Xact
- MS/SQL was stolen from Sybase at V4.9.2, but MS have rewritten the codeline several times, so while they are identical to a great degree, especially re the syntax, at this level of detail they may not be. So when discussing with Dan, to be clear, I indicated to him that we want a HOLDLOCK in MS equivalent terms. It is purely documentary. That is for Organisation; Person; and Attendance, after the BEGIN TRAN.

Put another way, in order to obtain a HOLDLOCK, one has to have Isolation Level 3, which means one has to have a Transaction open.

And now, because you have ported the code, it is an indicator to you, to do whatever is equivalent in your program suite.

__ https://help.sap.com/viewer/e0d4539d39c34f52ae9ef822c2060077/16.0.4.0/en-US/ab3e9a97bc2b10149c02ff1c52f92cb4.html

> Certainly, with these new threads I better understand your criticisms.

Good. Please appreciate, when I give a course to developers, it is a fixed 3-day affair, with a full set of lecture notes, max six people. Sometimes 6 x half-days. For mucho lira. Of course that is a full course on OLTP, before we get to Sybase, before they can understand and use the SG OLTP Standard & sp template. I can’t do that here, I am answering questions only. So it is gearing you up in Transactions & OLTP, the hard way.

It is not “criticism” [of MV-non-CC]. It is a tutorial to drag academics and developers out of the MV-non-CC/PusGres mindset, into the Transaction Processing that we have had since the 1960’s (pre-Relational) and the 1980’s (Relational platforms).

Eg. if you cannot see that Ordinary Locking uses no additional resources (to the existing Data Storage & Cache), whereas MV-non-CC uses:
- masses of extra storage at every level [versions of rows], spread across the entire scope of data storage ... Dan has indicated further detail
- note the horrendous pressure on indices, especially active ones
- all of which has to be processed [sometimes re-read] at COMMIT time
--- which means re-reading many times for many COMMITS
- masses of additional memory in the cache: to whatever level the pos caches pages (as I understand it, these are duplicated across the run-time threads, but that is a separate issue consequent to not having an architecture), it has to use more cache
- obviously, for all that, far more processing power is consumed
- *AND*
- has to clean all dirty storage up
--- Vacuum
--- Shrink/Expand
--- Broom
--- Witch
--- etc
I have failed.

Erecting a fantasy (da stale versions “held” privately) is massively expensive. Maintaining a fantasy that is massively expensive, is even more expensive.

Whether it is done properly (added resources configured for version storage, on top of an OLTP server with Ordinary Locking & full ACID), or mickey mouse style (freeware with fantasy only, later with 2PL added, later still with Predicate Locks added; etc), erecting a fantasy is stupid, and massively expensive. This is where the Stonefreaker cult has led you; all academics; 10,000 developers for PoopGres; and all developers who use MV-nonCC program suites. Never mind reality, let’s all sing the Stonefreak mantra, because it feels good.

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Jul 3, 2021, 6:02:22 AM7/3/21
to
> On Saturday, 3 July 2021 at 06:55:38 UTC+10, Nicola wrote:
>
> https://gist.github.com/nv-uniud/dd0008c8684d9ff50fbc814466d09ff9

Discussion, not code.

Nicola, would you like differences between SQL and the PoopGres syntax discussed here, or on GitHub ?

----------

> -- From https://www.postgresql.org/docs/current/xproc.html:
> -- A procedure can commit or roll back transactions during its execution (then
> -- automatically beginning a new transaction),

So that is not a Transaction. That states that the freaky thing is always in a "transaction" mode of its own definition. This tells me that nothing has changed since V10. Functions (oopsey, the now sort-of stored procedure thingees) are "twunsackshunal". START TRANSACTION still does Fanny Adams.

And now, security xor "transactional-ness" is a binary choice.

God help me.

Now I understand better, much earlier in the thread, when you said some things, wherein you assumed even reads were "transactional". That is really dangerous.

Do not, under any circumstances, think that that is SQL. No, it is PSuckDeadBearsGres only.

Cheers
Derek

Nicola

Jul 3, 2021, 6:28:55 AM
On 2021-07-03, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> On Saturday, 3 July 2021 at 06:55:38 UTC+10, Nicola wrote:
>> > On 2021-06-27, Derek Ignatius Asirvadem wrote:
>
>> >> For ease of viewing, I've published all of the code below on GitHub too:
>> >> https://gist.github.com/DanielLoth/76d241515655e76cadddef6ed2d373aa
>
>> Thanks. That's my attempt at porting to PostgreSQL:
>>
>> https://gist.github.com/nv-uniud/dd0008c8684d9ff50fbc814466d09ff9
>
> Great. I can see a lot of value in such a port, no argument, but the
> purpose is unclear. If I knew the purpose, my responses would be more
> specific.

For now, it's just to see to what extent PostgreSQL supports that kind
of coding. Stored procedures were added recently, so that is just my
exploration of the feature.

Whether it makes sense to code that in a similar way is still unclear
to me. It's a one page procedure performing several lookups just to
insert a record, a step which in PostgreSQL would conflict only with
a similar insert—and the duration of the conflict would be short. Also,
one purpose of your template is to avoid deadlocks, which in MVCC cannot
happen anyway. In fact, adding explicit locking might introduce the
possibility of deadlocks.
Ok.

> And now, because you have ported the code, it is an indicator to you,
> to do whatever is equivalent in your program suite.
>
> __
> https://help.sap.com/viewer/e0d4539d39c34f52ae9ef822c2060077/16.0.4.0/en-US/ab3e9a97bc2b10149c02ff1c52f92cb4.html

Thanks.

>> Certainly, with these new threads I better understand your criticisms.
>
> It is not “criticism” [of MV-non-CC]. It is a tutorial to drag
> academics and developers out of the MV-non-CC/PusGres mindset, into
> the Transaction Processing that we have had since the 1960’s
> (pre-Relational) and the 1980’s (Relational platforms).

First, the "inconvenient truth"—to put it as in some recent paper
[0]—i.e., the issues you and Daniel mention about MVCC, is well known
also in academia. That is not a reason to abandon the technique, rather
a push towards further research to improve it or to find contexts in
which it might provide some advantage—which may not be the conventional
disk-based storage model which you seem to assume.

Second, the only point I wanted to make when we first started talking
about MVCC is that it is possible to obtain serializable schedules with
(an extension of) that protocol, which you did (do) not want to accept.

Then, if you want to argue that in the wider "ACID/OLTP context" that
does not matter because the performance sucks, long-running transactions
work with stale data, perfectly reasonable schedules can never happen
under MVCC, isolation is excessive, etc., then we have little to argue
about yet because I agree with you and Daniel on all those points.

Nicola

[0] Rethink the Scan in MVCC Databases, SIGMOD'21
https://dl.acm.org/doi/pdf/10.1145/3448016.3452783

Nicola

Jul 4, 2021, 6:08:34 AM
On 2021-07-03, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> On Saturday, 3 July 2021 at 06:55:38 UTC+10, Nicola wrote:
>>
>> https://gist.github.com/nv-uniud/dd0008c8684d9ff50fbc814466d09ff9
>
> Discussion, not code.
>
> Nicola, would you like differences between SQL and the PoopGres syntax
> discussed here, or on GitHub ?

I'd rather keep the discussion here.

> ----------
>
>> -- From https://www.postgresql.org/docs/current/xproc.html:
>> -- A procedure can commit or roll back transactions during its execution (then
>> -- automatically beginning a new transaction),
>
> So that is not a Transaction. That states that the freaky thing is
> always in a "transaction" mode of its own definition. This tells me
> that nothing has changed since V10. Functions (oopsey, the now
> sort-of stored procedure thingees) are "twunsackshunal". START
> TRANSACTION still does Fanny Adams.

Can you elaborate on that? If I do:

start transaction;
-- Some stuff
commit;

select ...; -- (1)

insert ...; -- (2)

start transaction;
-- Some stuff
commit;

don't (1) and (2) constitute each a transaction?

> And now, security xor "transactional-ness" is a binary choice.

I don't understand what this means.

Nicola

Derek Ignatius Asirvadem

Jul 4, 2021, 6:41:52 AM
> On Saturday, 3 July 2021 at 20:28:55 UTC+10, Nicola wrote:
> > On 2021-07-03, Derek Ignatius Asirvadem wrote:
> >> On Saturday, 3 July 2021 at 06:55:38 UTC+10, Nicola wrote:
> >
> >> That's my attempt at porting to PostgreSQL:
> >>
> >> https://gist.github.com/nv-uniud/dd0008c8684d9ff50fbc814466d09ff9
> >
> > Great. I can see a lot a value in such a port, no argument, but the
> > purpose is unclear. If I knew the purpose, my responses would be more
> > specific.
>
> For now, it's just to see to what extent PostgreSQL supports that kind
> of coding. Stored procedures were added recently, so that is just my
> exploration of the feature.
>
> Whether it makes sense to code that in a similar way is still unclear
> to me. It's a one page procedure performing several lookups just to
> insert a record, a step which in PostgreSQL would conflict only with
> a similar insert—and the duration of the conflict would be short.

Sure, that is “true for you” in the fantasy world of MV-non-CC. Remember, it has no Concurrency Control. Every verb is a false “COMMIT”. I can’t see any sense in just coding it, without trying anything. Hence, /what is the purpose ?/

OTOH, if you want to see what it is like to operate in reality, the world of a single shared resource, which means Controlling Concurrency (and Currency), you have to accept the mechanism of a Lock Manager (or produce something new). Then yes, you have to code lock commands, same as everyone else who has to elevate their code in PissGriss after suffering horrendous problems with vanilla MV-non-CC.

> In fact, adding explicit locking might introduce the
> possibility of deadlocks.

Yes, because you are forced to stop thinking like a 5-year-old girl, and to start thinking like a 7-year-old boy.
Yes, because all deadlocks are created; written by the developers (hint: there are no deadlocks in the distribution media).
Yes, because you have to be responsible (as opposed to the Disneyland MV-non-CC groupies, who are NOT responsible, which is the premise of the Mental Health Act ... I do not use terms such as /schizophrenic/ lightly).

> which in MVCC cannot
> happen anyway.

It is your code, you have to choose:
__ either meaningless code that uses MV-non-CC and is Anti-Transaction Processing; Anti-ACID, Anti-SQL;
__ or code attempting to reach the first rung of Transaction Processing (we know it cannot get to even the third rung), and because MV-non-CC is a golden farce, and because they have added a “2PL” lock mangler as well as a Predicate Lock mangler, to go beyond the farce, you *do* need to code locking commands.

Which one cannot do on an SQL Platform, it is stupid and dangerous, and therefore not permitted.

> Also,
> one purpose of your template is to avoid deadlocks, which in MVCC cannot
> happen anyway.

Bringing that back to me.
Yes, it is one purpose, but very secondary. The purpose is:

1. To maintain Currency of data (one version of the truth, the latest)
__ (FYI. You have dived deep into this one, and we have closed many holes, but it is not complete, there are two holes that are well-known by us since 1965, that have not been identified by you or closed by me providing the solution.)

2. To allow efficient sharing of data (that one version of the truth)
__ (FYI. There is some appreciation, but not a proper understanding, you do not have the method yet.)

3. To determine conflicts at the earliest point, and to block at the earliest point.
__ thereby holding the least locks, and blocking for the smallest duration.

4. Obviously, that means some sequence in each Transaction.
__ The more Xacts that use *A* sequence, the better. That progresses to collections of Xacts that use *A* particular Sequence. Deadlocks are prevented within each Sequence.
__ If there is a single overall Sequence, deadlocks are prevented across the system.
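A minimal T-SQL sketch of points 3 and 4, assuming a hypothetical Customer/CustomerOrder hierarchy (the table, column and variable names are illustrative, not taken from Daniel's code):

```sql
-- Minimal T-SQL sketch: every transaction touching this hierarchy locks
-- the parent (highest) row first, children second, in one fixed sequence.
-- Two such transactions can block one another, but cannot deadlock.
-- All table, column and variable names here are hypothetical.
DECLARE @CustomerId int = 42;

BEGIN TRANSACTION;

    -- Earliest conflict point: acquire the update lock on the parent row
    IF EXISTS (SELECT 1
                 FROM dbo.Customer WITH (UPDLOCK)
                WHERE CustomerId = @CustomerId)
    BEGIN
        -- Children are always locked after the parent, never before
        INSERT dbo.CustomerOrder (CustomerId, OrderDate)
        VALUES (@CustomerId, GETDATE());
    END;

COMMIT TRANSACTION;
```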

See the problem, working from the bottom, up. That is why proper education (eg. our 3-day course) is top-down: name all the problems; then demonstrate the solution to all. Here we have worked bottom-up (no problem, that is the result of answering questions), we have some understanding of the method, which may give false confidence, and we still have not even identified all the holes; what the method does. It is incomplete but you all are ignorant that it is incomplete.

Thus, in addition to
a. clarifying the purpose of your code and progressing it,
b. please continue your digging into /what could go wrong/ in a Transaction Processing context.
__ Two holes, well-known to us of the Transaction Processing fraternity since 1965, remain.

__ And of course the solution.

> > It is not “criticism” [of MV-non-CC]. It is a tutorial to drag
> > academics and developers out of the MV-non-CC/PusGres mindset, into
> > the Transaction Processing that we have had since the 1960’s
> > (pre-Relational) and the 1980’s (Relational platforms).
>
> First, the "inconvenient truth"—to put it as in some recent paper
> [0]—i.e., the issues you and Daniel mention about MVCC, is well known
> also in academia.

At what date ?
If it was post-1965, it is decades too late, and proves my position.

> issues

No, just one issue.
The other issues all remain unknown to academics.

> That is not a reason to abandon the technique, rather
> a push towards further research to improve it or to find contexts in
> which it might provide some advantage—which may not be the conventional
> disk-based storage model which you seem to assume.

I love it.
Ingres never worked but Oracle will
Oracle never worked but PonGres will
Pongres.x never worked but PonGres.x+1 will
Pongres.x+1 never worked but PonGres.x+2 will
Pongres.x+2 never worked but PonGres.x+3 will
Pongres.x+99 never worked but PonGres.x+100 will
And now, ta da, it might work on the fourth moon of Jupiter, where they have no persistent storage.

You can’t make this stuff up.

> Second, the only point I wanted to make when we first started talking
> about MVCC is that it is possible to obtain serializable schedules with
> (an extension of) that protocol, which you did (do) not want to accept.

It is not that I do not accept it, it is that it is totally irrelevant. I am saying it may well be “true for you” in the mathematical sense, and there may well be 30 good papers on it, but it is total bunkum, because the problem is not a problem that is generic to MV-non-CC systems or to OLTP, it is specific to the MV-non-CC flavour of PoopGres.

1. We do not have “serialisation” problems, therefore we do not need a “serialisation schedule” to fix the “serialisation” problems that we do not have.

2. Sure, all Disneyland fantasy groupies, defined as such because they have MV-non-CC, have *A* problem with resolving their morass of the multiple versions across the entire scope of data storage, at COMMIT time (as opposed to “serialisation” problems). That is, at the point where fantasy has to be reconciled with reality, there is a problem. All MV-non-CC program suites *except PoopGross* have solved the problem without reference to “serialisation” problems, decades before academics created their “serialisation” problems.

3. Now for PoopGres, which is driven by academics. Instead of understanding that IL[3 SERIALIZABLE] means Transactions *appear to be* serialised, which means internal non-server methods of resolution, which [2] have, the idiots take it that Transactions must be serialised. Stupidity on top of fantasy. Then they are so involved in their own “superiority”, which is the hallmark of their total ignorance of reality; of the requirement for internal non-server resolution, that they come up with “serialisation schedules”, and even a “taxonomy of serialisation problems”. All of which make the fantasy more “real”, and all of which have no bearing on reality. But it does produce the usual raft of papers, and citations, and work for the idle.

It just provides evidence for what I have stated severally, that academics are clueless about (a) reality, (b) Transaction Processing, (c) the problems to be addressed in Transaction Processing, and (d) instead have a contrived notion of [a][b][c], and (e) have wonderful Alice-in-Wonderland notions about their notions, fantasy upon fantasy. Of course, they need that to confirm their fantasy world, because it is fragile, and needs constant validation.

Meanwhile, back at the farm [1], that actually produces something relevant to reality, we do not have [2] or [3], or the “solution” for [3].

It is like you are professing that drug xyz cures syphilis, and marketing it to us, who do not go to brothels, and thus cannot contract it, and thus have no need for it.

> Then, if you want to argue that in the wider "ACID/OLTP context" that
> does not matter because the performance sucks, long-running transactions
> work with stale data, perfectly reasonable schedules can never happen
> under MVCC, isolation is excessive, etc., then we have little to argue
> about yet because I agree with you and Daniel on all those points.

Ok. Welcome to the club, I hope you enjoy your stay.

In order to gain experience with a real SQL platform, why don’t you install MS/SQL or Sybase. Both provide free versions. Your students can walk into a job due to having experience with actual SQL platforms, instead of ancient anti-experience on fantasy program suites. Why maintain unscientific loyalty to this known-to-be-broken, and known to be non-SQL, and known-to-be pathetically lacking 1980’s Platform features, let alone modern platform features, for decades.

> [0] Rethink the Scan in MVCC Databases, SIGMMOD'21

Thanks. I don’t read anything Chinese. They are impressive only to academia, who will not listen to the truth from their own people for fifty years, but somehow manage to hear it from the Chinese fifty years later. I do not suffer from such prejudices.

They are at war with Australia and Australians. Aren’t they at war with Italy ? Didn’t they flood Northern Italy with their CCP virus last year, and make you kiss Chinese in the street ? It is “not harmful”, remember.

Eg. there is no such thing as a “MVCC Database”, it is an anti-method in the MV-non-CC suite of programs. With a false opening statement like that, I need read no further.

Cheers
Derek

Derek Ignatius Asirvadem

Jul 4, 2021, 10:05:24 AM
> On Sunday, 4 July 2021 at 20:08:34 UTC+10, Nicola wrote:
> > On 2021-07-03, Derek Ignatius Asirvadem wrote:
> >
> >> -- From https://www.postgresql.org/docs/current/xproc.html:
> >> -- A procedure can commit or roll back transactions during its execution (then
> >> -- automatically beginning a new transaction),
> >
> > So that is not a Transaction. That states that the freaky thing is
> > always in a "transaction" mode of its own definition. This tells me
> > that nothing has changed since V10. Functions (oopsey, the now
> > sort-of stored procedure thingees) are "twunsackshunal". START
> > TRANSACTION still does Fanny Adams.
>
> Can you elaborate on that? If I do:

Sure.
Before I do ...

> start transaction;
> -- Some stuff
> commit;
>
> select ...; -- (1)
>
> insert ...; -- (2)
>
> start transaction;
> -- Some stuff
> commit;
>
> don't (1) and (2) constitute each a transaction?

What definition of "transaction" are you using ?

Cheers
Derek

Nicola

Jul 4, 2021, 12:02:33 PM
A logical unit of processing that is executed atomically (either all the
operations are carried out till the end or the database is not affected
in any way), does not violate the integrity constraints, does not
interfere with other transactions (in a way that produces results that
are incompatible with any serial execution of the same transactions),
and whose results are guaranteed to be persistent, unless the
transaction fails for some reason.

Syntactically, anything between "start transaction" and "commit". Single
commands are run as if enclosed between "start transaction"/"commit".
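A minimal sketch of that last point, under autocommit (t is a hypothetical table; syntax per PostgreSQL):

```sql
-- Sketch: under autocommit, a single statement behaves as its own
-- atomic transaction, i.e. the two forms below are equivalent.
INSERT INTO t (x) VALUES (1);    -- implicit: committed on success, rolled back on error

START TRANSACTION;               -- explicit, same net effect
INSERT INTO t (x) VALUES (1);
COMMIT;
```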

If I understand correctly, by "transaction" you mean what is executed by
the whole stored procedure, which according to your sketched template
consists of several "transactions" in the sense above.

Nicola

Derek Ignatius Asirvadem

Jul 4, 2021, 6:01:02 PM
> On Monday, 5 July 2021 at 02:02:33 UTC+10, Nicola wrote:
All this time (two years) that we have been discussing these things, the context has been:
__ database, specifically one that complies with Codd's /Relational Model/;
__ SQL, specifically the Standard
__ ACID Transactions, which were available in the very first SQL Platforms, and identified as a requirement in the very first publication of SQL (because ACID Transactions existed in pre-Relational DBMS Platforms). We have specifically discussed ACID, which is a particular /implementation/ of Transaction, that is required for SQL compliance.

Note that the first flavour of SQL that was available to the public was, and still is, "Transact-SQL".

Therefore I am at a total loss as to how you come up with that gobbledegook, oink oink oink, as a "definition" of "transaction", eg. you do not reference ACID.

Words have meaning, they are used to communicate effectively. At least for people who have a language (that excludes the Chinese (all sub-races); all tribal people; etc). I thought you were Italian, which is derived from Latin, a beloved language of mine, and from the same root (Indo-European group of languages) as that which we are using, English.

Definitions of words (the meaning) do not change. (If it changes it is not a definition.)

If you attempt to change a definition, that has been established since 1965, you need a kind of help that I am not qualified to provide.

Please identify which definition (established; within the context we are in; not personal) of "transaction" you are using. And please stick to it (do not switch to another definition). So that our conversation can progress.

(If you wish to discuss the insanity that non-SQLs and "emerging non-database systems" have for "transactions", you really should open a separate thread, and declare the context. In this thread, it is ANSI/ISO/IEEE SQL, and ACID only.)

Yes, I am quite aware that your pretend-SQL has no Transactions. Yes, I understand that the academics (this time including you) seek to confuse the definition of various things in SQL, such that their filth can be twisted into being perceived as the SQL things. Sorry, I will not participate in that. Further, I thought you wished to leave that asylum, that you were crossing the chasm, and reaching for the real world; reality; definitions that do not change; implementation. No, you can't go back to the asylum. No, you can't hold the asylum mindset in the real world.

> If I understand correctly, by "transaction" you mean what is executed by
> the whole stored procedure, which according to your sketched template
> consists of several "transactions" in the sense above.

I did not say anything, I just gave SQL Verbs. I did not define or redefine anything re "transaction", the notion of harbouring private definitions in a science, that has definitions, is too freaky for me. Please do not put words in my mouth. If you interpret something I wrote, please be responsible and own that interpretation.

I will not respond to that para, until you have indicated what definition for "transaction" you are using.

Cheers
Derek

Derek Ignatius Asirvadem

Jul 4, 2021, 6:23:54 PM
> On Monday, 5 July 2021 at 08:01:02 UTC+10, Derek Ignatius Asirvadem wrote:

> the context has been:
> __ database, specifically one that complies with Codd's /Relational Model/;
> __ SQL, specifically the Standard
> __ ACID Transactions,

> In this thread, it is ANSI/ISO/IEEE SQL, and ACID only.

Further, note that Daniel started this thread, and the context [further to the above] is SQL platforms using a Lock Manager (not MVCC), and how to construct Transactions therein, properly.

I am happy to entertain discussion about non-SQL fraudulently proposed as "SQL" to a degree, and non-Transaction in program suites that do not have Transactions, to a degree. But I will not stand for anyone imposing novel definitions, because that introduces confusion (which we, at least Dan and I, specifically reject), and it will sabotage the discussion.

Rather than answering this and the previous post in argumentative terms, re the low level issues, you may choose to rollback to the point where we were **NOT** confused, and move forward from that point.

Cheers
Derek

Derek Ignatius Asirvadem

Jul 4, 2021, 6:27:49 PM
> On Sunday, 4 July 2021 at 20:08:34 UTC+10, Nicola wrote:
> > On 2021-07-03, Derek Ignatius Asirvadem wrote:
>
> > And now, security xor "transactional-ness" is a binary choice.
>
> I don't understand what this means.

Whatever is meant by this:
> -- «A SECURITY DEFINER procedure cannot execute transaction control statements
> -- (for example, COMMIT and ROLLBACK […])»

and this:
> 118 security invoker -- “security definer” is not compatible with commit/rollback :(

Since the “development team” is spread across the galaxy (geographic as well as intellectual), in order to make any progress at all, each actual team develops one feature, in isolation from the rest of the codeline, and then anti-integrates it into the codeline, thus typically, in every progressive release of the program suite, the **use** of one feature destroys the use of some other feature. Or two. Common problem in freeware. Non-existent in commercial SQL Platforms.

Add to that, the ever-changing notion of; and implementation of, Transactions; functions; stored procs; security; etc, and you have mickey mouse squared; insanity squared.

Thus “security” implementation breaks the "transactional-ness" implementation, and the "transactional-ness" implementation breaks the “security” implementation, thus it is a binary choice.
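A minimal sketch of the restriction quoted above, assuming a hypothetical table t:

```sql
-- Sketch of the PostgreSQL restriction: the same body is accepted with
-- SECURITY INVOKER, but with SECURITY DEFINER the transaction control
-- statement is disallowed and the call fails at run time.
-- Procedure name and table t are hypothetical.
CREATE PROCEDURE bump() LANGUAGE plpgsql SECURITY DEFINER AS $$
BEGIN
    UPDATE t SET x = x + 1;
    COMMIT;   -- fails at run time: transaction control not permitted here
END;
$$;
```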

Cheers
Derek

Derek Ignatius Asirvadem

Jul 4, 2021, 9:05:19 PM
Nicola

------------------------
-- Checkpoint --
------------------------

This thread has been hijacked (I am not saying that you did it on purpose, but as evidenced, this is the insanity that is academia; endless distractions; redefinition of standard terms; etc). I am resetting it, so that its initial purpose can be rendered, for implementation purposes, by normal people. My apologies for allowing it to get hijacked in the first place.

The title of this thread, started by Dan, for a particular purpose, is:
__ Stored procedure structure in RDBMS using Lock Manager for transaction isolation
It is not complete, we must not get distracted or hijacked or confused.

I accept that you do not submit to the Four Laws of Thought, and thus what you teach and practice is not science, but Modern “science”. I am a rigid scientist. You need to accept that Dan and I submit to the Four Laws, that he is seeking a finite (not never-ending) discussion with a view to forming an implementation Standard for SQL/ACID Transactions in an OLTP environment (reduced or eliminated contention).

__________
Retracted.

Note, all your posts in this thread, following this:
>>>> On Saturday, 3 July 2021 at 06:55:38 UTC+10, Nicola wrote:
have nothing to do with this thread, and should have been posted under the relevant thread:
__ MVCC, Advantages & Disadvantages
or a perhaps new thread, such as:
__ How can I implement OLTP/SQL/ACID in my non-OLTP, non-SQL, non-ACID, non-Concurrency Controlled, pitiful suite of programs that keep changing.

This thread is for Dan's initial stated purpose only. And anyone who has similar desires. Or who wishes to genuinely understand it for what it is, not one who imposes redefinitions from the asylum.

Dan, in order to preserve sanity, such that we can progress, and close this thread, please ignore such posts.

> > Please identify which definition (established; within the context we are in; not personal) of "transaction" you are using. And please stick to it (do not switch to another definition). So that our conversation can progress.

I retract that request. The definitions that apply for sanity in general, and for this thread in particular, are those that have been established, since 1965:
__ OLTP (Online shared resource)
____ OLTP/ACID Transaction (pre-Relational)
__ SQL (ANSI/ISO/IEEE SQL only)
____ SQL/OLTP (commercial SQL Platforms only)
____ SQL/ACID Transaction (ACID in the Relational context)

Please keep redefinition of standard terms, and any questions about such, in the relevant thread.


> On Saturday, 3 July 2021 at 06:55:38 UTC+10, Nicola wrote:
>
> That's my attempt at porting to PostgreSQL:
>
> https://gist.github.com/nv-uniud/dd0008c8684d9ff50fbc814466d09ff9

Noting that which I stated previously, if and when you propose a purpose for that, I will respond.

Now I state further, that submission is irrelevant to this thread.

If and when you write code in your pretend-SQL non-Transaction, non-Concurrency Controlled, suite of programs, that is a genuine attempt to implement whatever portions of a Transaction that you can implement in it, it might be a relevant submission. And in any case, please post that in the relevant thread:
__ MVCC, Advantages & Disadvantages

This thread is strictly OLTP/SQL/ACID/Transactions using a Lock Manager (not Snapshot Isolation, because Snapshot Isolation cannot provide ACID).

The deeper problem is, the schizophrenic notion that the database is not a shared resource (sorry, you are not the only user; it is not single-user); the database is not online (sorry, it is online, it keeps changing); and thus you can grab and hold a version (sorry, it is stale from the moment you grab it). Yes, of course, all that should be ventilated, in order for you to join the rest of us in the real world, but not here, in the relevant thread, please.

> > Nicola, note how that Batch Xact fits here. It needs to lock only the
> > highest row in the hierarchy, and that will prevent any other OLTP
> > Xact from getting into the tree.
>
> I have to review our previous discussion. I'll come back to you if
> I need further clarifications.

Heeded. Awaited. Please choose carefully, whether such discussion should be here in this thread, or in the subject thread:
__ No CASCADE in Commercial SQL & Benchmark Considerations

---------------------------------------------
-- To Progress This Thread --
---------------------------------------------

> On Sunday, 4 July 2021 at 20:41:52 UTC+10, Derek Ignatius Asirvadem wrote:
>
> Thus, in addition to
> a. clarifying the purpose of your code and progressing it,
> b. please continue your digging into /what could go wrong/ in a Transaction Processing context.
> __ Two holes, well-known to us of the Transaction Processing fraternity since 1965, remain.
>
> __ And of course [c] the solution.

Now, for this thread, [a] is cancelled. Please place such discussions in the relevant thread.

Please keep [c] in mind, as the goal, and proceed with [b] in order to progress in that direction.

I repeat, the context for this thread, as per initial declaration, is:
__ OLTP (shared online database, upon which Transactions are processed)
____ implies a Lock Manager to (a) determine, and (b) resolve, Concurrency issues
____ SQL (standard-compliant, which means commercial only, and excludes Oracle)
______ ACID Transactions

Cheers
Derek

Derek Ignatius Asirvadem

Jul 5, 2021, 12:10:18 AM
> On Monday, 5 July 2021 at 11:05:19 UTC+10, Derek Ignatius Asirvadem wrote:
>
> ---------------------------------------------
> -- To Progress This Thread --
> ---------------------------------------------
> > On Sunday, 4 July 2021 at 20:41:52 UTC+10, Derek Ignatius Asirvadem wrote:
> >
> > Thus, in addition to
> > a. clarifying the purpose of your code and progressing it,
> > b. please continue your digging into /what could go wrong/ in a Transaction Processing context.
> > __ Two holes, well-known to us of the Transaction Processing fraternity since 1965, remain.
> >
> > __ And of course [c] the solution.
>
> Now, for this thread, [a] is cancelled. Please place such discussions in the relevant thread.
>
> Please keep [c] in mind, as the goal, and proceed with [b] in order to progress in that direction.

These issues relate to Transactions, as we have had since 1965 (pre-Relational DBMS) and since 1984 (Relational DBMS), and which concepts and definitions cannot change. That was established entirely by the DBMS vendors, without any help from academia (with the exception of Codd in the post-Relational context).

The problem is, of course, the filth of academia imposing itself on reality, fifty years after these things were established. And the typical disgusting dishonesty, such as redefinition of established terms, usually without even the courtesy of declaring it. While insisting that they should not be concerned with implementation issues, they have contradicted themselves and implemented a suite of programs for Anti-OLTP; Anti-SQL; Anti-ACID Transactions; and No Concurrency Control, fraudulently labelled "MVCC", using 10,000 enslaved young minds located across the planet.

These issues have been discussed here on c.d.t, with Nicola, the one academic who is attempting to cross the great chasm between their steadfast isolation from reality, and reality. For two years. Although I welcome it, and provide detailed answers, it has moved in fits and starts, and while progress is made, no clear directives are obtained.

In contrast, Dan has started this thread, which has a clear and finite goal [c], as per the title.

In order to:
__ clear up the confusion that has ensued,
__ define the whole context, in a consolidated manner
__ ie, identify all elements, and place them in the proper context,
__ maintain Sanity in the face of encroaching insanity, such that we can progress, hopefully to closure,
I have produced a document (cut-paste what I can, from lecture slides) which you may find helpful.

It shows:
__ the elements that we have discussed and closed,
____ which are fully articulated in Dan's GitHub thread
____ let's call that Chapter One
__ as well as the two elements [b] that need to be identified (redacted, but shown in proper position)
____ meaning that Chapter One is not complete
__ such that I can provide the Solution for the entire set of issues [c]; for the entire Chapter One
____ Let's call [c] Chapter Two

__ https://www.softwaregems.com.au/Documents/Article/Database/Transaction/Transaction%20Sanity.pdf

If anything is not crystal clear, please ask a question.

As you may notice, it covers our Chapter One, as progressed thus far, it is waiting for the completion, and then Chapter Two.

Cheers
Derek

Derek Ignatius Asirvadem

Jul 5, 2021, 7:13:16 AM
Nicola

Now that I have dismissed the dishonesty of redefined terms, and restated the boundaries for this thread:
__ OLTP (the outer-most context, and beyond what can be expected from SQL or ACID)
____ which had and has ACID Transactions
____ which is required for any kind of Online Transaction Processing in an Online Database
__ SQL (ANSI/ISO/IEEE SQL, not the freeware filth that is presented as “SQL”)
____ which has ACID Transactions, and thus specific requirements for compliance
____ and therefore SQL syntax for it

... I can answer your question, otherwise the answer would have been laborious.


> On Sunday, 4 July 2021 at 20:08:34 UTC+10, Nicola wrote:
> > On 2021-07-03, Derek Ignatius Asirvadem wrote:
> >
> >> -- From https://www.postgresql.org/docs/current/xproc.html:
> >> -- A procedure can commit or roll back transactions during its execution (then
> >> -- automatically beginning a new transaction),
> >
> > So that is not a Transaction. That states that the freaky thing is
> > always in a "transaction" mode of its own definition. This tells me
> > that nothing has changed since V10. Functions (oopsey, the now
> > sort-of stored procedure thingees) are "twunsackshunal". START
> > TRANSACTION still does Fanny Adams.
>
> Can you elaborate on that?

PoopDiPoopGres totally, utterly, abysmally, does not comply with SQL/ACID. (We know from previous discussions that it is not SQL compliant in any way shape or form, we know from the Syntax that it is not SQL, now we know that it is not ACID compliant in any way; shape; or form.)

You guys have the most ridiculous notions of /Transaction/, and of /ACID/.

“automatically beginning a new transaction” means that Transactions are *NOT* started by START TRAN and ended by COMMIT/ROLLBACK TRAN, as required for SQL compliance.

“automatically beginning a new transaction” means it has some sort of private “transaction” mode, that it is always “in”, contrary to the SQL requirement.

Therefore, START TRAN, which is required to start a Transaction, and switch from whatever Isolation Level the connection is in, to IL[3 SERIALIZABLE], as required for SQL compliance, does not happen.

Further, it has no Concurrency Control (the [CC] in “MVCC” is a bald-faced lie). Therefore it has no basis upon which to suggest that it can, let alone does, provide TRANSACTION ISOLATION of any kind. Hint: every user is “isolated” simply because they hold a stale, offline version of the online data row. Note the difference between ISOLATED and “isolated” in this para.

Therefore:
> > START TRANSACTION, which has SQL compliance requirements, still does sweet Fanny Adams.

What is worse, all the academics, and all the developers on this hysterical MV-non-CC suite of programs that change with every version, are totally clueless re OLTP requirements; Transactions; SQL; and SQL/ACID Transactions. But they have hundreds of pages of academic “literature” that tells them that they do, and that their hysterical ever-changing notions are “better”.

Again, in the context of this thread ...

> If I do:
>
> start transaction; [T0]
> -- Some stuff
> commit;
>
> select ...; -- (1)
>
> insert ...; -- (2)
>
> start transaction; [T3]
> -- Some stuff
> commit;
>
> don't (1) and (2) constitute each a transaction?

No.

This is more about simple Logic, than about ACID Transaction definitions.

This is the same insanity as in the other thread, where you proposed that “business transactions” are made up of multiple “database transactions”. No, there is only one type of Transaction in a SQL/ACID Platform. The problem is the same: you break the Atomicity (the [A] in ACID), and worse, you do not even realise it.

Here, you additionally fail to understand the Whole vs the Parts, which is also a common error that you make. You think that a Part is equal to (has the power and characteristics of) the Whole. You break the Atomicity that I have posted about many times.

If [T0] and [T3] are (a) Transactions, and (b) Atomic, how in heavens name can single verbs [1][2] also be (a) Transactions, and (b) Atomic ???

How can a single Part be the same as a constructed Whole (that contains Parts) ???

No. [1][2] are single SQL Verbs, they do not “constitute” a Transaction, let alone an SQL/ACID Transaction.

They are your “primitives”, as per the /Transaction/ thread, and yes, they should not be allowed. Easily prevented.

If and when a Transaction is constructed, sure, it contains single SQL verbs, and in that case only, the word /constituent/ can be used, because it is truly constituted. “Constituted” is not something that you can otherwise use as a label or property.
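A minimal sketch in SQL Server syntax may make the distinction concrete (table, column and value names are invented for illustration only):

```sql
-- A single SQL verb. The platform may auto-commit it, but it is a Part,
-- not a constituted Transaction:
INSERT CustomerAddress (CustomerNo, Address)
    VALUES (42, '1 High St');

-- A Transaction, constituted by bracketing its constituent verbs, which
-- must succeed or fail as an Atomic Whole:
BEGIN TRANSACTION;
    UPDATE Account SET Balance = Balance - 100 WHERE AccountNo = 1;
    UPDATE Account SET Balance = Balance + 100 WHERE AccountNo = 2;
COMMIT TRANSACTION;
```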

> On Monday, 5 July 2021 at 02:02:33 UTC+10, Nicola wrote:
>
> Syntactically, anything between "start transaction" and "commit".

Well, if you understand that, then you should understand that [1][2] are *NOT* SQL Transactions. They are *NOT* constituted by bracketing with START TRAN and COMMIT TRAN. You are contradicting yourself violently.

> Single commands are run as if enclosed between "start transaction"/"commit".

I would not put it that way, but sure, that is some implementation, an attempt at writing a bunch of programs that emulate some degree of SQL Transactions. If you focus on the “as if”, then you should understand that they are *NOT* Transactions, because “as if” and /actually/ are mutually exclusive processes.

> If I understand correctly, by "transaction" you mean what is executed by
> the whole stored procedure, which according to your sketched template
> consists of several "transactions" in the sense above.

Definitely not.

I do not have private definitions for anything in this science, the definitions have existed since 1965; 1984 for RDBMS; SQL-89 for SQL Compliance, and remain unchanged.

Can you not tell the difference between the occurrence of /Transaction/ in the type /Transaction Stored Proc/, which differentiates it from other types of stored procs, and the occurrence of /Transaction/ in START TRANSACTION ? In that case, you would not be able to appreciate the Transaction Stored Proc Template, and the value of its constituent Parts.

Are you getting confused with the naming <TransactionName>_tr ? Do you honestly think that because a stored proc is so named, everything inside it is a Transaction, which must mean that you deny the Transaction in it ???

No. The Transaction in the Transaction Stored Proc Template begins at the single START TRAN, and ends at the single COMMIT/ROLLBACK TRAN. The rest of the Transaction Stored Proc Template (ie. outside the START::COMMIT bracket) constitutes the Parts that are required for Online Transaction Processing. The Transaction Stored Proc Template gives the frame; the Form, that is required for ACID Transactions in an OLTP context. Which is beyond SQL/ACID, and must be executed by the app, which calls a method in the Database API, which is a Transaction Stored Proc.

I cannot see how the Transaction Stored Proc Template can be construed to be a Transaction, or even the contained Transaction.

I can offer, that it is the academic, insane /always in some “transaction” mode/ mindset, that screws up your thinking.

Please, in this thread, maintain the established terms and definitions.

Cheers
Derek

Derek Ignatius Asirvadem

Jul 5, 2021, 8:00:03 AM7/5/21
to
Dan, Nicola

This week is super-busy for me, I will be offline.

I have endeavoured to clean things up, so that you can proceed unhindered. I have also provided a doc that illustrates the problems in *Full Context*, hopefully to avoid dealing with fragments out of context. To restate the particular steps required to progress this thread to closure:
__ identify two more Concurrency problems that can't be prevented by SQL or ACID Transactions, that need to be prevented in an OLTP context (items that are well-known since 1965)
__ then, next week, I will provide the one solution that prevents both.

Cheers
Derek

Derek Ignatius Asirvadem

Jul 23, 2021, 8:36:02 AM7/23/21
to
Dan, Nicola

> On Monday, 5 July 2021 at 22:00:03 UTC+10, Derek Ignatius Asirvadem wrote:

I am back, please forgive the delay. It is madness over here. Worst lockdown conditions in two years. Nothing like a plandemic and a fear-mongering media, slaves of Big Pharma.

Two weeks and no activity. Come on, guys, this thing needs interaction. Proposal. Question. What if.

We have left the me, me, me-centred toilet, we have stopped looking in the mirror and repeating "I am a chick magnet". We have entered the big bad world of reality, where conflicts are no big deal because we are social, civilised people, and we have civilised ways of resolving conflicts. No need to stay at home and clutch the obsolete private version of the database.

OLTP (learned some, more to go)
ACID (learned some, more to go)
Transactions (good start)
SQL

We know about Phantoms; Anomalies; Deadlocks. We can dismiss "Serialisation Anomimimilies" as the particularly stupid way that people who think they have invented the wheel collide with the reality of the road.

What else could go wrong ?

What precisely does [D]urable mean ?

Cheers
Derek

Derek Ignatius Asirvadem

Jul 24, 2021, 5:48:47 AM7/24/21
to
Guys and dolls, lays and germs, boys and girls

> On Friday, 23 July 2021 at 22:36:02 UTC+10, Derek Ignatius Asirvadem wrote:

> Two weeks and no activity. Come on, guys, this thing needs interaction. Proposal. Question. What if.
>
> We know about Phantoms; Anomalies; Deadlocks.

We did, from 1965. There are four categories of error that we know about and have solutions for; those are two out of four.

The academics found out about it somewhere around 2002.

> We can dismiss "Serialisation Anomimimilies" as the particularly stupid way that people who think they have invented the wheel collide with the reality of the road.

That is not one of the four categories, it is a unique feature of PissMyselfGross. MySQL and Oracle do not have it. But hey, the academics have written about 100 papers about it, so they think they invented it (in denial of the fact that other MV-non-CC program suites Ingres/Oracle/MySQL had had it for 20 years before they noticed).

> What else could go wrong ?

Two down, two to go.

> What precisely does [D]urable mean ?

>>>>
https://www.ibm.com/docs/en/cics-ts/5.4?topic=processing-acid-properties-transactions

Durability
After a transaction successfully completes, changes to data persist and are not undone, even in the event of a system failure.
<<<<

The OO/ORM/OOP boffins and academics (lazy and vociferously ignorant) think that, oh, the platform or program suite supplies that, we need do nothing. It helps to blow your nose and smell the coffee. Then go outside and socialise with people. Who actually use an online shared database.

It is true that the system failures will be taken care of, by the platform/program suite, to the extent that redundancy; etc has been implemented by the customer. But what about the Transaction itself, is it [D]urable ?


Consider:

Person has a Credit Rating [A] which allows $10,000 overdraft
Person fails to keep agreements re covering their overdraft

Time[ 1 ]
Credit Manager intends to change Credit Rating [D]; $0 overdraft
He is using a GUI, the duration between retrieval & painting the data into the GUI, and the moment he hits the <Save> button is uncontrolled (cannot be controlled). Toilet break; take a phone call; check policy docs; etc.

Time[ 2 ]
Person has a balance of $5,000 DR (overdrawn)
Person phones bank clerk
Checks that he still has Credit Rating [A]
Changes his address
Bank officer is using a GUI, same issues.

Time[ 3 ]
Credit Manager hits <Save>, which COMMITS his changes.
Moves on to next task.

Time[ 4 ]
Bank officer hits <Save>, which COMMITS her changes.

Time[ 5 ]
Person attempts to withdraw $2,000 at the bank
Bank teller checks his account and status.
System states his balance is $5,000 DR (overdrawn), Credit Ratings [A].
Provides $2,000 in cash, new balance $7,000 DR overdrawn.

What the hell happened ? Is that a Durable Transaction ? No way. Not an acceptable system.

This, lays and germs, is known, since 1965, as the
-------------------------
-- Lost Update --
-------------------------
problem. The third of four categories of error, to be handled, in any OLTP system.
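The timeline above can be replayed as two plain UPDATEs (hypothetical table and columns), each writing all attribute columns as painted in the respective GUI:

```sql
-- Time[3]: Credit Manager's <Save>, painted at Time[1]:
-- new rating, but the address as it was at Time[1]
UPDATE Customer
    SET CreditRating = 'D',
        Address      = '1 High St'    -- stale address
    WHERE CustomerNo = 42;
-- COMMITs successfully.

-- Time[4]: bank officer's <Save>, painted at Time[2]:
-- new address, but the rating as it was at Time[2]
UPDATE Customer
    SET CreditRating = 'A',           -- stale rating
        Address      = '2 Low St'
    WHERE CustomerNo = 42;
-- COMMITs successfully; the Credit Manager's change is silently lost.
```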

Note that the server/program-suite cannot prevent this. Not a genuine OLTP server, let alone one hundred programs cobbled together. Not even an ACID-compliant server, let alone a bunch of programs that implement a strange notion of “transaction”.

The academics not only DON’T KNOW about this, or any other OLTP consideration, when I informed them 11 years ago (during my three years of hard labour at the Hey Presto, the Torrid Manifesto gulag), they did not understand it. The established /If I did not invent it, it does not exist/ syndrome that cripples academics. They had no interest in the solution.

>>>>
Exclusions.

1. We are not arguing about the right or wrong way {all SQL in the GUI; Validate & Exec in the GUI, Xact sp in the database; any other method}. (We will have a resolution at the end.) Feel free to declare what it should be. Discuss.

2. Let’s not have an argument about this one item, because it is laborious, and I have done it to death about 30 times. In most cases, for reasons of simplicity, for an UPDATE, either the fragmented SQL code in the GUI xor the atomic SQL code in the Transaction sp, would UPDATE all attribute columns. So whether it is the Credit Manager changing this, or the bank officer changing that, it is writing all the attribute columns that are held in that client GUI.

The idea in the example above is, the bank officer’s changes over-wrote the Credit Manager’s changes.
<<<<

Doc updated. More detail and a Data Model:

____ https://www.softwaregems.com.au/Documents/Article/Database/Transaction/Transaction%20Sanity.pdf

I have not received any comments about the earlier version (two pages). Please feel free.

Cheers
Derek

Daniel Loth

Jul 24, 2021, 6:47:50 AM7/24/21
to
Hi Derek,

Welcome back. I'll reply to things you've written across both of your emails.

---

> What precisely does [D]urable mean ?

It means that the outcome of the transaction has been successfully committed.
So it has been written to disk (and potentially one or more replica / follower nodes), and won't be lost should the system crash (or the power go out).

---

> Two down, two to go.

I know one is lost updates, though I see you've addressed it in your email.

I read the PDF you put together years ago: http://www.softwaregems.com.au/Documents/The%20Third%20Manifesto/Transaction%20TTM%20101122.pdf

Now I'm not entirely sure if it's in the same class of problem as anomalies / deadlocks / lost updates, but since you've spoken about sociable behaviour already I'm going to go out on a limb and say 'contention management'.

With proper contention management leading to higher performance, and poor contention management (e.g.: gratuitously acquiring locks needlessly) leading to low performance.

And I suppose the answer to that is 'optimistic locking', which we've discussed.
Optimistic locking being achieved by the retrieve-validate-execute pattern we've also discussed.

---

Regarding the Transaction Sanity document, I read through it when you first shared it two weeks ago.

I'm already reasonably familiar with some of the items described.

For example, I know the bit about 'Client Application (Multiple)' ties in with your Open Architecture document here: http://www.softwaregems.com.au/Documents/Article/Application%20Architecture/Open%20Architecture.pdf

I'm also reasonably across the notion of fully constraining the data using declarative constraints, and the methods that can be used to achieve this (e.g.: enforcing exclusive basetype-subtype using check constraint, as documented here: http://www.softwaregems.com.au/Documents/Article/Database/Relational%20Model/Subtype.pdf

The notion of 'ResultSet (multi-statement) Integrity' I'm not so sure about.
That's probably the one concept that I haven't encountered elsewhere in your writing. Or at least I haven't encountered it named as such.

---

Cheers,
Dan

Derek Ignatius Asirvadem

Jul 25, 2021, 8:33:22 AM7/25/21
to
> On Saturday, 24 July 2021 at 20:47:50 UTC+10, Daniel Loth wrote:
>
> Welcome back.

Thanks !

> I’ll reply to things you've written across both of your emails.

It appears you have read this post:

__ > On Friday, 23 July 2021 at 22:36:02 UTC+10, Derek Ignatius Asirvadem wrote:

but not this post:

__ > On Saturday, 24 July 2021 at 19:48:47 UTC+10, Derek Ignatius Asirvadem wrote:

Also, it may be that you do not have the updated doc (4 pages), the earlier version was 2 pages. The Lost Update problem is fully defined with an example Data Model and detailed code.

__ https://www.softwaregems.com.au/Documents/Article/Database/Transaction/Transaction%20Sanity.pdf

> ---
> > What precisely does [D]urable mean ?
> It means that the outcome of the transaction has been successfully committed.
> So it has been written to disk (and potentially one or more replica / follower nodes), and won't be lost should the system crash (or the power go out).

The IBM link I gave for reference:
__ https://www.ibm.com/docs/en/cics-ts/5.4?topic=processing-acid-properties-transactions
states:
__”After a Transaction successfully completes, changes to data persist and are not undone, even in the event of a system failure.”

1. There is no mention of COMMIT (although we know that “completes” means successful COMMIT).
2. The declaration is quite different, in that Durable means the Transaction persists, rather than being about system failure, or how that is accomplished.

So did the example Transaction persist, or was it wiped out by a similar Transaction ?

- by your definition of [D]urable, the Transaction is fine, the Lost Update is unknown (happens all the time, and people are ignorant that it happens).

- by the IBM definition, the Transaction failed, it did not persist, it is not [D]urable. The Lost Update is known.

But how is it prevented ?



> > Two down, two to go.
> I know one is lost updates, though I see you've addressed it in your email.

Correct.
Crikey. A bit of context is demanded.

>>>>
Eleven years ago, I had just come out of pure high-end consulting (ignorance of the madness that was happening in the 95% of the market), and entered into helping people outside my ambit. I had not had the pleasure of reading the literature (no need because Cincom/TOTAL and then Sybase/SQL Server [now Sybase ASE] were decades ahead of it). I had the starry-eyed view that Date; Darwen; Fagin; etc, were well-intentioned in their promotion of error as “relational”, and merely ignorant academics who were divorced from the industry due to the declaration that theory should not concern itself with implementation. So I took to TTM to engage and inform them, to clear the ignorance. That is, I too, believed in the marketing that they were the “curators” of the /Relational Model/, after all the SQL Committee were treating them as such, and making stupid changes.

In hindsight, that was my first crusade, I had made an awful lot of money and I wanted to give something back to my profession; I was responsible enough as a scientist, to correct errors; and I went to the source of those errors and addressed them directly.

It took three years of hard labour at the TTM gulag, for me to realise:
- they are stupid academics, caught up in their academic denial of reality, so that they can manufacture a collective subjective “reality”
- they practice, and teach, schizophrenia, not science, which they have no interest in
--- worse, they suppress science, and market schizophrenia. Eg. they argue for years about any particular subject, resolving nothing (no Four Laws of Thought)
- they never had any impact on the implementation platforms, and due to their evidenced capability, they never will
--- (the exception is of course Codd, but he was not one of them, he was typical of the scientists who worked for the DBMS implementation platforms, and progressed them. He had specific implementation goals. The /RM/ is a progression of HDBMS and NDBMS, not at all an entirely new and different thing. Yes, of course it is the first to have a defined theoretical basis; a mathematical definition; it is based on FOPC.)
- they hate Codd. They take every opportunity to deride him and demean his work, the /RM/; 80% of that derision is completely false and dishonest (eg. Straw Man arguments), and 20% is stupidity (eg. expecting the /RM/ to teach database design and how to tie their shoelaces). Even an undergrad knows that a paper defines the proposal, not everything prior that the proposal depends upon.
- and thus that they were intentionally suppressing the Logical /RM/; intentionally articulating Physical 1960’s Record Filing Systems as “relational”, as a “progression of the /RM/“, and fraudulently using RM/T and the 1971 paper as a basis for that.
- at the top of the food chain, it was not the entrenched stupidity and ignorance of academics, which all of them suffer from and protect, but purposeful evil: they are propagating anti-science
- all the textbooks are written by such freaks, all the so-called professors teach such filth without examination, deeming themselves too, as schizophrenic anti-scientists
- to remain divorced from the theory for FIFTY YEARS; to remain ignorant of actual Standards and implementation platforms for FORTY YEARS, does not happen accidentally, or as a consequence of stupidity and ignorance, no, that happens only by design

During that three years, I innocently and diligently addressed each of the several idiocies that TTM is: “deferred constraint checking”; “headings”; “relvars”; etc. Then there was the business where they were going to “define” “transactions”. Oh wait, for FORTY YEARS at that point they divorced themselves from implementation concerns and had nothing to say about implemented SQL platforms that supplied ACID Transactions, but now (then) that they were defining a Torrid Manifestival (manifestation of fantasy) to replace SQL, they were going to define “transactions”. From scratch. Reinventing the wheel. On the pretended basis that it did not yet exist. An imaginary piece of string that connects unidentified verbs together.

So I tried desperately to tell them about considerations in an OLTP context (rather than the isolated tiled room they insisted is “reality”). As always with academics, at first, they expressed interest and engaged, but less than half-way through they just stopped engaging. So nothing was resolved, which is their end game, they need that non-resolution, so that they can circle-jerk and argue amongst themselves without producing a single thing, as they have done for FIFTY YEARS (then FORTY YEARS).

They wanted to drop ACID, per one of the chief sow-sucker's papers, and they were fantasising just like they had dropped some, and it was a bad trip. But they were partying in the sow stall in company, so they had a lot of confidence in their collective fantasy. The only Dates he has had is with a 300 pound sow. Nothing real; nothing concrete or “concrete”; the “definitions” kept changing every time I brought up a point from Reality.

As you can see from the Transaction Grid, I was entertaining their fantasy (Green, “RM TTM/D”) and comparing it against the ACID Transactions in implementations since 1965. The “RM” in the heading is of course false: it was a concession at the time, as I had not yet determined their evil intent.

On that document, there were another six pages to go, to achieve closure (either a fixed definition of TTM/“transactions”, xor them conceding that ACID Transactions were established since 1965, and again in the 1980’s on SQL platforms, and it was unshakable).
<<<<

Please ignore that doc. It had a specific purpose, a drawn-out discourse and argumentation re TTM/“transactions”.

Please be assured, I am giving you a complete discourse in this thread, as requested, without the interference of the TTM or academic insanity. This thread and the doc I have given here, is far more detailed, and will be complete at the end.

But the cat is out of the bag. The problem now is, you have knowledge of the label of the problem (third of four categories), and even the label for the solution (for all four), but you do not yet understand the problem properly, which is an essential requirement for coding for OLTP without errors (just coding per a Template does not work, understanding it is essential). I am giving the course I give to customers, minus the slides and a bit of proprietary stuff. The course has not changed since I first created it in 1993, it is stable, it has a sequence, and the sequence has a purpose: incremental learning and thus full comprehension.

> Now I'm not entirely sure if it's in the same class of problem as anomalies / deadlocks / lost updates, but since you've spoken about sociable behaviour already I'm going to go out on a limb and say 'contention management'.
>
> With proper contention management leading to higher performance, and poor contention management (e.g.: gratuitously acquiring locks needlessly) leading to low performance.

Yes, correct.

Not only avoiding the acquisition and holding of more locks than necessary, but also (a) reducing the locks held, (b) reducing the lock duration, (c) holding locks at the highest level, thus holding fewer locks, and (d) preventing deadlocks with an Access Sequence.
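A minimal sketch of (d), the Access Sequence (table and column names invented): if every Transaction that touches both tables locks them in the same fixed order, two such Transactions cannot deadlock each other.

```sql
-- By convention, every Transaction locks Account before Customer.
-- A deadlock requires two Transactions to acquire the same locks in
-- opposing order; a fixed Access Sequence makes that impossible.
BEGIN TRANSACTION;
    -- always Account first ...
    UPDATE Account  SET Balance = Balance - 100 WHERE AccountNo  = 1;
    -- ... then Customer; no Transaction ever locks Customer before Account
    UPDATE Customer SET LastActivity = GETDATE() WHERE CustomerNo = 42;
COMMIT TRANSACTION;
```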

----

I see that the four categories of OLTP considerations/problems, well-known since 1965, and solved in 1965 for mainframes; 1984 for SQL platforms, are not entirely clear. I will clarify that in the next version of the Transaction Sanity doc. To identify them here, and mark the progress of this thread:

1. Phantoms & Anomalies
This category is well-known because it was documented in the IBM/CICS/TCP; Britton-Lee; Sybase; MS; DB2 manuals. So I have not expanded it here. Even the academics got to know about it about two decades afterward, it appears in their “literature”.

2. Lost Update
Defined somewhat in my response to TTM. Defined in full in the Transaction Sanity doc. We need to continue the interaction until it is fully understood.

Question. What is the *BASIS* for the user having confidence that their Transaction will (should) work ?
Stated otherwise, why did the Lost Update (page 4) happen ?
How do the naïve developers who use an OLTP/SQL platform (especially if they have only MV-non-CC+application locking mindset) prevent Lost Updates ?

3. <Not Identified>
[B.2] in the Transaction Sanity doc.
Interaction and discourse to proceed when [2] is complete.

4. DeadLock
Explained, covered, solution given.

----

> And I suppose the answer to that is 'optimistic locking', which we've discussed.
> Optimistic locking being achieved by the retrieve-validate-execute pattern we've also discussed.

Yes, we have discussed it some. No, it is not complete. The Template is about 75% complete.

> Regarding the Transaction Sanity document, I read through it when you first shared it two weeks ago.

Great.
Note, it was 2 pages, it is now 4 pages, please download and read.

> I'm already reasonably familiar with some of the items described.
>
> For example, I know the bit about 'Client Application (Multiple)' ties in with your Open Architecture document here: http://www.softwaregems.com.au/Documents/Article/Application%20Architecture/Open%20Architecture.pdf

Excellent.

> I'm also reasonably across the notion of fully constraining the data using declarative constraints, and the methods that can be used to achieve this (e.g.: enforcing exclusive basetype-subtype using check constraint, as documented here: http://www.softwaregems.com.au/Documents/Article/Database/Relational%20Model/Subtype.pdf

Excellent.

That means you understand that the notion of logic (“business rules”; “database rules”; “transaction logic”; etc) that is deployed in the client or middle tier is hysterically stupid (but heavily prescribed by the academics, who are, as evidenced, clueless about SQL). The database is completely self-contained; entirely independent; a single recovery unit.

> The notion of 'ResultSet (multi-statement) Integrity' I'm not so sure about.
> That's probably the one concept that I haven't encountered elsewhere in your writing. Or at least I haven't encountered it named as such.

Phantoms & Anomalies
We don’t want rows that appeared at the top of a ResultSet to have disappeared or changed by the time the server reads the rows at the end of the ResultSet.

__ Statement level integrity
I think you understand that.

__ ResultSet level integrity
We do not want disappeared or changed rows, especially if the row at the end is a SUM() of rows prior, such as in a typical bank statement or invoice. Eg. the bank statement or invoice is achieved by
____ SELECT detail rows for account and period
____ UNION
____ SELECT SUM() for account and period
In a report, as distinct from the bank statement for external publication, one may list all accounts.
Thus integrity of rows is required across Statements that form a single ResultSet.
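The bank-statement shape described might be sketched as follows (table, column and parameter names invented for illustration):

```sql
-- Two Statements forming a single ResultSet; rows read by the second
-- must not have changed since the first was read.
DECLARE @AccountNo   INT  = 42,
        @PeriodStart DATE = '2021-07-01',
        @PeriodEnd   DATE = '2021-07-31';

SELECT  Date, Description, Amount
    FROM AccountTransaction
    WHERE AccountNo = @AccountNo
      AND Date BETWEEN @PeriodStart AND @PeriodEnd
UNION ALL
SELECT  @PeriodEnd, 'Closing total', SUM (Amount)
    FROM AccountTransaction
    WHERE AccountNo = @AccountNo
      AND Date BETWEEN @PeriodStart AND @PeriodEnd;
```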

The *SQL* prescribed method is:
__ ISOLATION LEVEL 2 REPEATABLE READ

But that is not reasonable unless a Transaction is opened, and we don’t want to go into full transaction mode for a SELECT, so the way each SQL Platform implements that is slightly different, but the contention reduction is vastly different. Without having to think too much, most people use:
____ BEGIN TRAN -- now we are holding locks until COMMIT
____ SET TRANSACTION ISOLATION LEVEL 2 -- default is 3
____ SELECT ...

Sybase (and MS, at least in the older versions that I have used) allows *SELECT* AT ISOLATION REPEATABLE READ, thus even in a transaction, one can reduce the Isolation Level on each SELECT. For Relational tables (genuine Clustered Index),
Sybase & MS execute Level 2 at Level 3, regardless of a Transaction being opened or not.
Basically, within a Transaction, one can request a lower degree of locks; outside a Transaction, one can request a higher level.

The freeware and Oracle do strange things, as they are neither SQL nor ACID compliant.

__ ResultSet level integrity/SG
Most of our customers are large Aussie banks (require the bank statement report of all accounts). Having internal knowledge and proprietary methods, we provide ResultSet integrity at READ COMMITTED, meaning less locking, and a lower degree of locks, outside a Transaction.

Transaction Sanity doc updated (minor clarifications made as per above).

Cheers,
Derek

Derek Ignatius Asirvadem

Jul 25, 2021, 9:08:26 AM7/25/21
to
> On Sunday, 25 July 2021 at 22:33:22 UTC+10, Derek Ignatius Asirvadem wrote:
> > On Saturday, 24 July 2021 at 20:47:50 UTC+10, Daniel Loth wrote:
> >
> > I read the PDF you put together years ago: http://www.softwaregems.com.au/Documents/The%20Third%20Manifesto/Transaction%20TTM%20101122.pdf
>
> Crikey. A bit of context is demanded.
>
> >>>>
> In hindsight, that was my first crusade, I had made an awful lot of money and I wanted to give something back to my profession; I was responsible enough as a scientist, to correct errors; and I went to the source of those errors and addressed them directly.
...
> <<<<

Put another way, I gave the freaks the full definition of Lost Update, and the full definition of the solution, Optimistic Locking, eleven years ago ... but the drooling idiots still have not got it. Codd and I suffer the same thing from them: the Not Invented Here Syndrome. We are not pure academics, masturbating over our fantasies that have no connection to reality. They ignore and suppress Reality; the genuine SQL platform implementations; Transactions; ACID; anything they did not invent themselves, even if it is fifty years after it has been implemented in Reality.

Cheers
Derek

Daniel Loth

Jul 26, 2021, 8:10:33 AM7/26/21
to
Hi Derek,

> So did the example Transaction persist, or was it wiped out by a similar Transaction ?
> - by your definition of [D]urable, the Transaction is fine, the Lost Update is unknown (happens all the time, and people are ignorant that it happens).
> - by the IBM definition, the Transaction failed, it did not persist, it is not [D]urable. The Lost Update is known.

So you're talking about notions of physical durability (i.e., my data is written to non-volatile storage and will endure) -- versus -- logical durability (i.e., my data is written, and other users of this system who might be operating on the same data will not clobber it without being informed that my transaction occurred in the period of time between them viewing the data and attempting to act upon it).

> But how is it prevented ?

In Microsoft SQL Server the most robust way is using the ROWVERSION data type, which is just a very large monotonic integer that is automatically updated when modifying a record in a table with a column of type ROWVERSION. It's essentially timestamp based concurrency control.

Of course, you needn't use ROWVERSION specifically. You could conceivably use some other very large value (a GUID / UUID for example) so long as you a) update it on all occasions where you're modifying the data it protects and b) always check it before proceeding with a data modification.
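
A minimal sketch of that read, check-and-update cycle with a ROWVERSION column (the table, column, and variable names here are hypothetical, for illustration only):

```sql
-- Hypothetical table: Account( AccountId, Balance, RowVer ROWVERSION ).
-- Variables are assumed to be declared in an enclosing procedure.

-- 1. Read the data and capture the version that came with it.
SELECT  @Balance        = Balance,
        @OriginalRowVer = RowVer
    FROM Account
    WHERE AccountId = @AccountId;

-- ... the user views and edits the data, outside any transaction ...

-- 2. Update only if the version is unchanged; ROWVERSION advances
--    automatically on any modification, so a mismatch means another
--    user changed the row in the interim.
UPDATE  Account
    SET Balance = @NewBalance
    WHERE AccountId = @AccountId
    AND   RowVer    = @OriginalRowVer;

IF @@ROWCOUNT = 0
    RAISERROR( 'Row was changed by another user; update rejected', 16, 1 );
```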

---

Thanks for the background re: the TTM discussion. I came across it while looking at other stuff, and it was actually quite eye-opening at the time. In fact that's how I came across the notion of timestamp based concurrency control.

---

> Question. What is the *BASIS* for the user having confidence that their Transaction will (should) work ?

In a well-designed system? The knowledge that the system won't allow them, or anyone else, to unknowingly overwrite someone else's changes.

In an abstract sense? Maybe they've had some degree of exposure to notions such as 'Durability' and, in laymen's terms, perceived it as something that will safeguard their modifications from data loss.

---

> How do the naïve developers who use an OLTP/SQL platform (especially if they have only MV-non-CC+application locking mindset) prevent Lost Updates ?

The naive approach would be to acquire and hold locks for the duration of user activity right? That is, acquire and hold some kind of lock at the time a user opens a screen to view some data, and then release that lock when the user finishes with the screen.

Though in my personal experience, I haven't seen this.

I have instead seen:
1. No attempted lost update prevention at all. Last update wins.
2. Timestamp based concurrency control, but weakly implemented. For example, the notion exists but might not be used appropriately in all places in application code. Or alternatively, the breadth of data protected by a particular timestamp is too coarse or too fine. Or alternatively again, there is confusion about which timestamps protect which data from lost updates.

---

Cheers,
Dan

Derek Ignatius Asirvadem

unread,
Jul 26, 2021, 7:59:40 PM7/26/21
to
> On Monday, 26 July 2021 at 22:10:33 UTC+10, Daniel Loth wrote:
>
> > So did the example Transaction persist, or was it wiped out by a similar Transaction ?
> > - by your definition of [D]urable, the Transaction is fine, the Lost Update is unknown (happens all the time, and people are ignorant that it happens).
> > - by the IBM definition, the Transaction failed, it did not persist, it is not [D]urable. The Lost Update is known.
>
> So you're talking about notions of physical durability (i.e., my data is written to non-volatile storage and will endure) -- versus -- logical durability (i.e., my data is written and other users of this system who might be operating on the same data will not clobber it without being informed that my transaction has occurred in the period of time between them viewing the data and attempting to act upon it).

(I am not adding to, or subtracting from, the IBM definition.)

The distinction you make between logical vs physical muddies the water, so remove it. Think: online database containing a single version of the truth, of any one fact. Yes, that is logical, and we don’t have to worry about the physical (which is site- or server-specific).

You are focused on the notion that persistence is provided by the physical. It is not. If you hold any OO/ORM/OOP connotations re /persist/, release them. Here /persist/ means only the English meaning plus the technical scope.

Second, don’t think in terms of /data/, which is fragments, and tends toward the physical. Think in terms of Atoms, Transactions, which is the Logical. Users do not know about, and couldn’t care less about, the physical.

So, taking only the Logical in your response:
> logical durability (i.e., my data is written and other users of this system who might be operating on the same data will not clobber it without being informed that my transaction has occurred in the period of time between them viewing the data and attempting to act upon it).

Yes, that is the result, working backwards.

We need to understand it forwards. His Transaction completed and it persists ... and therefore he has confidence in the system. XOR his Transaction completed, but it did not persist ... and therefore he has no confidence in the system. Transaction Sanity/p4. Was the Transaction Durable or not ?

> > But how is it prevented ?
> In Microsoft SQL Server the most robust way is using the ROWVERSION data type, which is just a very large monotonic integer that is automatically updated when modifying a record in a table with a column of type ROWVERSION. It's essentially timestamp based concurrency control.

Correct.
But that is not robust, it is a large overhead. A data-level TimeStamp (DATETIME datatype) is the most robust. We have had that since 1965, 1984 for RDBMS, forty years before MS implemented ROWVERSION.

Much like the massively resource-eating HIERARCHYID. We have had hierarchies and full exposition of hierarchies in databases since 1965, 1984 in RDBMS, which is zero overhead. No CTEs; no WITH. Forty years before MS gave us HIERARCHYID; WITH; CTEs.

> Of course, you needn't use ROWVERSION specifically. You could conceivably use some other very large value (a GUID / UUID for example) so long as you a) update it on all occasions where you're modifying the data it protects and b) always check it before proceeding with a data modification.

For the concept, yes.
For the implementation, definitely not, because (a) GUID; UUID; etc has the purpose of establishing uniqueness, which we do not need, (b) it is very high overhead. Just a DATETIME (millisecond resolution is just fine, no need for microseconds, which is useful for actual row version). The column may be named TimeStamp; UpdatedDtm; something meaningful ... certainly not RowVersion.

Considering portability, ROWVERSION is not SQL, TimeStamp[ DATETIME ] is. Nothing to change when porting.
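
A hedged sketch of the same pattern with a plain DATETIME column as described above (table, column, and variable names are hypothetical; the new stamp is set inside the transaction, not supplied by the caller):

```sql
-- Hypothetical table: Account( AccountId, Balance, UpdatedDtm DATETIME ).
-- The caller passes back the UpdatedDtm value it originally read.
BEGIN TRAN;

UPDATE  Account
    SET Balance    = @NewBalance,
        UpdatedDtm = GETDATE()              -- stamp the modification
    WHERE AccountId  = @AccountId
    AND   UpdatedDtm = @OriginalUpdatedDtm; -- optimistic check

IF @@ROWCOUNT = 0
BEGIN
    ROLLBACK TRAN;
    RAISERROR( 'Account changed since it was read; please resubmit', 16, 1 );
END
ELSE
    COMMIT TRAN;
```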

> Thanks for the background re: the TTM discussion. I came across it while looking at other stuff, and it was actually quite eye-opening at the time. In fact that's how I came across the notion of timestamp based concurrency control.

Good to know that someone benefits from my efforts.

> > Question. What is the *BASIS* for the user having confidence that their Transaction will (should) work ?
> In a well-designed system? The knowledge that the system won't allow them, or anyone else, to unknowingly overwrite someone else's changes.
>
> In an abstract sense? Maybe they've had some degree of exposure to notions such as 'Durability' and, in laymen's terms, perceived it as something that will safeguard their modifications from data loss.

It is not a /maybe/. The app & database provider tells them that: “we provide ACID Transactions” (it is articulated in the standard SG contract, which refers to our Trade Practices Act 1974; “fitness for purpose”; etc).

Second, in their user training, the issue will come up; be discussed; be resolved. So it is not just abstract, it is logical and explicit.

The mickey mouse app & db providers; all who follow the academics or MV-non-CC, are clueless, they rely on the ignorance of the definition of [D]urable, that it pertains only to the physical (the system is durable). They do not know that Transactions have to be [D]urable.

You have gotten very close, but not directly answered this:
> > Question. What is the *BASIS* for the user having confidence that their Transaction will (should) work ?
I am provoking a thought process in your mind. Focus on *BASIS*. Here you have to get rid of any MV-non-CC or “MVCC” notions because they are bankrupt. Think in terms of Reality, an online shared database that contains a single version of any single fact.

I am trying to get you to experience an illumination, which is difficult in this medium. At best, you will further recognise that an MV-non-CC or “MVCC” mindset is schizophrenic, that it denies reality, and that the *BASIS* for confidence in a true OLTP system, which is ordinary, is denied.

Thus the *BASIS* for confidence is the principle. The basis for confidence of any retrieved data, in any retrieval, is not the data itself, or its accuracy or its veracity, but the TimeStamp.

This is true even for paper-based systems. Eg. the Lost Update cannot happen in a paper-based system.

>
> ---
> > How do the naïve developers who use an OLTP/SQL platform (especially if they have only MV-non-CC+application locking mindset) prevent Lost Updates ?
> The naive approach would be to acquire and hold locks for the duration of user activity right?

Yes.

> That is, acquire and hold some kind of lock at the time a user opens a screen to view some data,

Yes.

> and then release that lock when the user finishes with the screen.

By “finish”, if you mean the Transaction completes, yes.

That means:
- BEGIN TRAN is done in the client
- locks are held for an uncontrolled duration (coffee; phone calls; toilet breaks; etc)
- the system will be lock-bound
- guaranteed contention
Therefore the app & db totally fails OLTP

Further, it fails [A]tomic, because the string of SQL between BEGIN and COMMIT is not controlled; not a single contiguous code block. It is spread across various client-side objects, in terms of execution, it is spread across the network.
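
By contrast, the structure discussed at the start of this thread keeps BEGIN TRAN and COMMIT in a single contiguous server-side block. A rough sketch, with hypothetical names, following that validate-then-transact shape:

```sql
CREATE PROCEDURE Account_Withdraw_tr
    @AccountId INT,
    @Amount    MONEY
AS
-- Validate block: read-only; shared locks are released per statement.
IF NOT EXISTS ( SELECT 1
        FROM Account
        WHERE AccountId = @AccountId
        AND   Balance  >= @Amount )
BEGIN
    RAISERROR( 'Insufficient funds', 16, 1 );
    RETURN;
END;

-- Transaction block: re-verify the condition while acquiring the
-- write lock, then act; locks are held only until COMMIT/ROLLBACK.
BEGIN TRAN;

UPDATE Account
    SET Balance = Balance - @Amount
    WHERE AccountId = @AccountId
    AND   Balance  >= @Amount;      -- condition re-checked under lock

IF @@ROWCOUNT = 0
BEGIN
    ROLLBACK TRAN;
    RAISERROR( 'Account changed; withdrawal rejected', 16, 1 );
    RETURN;
END;

COMMIT TRAN;
```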

>
> Though in my personal experience, I haven't seen this.

That depends on whether you have worked on any systems that declare “OLTP” or not. Eg. in the “MVCC” mindset, which is actually MV-non-CC, it happens all the time, because they have to *additionally* implement Concurrency Control via manual locking.

More than 80% of the systems we have replaced, the Version One, and half of that was commercial products, have this problem. Generally terrible Transaction implementation, and some degree of naïve locking to overcome [found instances of] failures.

In MySQL, the manual locking is correct, one provides a user-level lock name, such that it is entirely in the app/database, and entirely without the server. Safe.

In PissGress, manual locking engages the Lock Manager which sits on top of the MV-non-CC, guaranteeing interference between server-level locking and app+db level locking. Dangerous and stupid.

> I have instead seen:
> 1. No attempted lost update prevention at all. Last update wins.
> 2. Timestamp based concurrency control, but weakly implemented. For example, the notion exists but might not be used appropriately in all places in application code. Or alternatively, the breadth of data protected by a particular timestamp is too coarse or too fine. Or alternatively again, there is confusion about which timestamps protect which data from lost updates.

Ok. So you have some experience of the methods; and the poor or partial implementations. Typically that happens when someone finds a good template but does not fully understand the problem, or the solution. Which is why I stress that in our discussion.

We have completed [B.1], if you are clear about the *BASIS* above, ...

----

you are ready for [B.2], the fourth and last category of known and preventable errors in an OLTP context. Transaction Sanity/page 5. Note the changes on p2.

__ https://www.softwaregems.com.au/Documents/Article/Database/Transaction/Transaction%20Sanity.pdf

Please discuss.

Cheers
Derek

Daniel Loth

unread,
Jul 27, 2021, 6:59:24 AM7/27/21
to
Hi Derek,

Thanks for the response and continued discussion.

---

>We need to understand it forwards. His Transaction completed and it persists ... and therefore he has confidence in the system. XOR his Transaction completed, but it did not persist ... and therefore he has no confidence in the system. Transaction Sanity/p4.

> Was the Transaction Durable or not ?

No. Neither the User 1 nor the User 2 transactions make use of the timestamp based concurrency control.

---

> Correct.
> But that is not robust, it is a large overhead. A data-level TimeStamp (DATETIME datatype) is the most robust. We have had that since 1965, 1984 for RDBMS, forty years before MS implemented ROWVERSION.

If you're using a clock-based timestamp such as DATETIME, instead of a server-wide monotonic integer (which is what the Microsoft ROWVERSION is), what do you do to handle the possibility of clock skew on the database server?

Do you do anything, or do you just take your chances? I recognise that the situation is extraordinarily rare, and you may typically dismiss it on that basis.

On the other hand, I noticed that the procedure 'Dumb_Withdraw_tr' actually takes @UpdatedDtm as a parameter, which conceivably is another source of clock skew when the caller specifies the time (or even lies about the time).

---

> I am provoking a thought process in your mind. Focus on *BASIS*. Here you have to get rid of any MV-non-CC or “MVCC” notions because they are bankrupt. Think in terms of Reality, an online shared database that contains a single version of any single fact.

So what you're getting at, especially with the mention of paper-based, is: It's inconceivable that you could inadvertently clobber someone else's change to that single piece of paper without being cognisant of that. After all, they've written on the paper and put it back, and then you've picked it up and looked at it. Obviously being a single piece of paper, without duplicates, access to the piece of paper involves queuing up for it and waiting your turn.

---

I'll take a look at that updated PDF now, though I might not respond again until tomorrow.

Cheers,
Dan

Nicola

unread,
Jul 27, 2021, 7:37:57 AM7/27/21
to
Hi all,
I am back, too, and trying to catch up with the discussion, but not
there yet.

Derek, I have read your Transaction Sanity document. When academics
discuss "transactions", they do so in the context of what eventually
becomes a start transaction… commit block in implementations. For
instance, they ask themselves: under what conditions can lost updates
(and other anomalies) be avoided when:

-- Transaction 1
start transaction;
-- Perform some operations, including updates
commit;

-- Transaction 2
start transaction;
-- Perform some operations, including updates
commit;

are executed concurrently? Admittedly, the examples that are usually
found in textbooks are quite misleading: the classical fund transfer is
used because the context is readily understood, but it should be taken
with a grain of salt, and certainly not as a way to suggest how a real
banking transaction should be implemented. It's naive to think that the
"naive solution" in your document is what a (good) teacher would suggest
as a practical way to solve the problem stated in that document. To make
the point clearer (and simplifying things a bit): the transaction theory
that we teach, and which you can find in (good) textbooks, is the theory
you would need to write a transaction manager for a DBMS (i.e., the
theory needed by Sybase developers), rather than the theory needed to
correctly implement the transactions for a banking system or any other
application, which builds on top of the former and is seldom, if ever,
discussed in textbooks. Hence, I am eager to hear what you have to say
about it. Even got SQL Server and Sybase installed to verify your claims
:)

Nicola

Derek Ignatius Asirvadem

unread,
Jul 27, 2021, 5:09:28 PM7/27/21
to
> On Tuesday, 27 July 2021 at 20:59:24 UTC+10, Daniel wrote:

> I noticed that the procedure 'Dumb_Withdraw_tr' actually takes @UpdatedDtm as a parameter

Ouch. Copy-paste error. Please ignore. Doc updated. Please discuss against the corrected doc.

What skew ???

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Jul 27, 2021, 6:03:50 PM7/27/21
to
> On Tuesday, 27 July 2021 at 20:59:24 UTC+10, Daniel wrote:
> Hi Derek,
>
> Thanks for the response and continued discussion.

It is my pleasure.
>
> ---
> >We need to understand it forwards. His Transaction completed and it persists ... and therefore he has confidence in the system. XOR his Transaction completed, but it did not persist ... and therefore he has no confidence in the system. Transaction Sanity/p4.
>
> > Was the Transaction Durable or not ?
> No. Neither the User 1 nor the User 2 transactions make use of the timestamp based concurrency control.

Yes, that is the back end, or developer’s understanding.

What is in the user’s mind, how does he perceive it ?

> ---
> > Correct.
> > But that is not robust, it is a large overhead. A data-level TimeStamp (DATETIME datatype) is the most robust. We have had that since 1965, 1984 for RDBMS, forty years before MS implemented ROWVERSION.
> If you're using a clock-based timestamp such as DATETIME, instead of a server-wide monotonic integer (which is what the Microsoft ROWVERSION is), what do you do to handle the possibility of clock skew on the database server?

What skew ??? Where is this skew ???

> Do you do anything, or do you just take your chances? I recognise that the situation is extraordinarily rare, and you may typically dismiss it on that basis.

I can’t answer the “skew” part directly because I don’t know what you mean. Re allowing a skew (if there was one), definitely not, I would not do that (we cater for extreme high transaction rates; race conditions; etc in high end Stock Trading environments). The Template has not changed since 1993, it required minor tweaks to cater for the new (additional) Data Storage structure for RFS type files when Sybase ASE 12 came out (IIRC 1999).

Second question, why choose a monotonic number that the server has to generate and maintain ?

(
The original Sybase 4.2 Datatype was TIMESTAMP. It is not a monotonic number but similar. It is a metric, in essence it is the number of writes to the database. A database-wide currency number. It can be compared [the purpose] but it cannot be examined. ROWVERSION is the bastard son’s copy of his father’s method. ROWVERSION is a horrible name for it, it is just another magic number, that the server has to maintain, same as a GUID or UUID as used for RecordId.

TIMESTAMP is not deprecated, but it is ancient, no one I know uses it. Not because it does not work, but because it cannot be examined. A column eg. /TimeStamp/ or /UpdatedDtm/ (Datatype DATETIME, millisec resolution, no need for microsec resolution) is commonly used.
)

> ---
> > I am provoking a thought process in your mind. Focus on *BASIS*. Here you have to get rid of any MV-non-CC or “MVCC” notions because they are bankrupt. Think in terms of Reality, an online shared database that contains a single version of any single fact.

> So what you're getting at, especially with the mention of paper-based, is: It's inconceivable that you could inadvertently clobber someone else's change to that single piece of paper without being cognisant of that. After all, they've written on the paper and put it back, and then you've picked it up and looked at it. Obviously being a single piece of paper, without duplicates, access to the piece of paper involves queuing up for it and waiting your turn.

Yes. Excellent parallels drawn. Keep going.
- In the paper system, the person (first-up in the queue) would have to be cognisant.
- think: a library card system, arranged in 200 little drawers for convenience of physical access by many people
- In the automated system, the program has to be cognisant.
- In the paper system, the person (first-up in the queue for a particular card) has an idea of its data currency based on something, so that when he returns (with the intended update), he would be cognisant that the card has changed.
- In the automated system, the program has to do that.

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Jul 27, 2021, 8:49:58 PM7/27/21
to
> On Tuesday, 27 July 2021 at 21:37:57 UTC+10, Nicola wrote:

> I am back, too,

Good to have you back.

> Derek, I have read your Transaction Sanity document. When academics
> discuss "transactions", they do so in the context of what eventually
> becomes a start transaction… commit block in implementations. For
> instance, they ask themselves: under what conditions are lost updates
> (and other anomalies)

I don’t accept that. They don’t even know about Lost Updates. There is nothing in the literature about it.

Further, /By their fruits ye shall know them/. For more than twenty years, the CS grads we hire have no clue about these issues, meaning that they are taught nothing about real world Transactions. Big corporations pay big money to get their developers educated precisely because they are not educated in the “education” system, they are ignorant of CS even though their degree is CS.

> be avoided when:
>
> -- Transaction 1
> start transaction;
> -- Perform some operations, including updates
> commit;
>
> -- Transaction 2
> start transaction;
> -- Perform some operations, including updates
> commit;
>
> are executed concurrently?

There is not enough detail in that to say anything, one way or the other. It is just two generic code segments. Give it more detail and ask a specific question.

Starting from scratch is insanity.

> Admittedly, the examples that are usually
> found in textbooks are quite misleading: the classical fund transfer is
> used because the context is readily understood, but it should be taken
> with a grain of salt, and certainly not as a way to suggest how a real
> banking transaction should be implemented.

I don’t have a problem with that particular example. I have a problem that the textbooks give no further examples, that they do not get into the issues (contention; reduction; concurrency; etc). The textbooks have a consistent single-user mindset, promoting schizophrenia.

> It's naive to think that the
> "naive solution" in your document is what a (good) teacher would suggest
> as a practical way to solve the problem stated in that document.

I did not say that a teacher taught them that, no idea where you got that. I can’t defend what I did not say.

I said (you can take implications of what I said), that naïve developers do that. Precisely because they are clueless re the consequences, precisely because they are NOT taught anything about OLTP (technology in the real world since 1965, on SQL Platforms since 1984). The sequence is:
- they code “transactions” badly, with no template
- they deliver the system with minimal testing
- everything looks fine
- as more and more users start using the system, Transactions disappear; money is given to people who are barred from receiving that money; deadlocks start to happen; etc
- a vague notion that the system had concurrency problems is formed
- the developers scramble to fix the worst few “transactions” and the most obvious; the easiest, thing to do is BEGIN TRAN in the client
- it immediately fixes the lost data; lost “transaction” problems, so they have confidence in that particular band-aid
- but it introduces massive lock contention


> To make
> the point clearer (and simplifying things a bit): the transaction theory
> that we teach, and which you can find in (good) textbooks, is the theory
> you would need to write a transaction manager for a DBMS (i.e., the
> theory needed by Sybase developers),

1. For Sybase Engineers (your “developers”)
You have said some silly things in the ten years that you and I have conversed, but that is definitely the silliest. As per the evidence (not a claim; not an opinion), the academics:
(a) maintain a steadfast denial of the real world; of the industry; of the facilities in commercial SQL Platforms,
(b) theorise about a contrived notion of the real world; a contrived notion of the industry; a contrived notion of the facilities in commercial SQL Platforms, aka FANTASY
(c) as evidenced in hundreds of papers that produce sweet fanny adams, that have zero effect on the real world; on the industry; on commercial SQL Platforms
(d) and they are going to tell Engineers of commercial SQL Platforms, who wrote the THEORY, and who still today write progressions to the theory (most are PhDs)
(e) about the commercial SQL Platforms that were established FORTY YEARS ago, and have FORTY YEARS of maturity, that are secured with scores of PATENTS (as well as proprietary methods)
(f) FORTY YEARS after the fact.

I love it. You can’t make this stuff up. If it was not in writing, no one would believe it.

This is the same cultivated insanity that academics who allege to be in this field have, and have had since 1970. Inventing the wheel from scratch requires the denial that the wheel was invented FIFTY YEARS ago. Theorising about “semantic models” FORTY YEARS after IDEF1X, in hysterical ignorance that the /RM/ is based on FOPC; Predicates; that any suggested model has to be Logic; Mathematics; Semantic.

This is the same hilarious inferiority complex (presenting as “superiority complex”) that academics have. This is the same idiocy that Date; Darwen; Fagin; etc have, trying to say that they can teach anything, ANY THING, about the /RM/, or progress the /RM/.

Remember, Codd was not an academic, he was a scientist who worked for a platform supplier. The academics never accepted him, they have not produced a single progression of the /RM/, but they have produced hundreds of papers promoting 1960’s Record Filing Systems fraudulently labelled as “relational”.

2. Example
Ok, take it another way, provide one single example of a thing that academics have produced that “Sybase Engineers” need.

3. For wannabe ArrDeeBeeEmEss developers
Even that is stupid. Why on earth would anyone theorise about something that has been implemented and proved for FIFTY YEARS ?

>>>>
Here is my take on science, as it has been 350BC to 1911. (Which is under attack since 1911, by Modernism.) It is a body of knowledge (certainty: the Latin word means knowledge not speculation). Of course, one can speculate BASED ON that body of knowledge, and that is HYPOTHESIS, which when confirmed by tests is elevated to THEORY. If and when the theory is proved (proper proof, proper method, not merely a mathematical definition of a theory), it progresses to a truth, and is added to that body of knowledge.

When that theory, which is proved in one or the other vertical gets proved in many verticals, it is elevated to LAW, such as the LAW of Thermodynamics, and it applies to all other sciences. Eg. evolutionism fails the second LAW of Thermodynamics.
<<<<

OLTP is LAW. The proof in the mainframe rendition was fixed in 1965. The proof for the minicomputer rendition and SQL Platforms was fixed in 1984 (Britton-Lee in the late 1970’s). I was benchmarking CINCOM/TOTAL NDBMS against Britton-Lee proprietary DBMS in 1979. (Britton-Lee became Sybase in the SQL rendition.) Nothing has changed.

Anyone theorising about OLTP after 1984 is in schizophrenic denial of the real world. The hallmark; the proudly declared badge, of academics in this field. They openly declare that implementation concerns are divorced from theory. Thus their theory is fantasy, about a problem that has been solved FIFTY YEARS ago.

Oh wait, now that they are pushing PissGress, which is an “implementation”, suddenly they have started theorising about implementations, contradicting their own declaration. But it is still divorced from reality, a contrived fantasy. Instead of finding out what Commercial SQL Platforms have done for FORTY YEARS, they deny its existence, and contrive to invent the wheel, from scratch.

What is demanded, what is NOT taught, is education about the real world, education about OLTP LAW. Instead, you guys teach a MV-non-CC or “MVCC” mindset, and no OLTP at all.

Separate point. If you want to theorise about something that relates to Transaction Processing, that might illuminate the subject, study chip technology, particularly the advances in [software, live objects] THREADS. It is not a coincidence that Sybase ASE uses a single unix process for the server, and multiple software THREADS internally (fully configurable) ... that, wait for it ... handle multiple user THREADS, some of which are competing Transactions.

It is with the same level of stupid, and the same level of schizophrenia, that academics teach ERD for “relational” modelling, or theorise about “semantic models”, FIFTY YEARS after IDEF1X, FORTY YEARS after it was established as a Standard.

Put another way, just look at the stupefying “implementation” known as PoopDePoopGress, that evidences the level of theorising that the academics actually theorise about, that they are aware of.

--------------------------------
WAKE THE FROG UP
---------------------------------------------------------------
IT IS A BACKWARD PIECE OF PIG POOP
------------------------------------------------------------------------------------------------------------------------------------------------------------
IT DOES NOT HAVE EVEN THE BASICS THAT COMMERCIAL RDBMS HAVE HAD, FOR FIFTY YEARS
------------------------------------------------------------------------------------------------------------------------------------------------------------

You can’t make this stuff up. About two hundred programs competing like the asylum dwellers that wrote it, pretending to be a server, hosting multiple stale offline versions of records, with a determination to deny reality of Online; of Transactions; of Processing. That is the evidenced level of your theorising. People who have not lost their brains identify that as contrived fantasy.

And when it doesn’t work according to their fantasy, they redefine the established terms. That crosses the line from insanity into criminal insanity. Pure evil.

> rather than the theory needed to
> correctly implement the transactions for a banking system or any other
> application, which builds on top of the former and is seldom, if ever,
> discussed in textbooks.

Ok, so you have confirmed what I said.

> Hence, I am eager to hear what you have to say
> about it.

Be my guest, ask a specific question. Either follow and progress this thread, or any of the other that you started but have not completed.

As I have stated, I teach science; truth; knowledge, that does not change. I have no opinions on that subject matter. If you are a scientist, please stop demeaning the subject to mere opinion; personal claim. It is science; knowledge; certainty; permanent (truth is permanent).

> Even got SQL Server and Sybase installed to verify your claims

On the one hand, great, cool, knock yourself out.

I won’t suggest that you can use it properly, because you have that evidenced academic backward mindset. I have seen quite a few academics make hopeless databases in Sybase, because they are clueless about the /RM/; SQL; and database design, and then cry that Sybase is broken. This is why I have declared, both on c_d_t and at TTM:

----------------------------------------------------------------------------------------------------------------------
IF THERE IS ANYTHING THAT YOU THINK THE /RM/ OR SQL CANNOT DO,
GIVE IT TO ME, AND I WILL GIVE YOU THE SOLUTION
----------------------------------------------------------------------------------------------------------------------

After about five sickening embarrassments for Darwen, the freaks at TTM stopped asking me.

----

On the other hand, why ? Any declaration that I make is supported by science. If you think something is merely a claim, just ask for confirmation. All the product manuals are online. There is no need to install a platform and generate a database for the purpose of checking a claim or opinion.

But again, back to the one hand, it is great that you have finally started to obtain knowledge about the real world, about what a commercial SQL Platform is. You are breaking through one aspect of academic denial of reality, that is to be commended.

No suggestion that you can learn about the project (Relational database design; contention reduction; OLTP; etc) from product manuals, but in order to use the product properly, I do suggest you download the full set of manuals (PDFs). Re this thread, read the P&T Locking manual.

You may find these docs useful:

____ https://www.softwaregems.com.au/Documents/Sybase%20GEM%20Documents/

Cheers
Derek

Derek Ignatius Asirvadem

Jul 27, 2021, 10:34:33 PM
> On Wednesday, 28 July 2021 at 10:49:58 UTC+10, Derek Ignatius Asirvadem wrote:
> > On Tuesday, 27 July 2021 at 21:37:57 UTC+10, Nicola wrote:
>
> >>>>
> Here is my take on science, as it has been from 350 BC to 1911. (Which has been under attack since 1911, by Modernism.) It is a body of knowledge (certainty: the Latin word means knowledge not speculation). Of course, one can speculate BASED ON that body of knowledge, and that is HYPOTHESIS, which when confirmed by tests is elevated to THEORY. If and when the theory is proved (proper proof, proper method, not merely a mathematical definition of a theory), it progresses to a truth, and is added to that body of knowledge.
>
> When that theory, which is proved in one or the other vertical gets proved in many verticals, it is elevated to LAW, such as the LAW of Thermodynamics, and it applies to all other sciences. Eg. evolutionism fails the second LAW of Thermodynamics.
> <<<

The purpose of science is to determine the truth. (Truth pre-existed its notice by humans.)

Eg. in the old days, scientists were RESPONSIBLE humans, they dealt with issues within their science, and tore down absurd theories, they applied the Four Laws of Thought and resolved any unresolved issue (LEM), they rejected any contradiction (LNC). The result being, lay people did not have to argue about an issue. Due to their grounding in Reality, in Objective Truth, opinions and consensus and citations were irrelevant. One either accepted objective truth and remained sane, or one rejected it and embraced insanity. There is no argument to be had.

In these dark days, when the best of human filth is propagandised as “science”, the counter-point needs to be articulated.

With the implementation of Modernism in science, since 1911 (date according to my research thus far, Modernism as it destroyed Philosophy was late 18th century):
- Four Laws of Thought rejected
- Logic suppressed
- Composition & Causation (Cause & Effect) rejected
- the natural hierarchy rejected
- Atomicity rejected, fragments divorced from their context (the atom) and elevated
- limited to Materialism (the intellectual realm rejected)
- instead of ONE level of speculation BASED ON knowledge, speculation upon unproved speculation
--- speculation upon unproved speculation upon unproved speculation
----- speculation upon unproved speculation upon unproved speculation upon unproved speculation
- theories that keep changing (which in and of itself proves the theory false) and allowed to keep changing, in the fervent hope that if they change enough
--- Evolutionism is the most propagandised and thus the best example: it defies logic and science; not a single shred of proof in 170 years; it lies in denial of evidence that proves it false; and the theory keeps changing
- introduction of relativism; subjectivism;
--- as infamously promoted by the world’s greatest plagiarist, alberto sow-sucking einstein
--- whose theories (other than the ones he stole) have all broken down, due to his plugging in false numbers that he “believed in”, instead of properly determined scientific numbers
- thus the focus has moved, the purpose has been corrupted,
- the purpose is now to PRODUCE the “truth”
--- which by definition means that it is not truth but fiction contrived by man
- it is pseudo-science; anti-science; Modern “science” because it contradicts established science
- mathematics (formalism; a language) promoted as a “science”, as a “language of the universe”
- reframing truth as “claims”; Straw Man arguments against Science
- since there is no objective truth, any and all issues can be argued, ad infinitum, without resolution
- citations and consensus have become the measure of probability
- totally irresponsible for adult humans
- nothing less than insanity

In this field, as evidenced, concerted suppression of the truth, the /RM/, and massive propaganda elevating 1960’s Record Filing Systems as “relational”. Anything the freaks did not invent (even FORTY YEARS after it was implemented by scientists) is ignored; denied, so that they can “invent” it from scratch.

Whereas Science is characterised by Objective Truth; Reality, Modern “science” is characterised by its performers, its propagandists. Here is a perfect example of two performers, one a heavily marketed “scientist” in conversation with another performer. This is the same freaky idiot that promotes Newton’s Laws as precisely what they are not. The diminution of Science, the elevation of idiocy as “science”.

____ https://www.youtube.com/watch?v=VbUoaBJd2sA

Hence I reject it outright. As evidenced in my posts.

If requested, I can articulate that further, or provide further detail; evidence; etc. As if there is not enough evidence in these threads.

Cheers
Derek

Nicola

Jul 28, 2021, 3:55:29 AM
On 2021-07-27, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> On Tuesday, 27 July 2021 at 20:59:24 UTC+10, Daniel wrote:
>> If you're using a clock-based timestamp such as DATETIME, instead of
>> a server-wide monotonic integer (which is what the Microsoft
>> ROWVERSION is), what do you do to handle the possibility of clock
>> skew on the database server?
>
> What skew ??? Where is this skew ???

How do you handle a situation in which the system's clock starts
reporting an inexact time? It may not happen frequently, but it does
happen (say, the NTP daemon crashes, or gets the wrong time for some
reason). Or, if you have two servers writing data into the database,
their clocks might be off, say, by a few seconds.

Nicola

Derek Ignatius Asirvadem

Jul 28, 2021, 6:36:12 AM
> On Wednesday, 28 July 2021 at 17:55:29 UTC+10, Nicola wrote:
> > On 2021-07-27, Derek Ignatius Asirvadem wrote:
> >> On Tuesday, 27 July 2021 at 20:59:24 UTC+10, Daniel wrote:

Nicola

I will change the sequence of your questions, in order to provide a normalised answer.

> >> If you're using a clock-based timestamp such as DATETIME, instead of
> >> a server-wide monotonic integer (which is what the Microsoft
> >> ROWVERSION is), what do you do to handle the possibility of clock
> >> skew on the database server?
> >
> > What skew ??? Where is this skew ???
>
> How do you handle a situation in which the system's clock starts
> reporting an inexact time?

I do not understand why this is relevant, or why an SQL developer (using a database, plus a good OLTP strategy, even if not perfect or standard-compliant) should concern themselves about it.

We are setting the row TimeStamp (Datatype DATETIME, millisec resolution, not microsec) from the SQL Server via GETDATE().

Whatever time problems occur, they occur in the server, and it has to be dealt with in the server. There is nothing that the SQL developer can do, either beforehand (I expect you think this somehow affects the Transaction Template), or after.

If you are that concerned about it, use a VersionNo (Datatype INT) instead. Set it to 1 on INSERT, set it to VersionNo + 1 on UPDATE.
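That VersionNo scheme can be sketched outside SQL Server. The following is a minimal simulation using Python's sqlite3; the table and column names are illustrative, not taken from the thread:

```python
import sqlite3

# VersionNo-based optimistic locking: the column starts at 1 on INSERT
# and is incremented on every UPDATE.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Person (PersonId INTEGER PRIMARY KEY, "
            "Name TEXT, VersionNo INTEGER NOT NULL)")
con.execute("INSERT INTO Person (PersonId, Name, VersionNo) VALUES (1, 'Alice', 1)")

def update_name(con, person_id, new_name, version_seen):
    # The UPDATE succeeds only if the row still carries the version we read
    # earlier; otherwise another user has changed it and we must re-read.
    cur = con.execute(
        "UPDATE Person SET Name = ?, VersionNo = VersionNo + 1 "
        "WHERE PersonId = ? AND VersionNo = ?",
        (new_name, person_id, version_seen))
    return cur.rowcount == 1

# Two users both read the row at version 1; only the first update sticks.
assert update_name(con, 1, "Alicia", 1)
assert not update_name(con, 1, "Alyce", 1)
```

The point of the sketch is that the version check and the increment happen in the same single UPDATE statement, so no clock is involved at all.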

----
> Or, if you have two servers writing data into the database,
> their clocks might be off, say, by a few seconds.

1. Those are issues for whoever writes the server to figure out, and to secure. Again, it has nothing to do with the database or the SQL developer, they can’t do anything about it.

2. Unless you are abusing the term “server”, there is no such thing as “two servers writing data to the database”.

3. In mickey mouse freeware (and Oracle) which are not servers by any stretch of the imagination (ok, ok, academics have more stretched imagination capability than normal folk), there is no server, instead they have hundreds of programs running as a herd (inspect the following link), on a machine that is treated as a server (which needs to be 100 to 1,000 times bigger and more powerful than a machine that is used for a commercial SQL Platform). The use of the term “server” is false, fraudulent, disgusting.

Yeah, sure, in mickey mouse land, the “server” is not a server, it is not SQL compliant, and it does not provide any of the consistency or security (here we mean execution and processing consistency and security) that is provided by commercial SQL Platforms. You will be writing a hell of a lot more code than in SQL, and you will have to take care of things manually (such as multiple manual lock strategies, and now this “time skew”), that are done automatically in SQL.

4. It is very sad, that such burdens, which are the domain of the “server” developer, are foisted onto the SQL developer. But it must be noted that that is the hallmark of the insane: they are not responsible (by definition, the premise of the Mental Health Act), and they impose their problems on normal humans, who can be responsible.

5. Any code you write for a PooPooPooGross anti-server, 100% of it is not SQL, it cannot be ported to an SQL Platform. And when you move to one, all that low-level code will be eliminated.

You are best advised to stay away from such filth. You get what you pay for.

Link
Go to this page:
__ https://www.softwaregems.com.au/Documents/Sybase%20GEM%20Documents/
Read the section on *Database Server Architecture*
Download the *Oracle & Freeware vs Sybase ASE* PDF.
Just four pages with pretty pictures.

----
> It may not happen frequently, but it does
> happen (say, the NTP daemon crashes, or gets the wrong time for some
> reason).

Well, if you are using NTP, that means you have “distributed” mickey mouse processing (as distinct from genuine distributed processing). For the purpose of distributing the task or load, and it again is fraudulently promoted as “parallel processing”. Nothing could be further from the truth, that is not parallel processing. That is just using a [machine] server farm, instead of a single [machine] server, because (a) your mickey mouse program herd can’t handle the query on a single [machine], and (b) the [machine] server cannot be upgraded any further.

Doing that for a database server (real or mickey mouse) is hysterically stupid (it is fine for number crunching and bitcoin mining). It is not architecture, but mere resource addition. And then you CREATE a whole mess of new problems that a database server does not have: synchronisation; shared memory objects (oops, you don’t have even the notion of shared memory); distributed lock contention; massive contention on the single transaction log file ... the list is endless.

So they take their non-architecture to a “whole new level”, non-architecture squared.

Even if the freaks did all that stupid stuff, it should be done such that the SQL developer and DBA have nothing to do, nothing to concern themselves with. It is a program herd problem that only the program herd can deal with.

Again, foisting that burden onto the developer is criminal insanity, completely irresponsible. But aaaah, the academics think that that is normal.

You are best advised to stay away from such filth. You get what you pay for.

In case you do not understand, Sybase; MS; DB2 are genuine servers, with a server architecture, offering genuine parallel processing, using chip threads for load distribution. They execute as a SINGLE PROCESS on unix. There is no distributed processing à la multiple machines, because doing that for a database server is hysterically stupid, and there is no problem at all executing on a tiny machine, which can be upgraded as required (100 to 1,000 times smaller than that demanded by PusGres or Orable). (Sybase does have a Cluster Edition, but that has a different purpose, not relevant here).

>>>>
Hell, I run several Sybase servers that host thousands of client connections, with just 12 chip threads. And I am definitely not the only one.

I am getting the consistent impression that you do not bother to read the links I give you. I do not ask you to read anything that is not completely relevant. I gave you this link a couple of times.
__ https://www.softwaregems.com.au/Documents/Article/Sysmon%20Processor/Sysmon%20Processor%20Eg%20Date%20Delta.pdf
Read it. Examine it. Ask questions. Just the short *Selection* section at the top. On this machine, we hosted hundreds of connections, on just four engines (four unix processes [that was 2013, using Process Architecture, before the advent of Thread Architecture] ). Using:
- Frame 56%
- Unix/CPU User 19%
- Sybase (four processes) 20%
--- That is, Sybase used 20% of 19% machine power.
<<<<

Gear up on Sybase ASE and find out what a real server is; what real parallelism is. You can spend all the time you save (more than half of what you will spend on a PeePooPeeGress implementation) doing something useful, instead of struggling with low-level issues that are not your problem.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 1, 2021, 10:01:39 PM
Guys and dolls

Can we please get some progress here. We have not finished, and the Template is not complete.

Recap for the exact position that we are at.

> On Wednesday, 28 July 2021 at 08:03:50 UTC+10, Derek Ignatius Asirvadem wrote:
> > On Tuesday, 27 July 2021 at 20:59:24 UTC+10, Daniel wrote:
> >
> > >We need to understand it forwards. His Transaction completed and it persists ... and therefore he has confidence in the system. XOR his Transaction completed, but it did not persist ... and therefore he has no confidence in the system. Transaction Sanity/p4.
> >
> > > Was the Transaction Durable or not ?
> > No. Neither the User 1 or User 2 transactions make use of the timestamp based concurrency control.
> Yes, that is the back end, or developer’s understanding.
>
> What is in the user’s mind, how does he perceive it ?

What is in the user’s mind, how does he perceive it ?

Pages 4 and 5 in the Transaction Sanity doc.

> > > I am provoking a thought process in your mind. Focus on *BASIS*. Here you have to get rid of any MV-non-CC or “MVCC” notions because they are bankrupt. Think in terms of Reality, an online shared database that contains a single version of any single fact.
>
> > So what you're getting at, especially with the mention of paper-based, is: It's inconceivable that you could inadvertently clobber someone else's change to that single piece of paper without being cognisant of that. After all, they've written on the paper and put it back, and then you've picked it up and looked at it. Obviously being a single piece of paper, without duplicates, access to the piece of paper involves queuing up for it and waiting your turn.
> Yes. Excellent parallels drawn. Keep going.
> - In the paper system, the person (first-up in the queue) would have to be cognisant.
> - think: a library card system, arranged in 200 little drawers for convenience of physical access by many people
> - In the automated system, the program has to be cognisant.
> - In the paper system, the person (first-up in the queue for a particular card) has an idea of its data currency based on something, so that when he returns (with the intended update), he would be cognisant that the card has changed.
> - In the automated system, the program has to do that.

What is the *BASIS* for confidence that an update can be performed on a row, or library card ?

I teach this stuff, I cannot give you the Lecture Notes or the SG OLTP Template, but I have given you a Transaction Sanity doc, so that at least this thread and the issues herein can be fully understood, that it can progress to closure; the Template can be completed. You need to have an illumination, which means progressing your own thinking, not merely being told. Which is why I am provoking thought.

When you have that illumination, you will (a) realise the principle of an OLTP system, and (b) therefore, how false and incorrect the notion of multiple offline versions is.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 1, 2021, 10:19:13 PM
Nicola

> On Tuesday, 27 July 2021 at 21:37:57 UTC+10, Nicola wrote:
>
> Hence, I am eager to hear what you have to say
> about it.

Please feel free. Ask a specific question.

> Even got SQL Server and Sybase installed

Well, I am certainly not going to dissuade you from learning about the real world; about commercial SQL Platforms; what SQL really is; what ACID Transactions mean. Good on you, for taking one more step out of the academic isolation from all that. On the contrary, I encourage you, as the first academic who is trying to close the chasm, of FORTY YEARS of isolation from the real world.

But that will take six, maybe twelve months. And that will be severely limited by the academic mindset, that you think you know SQL; ACID; commercial platforms; etc, which is pure perversity, as I have detailed and evidenced in many posts. So please learn those subjects with fresh mind, with an accurate disposition: that you do not know those subjects. Otherwise it would be like a prostitute trying to learn about chastity.

> ... to verify your claims
> :)

Well, if you enumerate even one or two of those declarations (that you re-frame as "claims"), which are not diminished by your re-framing, I can answer them at short notice, and refer you to manuals, etc. No need to wait six months, no need to download software and gear it up yourself. Please name the declarations that you have trouble accepting.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 2, 2021, 1:04:54 AM
Nicola

> On Monday, 2 August 2021 at 12:19:13 UTC+10, Derek Ignatius Asirvadem wrote:
> > On Tuesday, 27 July 2021 at 21:37:57 UTC+10, Nicola wrote:
>
> > ... to verify your claims
> > :)
>
> Well, if you enumerate even one or two of those declarations (that you re-frame as "claims")

Yes, of course that is dishonest. In case it needs to be said, I am not saying that you are consciously being dishonest. I am saying, that is standard academic practice, that you are schooled in. The classic method they use: re-frame a declaration as something that it is not, and then either burn it, a Straw Man of your own creation, or at the least diminish it.

Further, you take declarations and absolute statements very lightly, as re-framed "claims". Even your own. You do not have the self-respect, or respect for the science, that is required to treat them with respect, to obtain resolution. You do not correct your declarations (I won't call them "claims") when they are proved false. Eg. the Movie Title thread. Eg. the paper you wrote declaring that it could *NOT* be modelled using the /Relational Model/, that it could only be done by resorting to physical Record Ids ala the anti-Relational RM/T.

Cheers
Derek

Daniel Loth

Aug 2, 2021, 8:32:27 AM
Hi Derek,

---

> Concerning clock skew

Nicola's reply about clock skew covers what I was getting at. Specifically, a clock moving backwards or forwards. Moving backwards means there is potential for you to see the same datetime twice.

You don't see it happen often, but every now and then it does. Oftentimes things like authentication services stop working because they're heavily reliant on relatively precise time keeping.

This is why I personally lean towards the SQL Server rowversion type mentioned.

Rowversion is notably just a synonym for 'timestamp' as you've described Sybase 4.2 as having. Another Microsoft-ism. In fact, you can still use the 'timestamp' data type when writing the table DDL and that'll work fine, though the 'timestamp' keyword is technically deprecated.

I think the deprecation is because Microsoft's 'timestamp' is not 'timestamp' in the ISO standard sense. In SQL Server, a non-null rowversion is equivalent to binary(8). A nullable rowversion is equivalent to varbinary(8).
To make sense of the number expressed as binary(8), you can cast it to a bigint. At that point it becomes useful for concurrency control in much the same way that UpdatedDtm is.
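That binary(8)-to-bigint reading can be illustrated outside the server. A sketch in Python, where the hex value is hypothetical, not a real rowversion from any server:

```python
# A rowversion value is binary(8); SQL Server renders it as big-endian hex
# (e.g. 0x00000000000007D1), and casting it to bigint yields that number.
raw = bytes.fromhex("00000000000007D1")  # hypothetical rowversion value
as_bigint = int.from_bytes(raw, "big")
assert as_bigint == 0x07D1  # i.e. 2001
```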

---

> Concerning durability and user perception

In the user's mind, you are right that their perception of the system is that it has not lived up to the guarantees of the ACID concept of durability. They won't recognise that it was written to disk for a fleeting moment before their changes were clobbered by a subsequent transaction.
They simply see the final outcome, which is that a subsequent user's actions have overwritten their own.

You also mentioned user training teaching users that they would expect to see their changes after performing a transaction, and teaching users that the durability property means that they can be confident that a reportedly committed transaction indicates that the changes made in their transaction will stick.

In the library card system, the basis for their confidence is that they, and they alone, can update the physical card and return it to the drawer. And it's just one card, so their changes will be there when the next person takes it from the drawer.
And this is essentially what the lock management system does. It affords that mutually exclusive access to the library card.

Is that the basis you are asking for?

---

> Lost currency on page 5 in the Transaction Sanity document

I read this. Basically, the problem is that I load a screen with a number of fields on it. I grab a drink. I come back 10 minutes later, make a minor change, and click 'Save'. In doing so I've just wiped out the work of a colleague who made a lot of changes during my 10 minute absence because what happened to be on my screen was the old, not-current, data when I clicked 'Save'.

It needn't be such a protracted period of time of course. The same could happen in mere seconds.

Lost update protection prevents this.
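That stale-screen scenario, and the check that rejects it, can be sketched with Python's sqlite3 standing in for SQL Server; the UpdatedDtm values below are made up for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Person (PersonId INTEGER PRIMARY KEY, "
            "BirthPlace TEXT, UpdatedDtm TEXT NOT NULL)")
t0 = "2021-08-02 10:00:00.000"
con.execute("INSERT INTO Person VALUES (1, 'Sydney', ?)", (t0,))

def save(con, person_id, birth_place, dtm_seen, now):
    # Reject the write if the row has changed since the screen was loaded:
    # the UPDATE matches only when UpdatedDtm is still the one we saw.
    cur = con.execute(
        "UPDATE Person SET BirthPlace = ?, UpdatedDtm = ? "
        "WHERE PersonId = ? AND UpdatedDtm = ?",
        (birth_place, now, person_id, dtm_seen))
    return cur.rowcount == 1

# Both users load the screen at t0; the colleague saves first.
assert save(con, 1, "Melbourne", t0, "2021-08-02 10:05:00.000")
# My save, based on the stale t0 screen, is rejected instead of
# silently wiping out the colleague's work.
assert not save(con, 1, "Perth", t0, "2021-08-02 10:10:00.000")
```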

---

> The template being unfinished

Is it unfinished due to the lack of lost update protection? Or is there something else missing?
As far as I can see, it's already doing a good job in terms of contention management.

I've posted a new GitHub Gist here: https://gist.github.com/DanielLoth/0599c2475368083acc9032d34f0919e1
This revision of the code contains lost update protection using the UpdatedDtm column in the Person table.

---

Cheers,
Dan

Nicola

Aug 2, 2021, 6:28:49 PM
On 2021-08-02, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
> Can we please get some progress here. We have not finished, and the
> Template is not complete.
>
> Recap for the exact position that we are at.
>
>> On Wednesday, 28 July 2021 at 08:03:50 UTC+10, Derek Ignatius Asirvadem wrote:
>> > On Tuesday, 27 July 2021 at 20:59:24 UTC+10, Daniel wrote:
>> >
>> > >We need to understand it forwards. His Transaction completed and
>> > >it persists ... and therefore he has confidence in the system. XOR
>> > >his Transaction completed, but it did not persist ... and
>> > >therefore he has no confidence in the system. Transaction
>> > >Sanity/p4.
>> >
>> > > Was the Transaction Durable or not ?
>> > No. Neither the User 1 or User 2 transactions make use of the
>> > timestamp based concurrency control.
>> Yes, that is the back end, or developer’s understanding.
>>
>> What is in the user’s mind, how does he perceive it ?

> Pages 4 and 5 in the Transaction Sanity doc.

I'd say that there are two Logical Units of Work (LUW—I think the term
was used in CICS), one by User 1 and one by User 2. When updating
a value, each user will assume that the value they have replaced is the
one they had previously read (if the system lets the update go through).

Each LUW consists of more than one database transaction. I refer to
each SELECT statement in that example as a (read-only, database)
transaction. Hence, some state must be maintained between database
transactions, to make each LUW meet the user's expectations. In this
case, the state is the pair (@CreditRatingCode,@BirthPlace), i.e., the
information initially retrieved by each user. Then, both TweedleDumb and
TweedleDumber should check that such values have not been changed since
the corresponding previous SELECT, and do so before updating the record.
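The value-based re-check described above can be sketched as follows, with Python's sqlite3 as a stand-in; the column names are borrowed from the example in the thread, the rest is illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Person (PersonId INTEGER PRIMARY KEY, "
            "CreditRatingCode TEXT, BirthPlace TEXT)")
con.execute("INSERT INTO Person VALUES (1, 'AA', 'Sydney')")

# First (read-only) transaction: the LUW carries this state forward.
state = con.execute("SELECT CreditRatingCode, BirthPlace FROM Person "
                    "WHERE PersonId = 1").fetchone()

def update_if_unchanged(con, person_id, state, new_code):
    # Second transaction: re-check the remembered values before writing.
    cur = con.execute(
        "UPDATE Person SET CreditRatingCode = ? "
        "WHERE PersonId = ? AND CreditRatingCode = ? AND BirthPlace = ?",
        (new_code, person_id, state[0], state[1]))
    return cur.rowcount == 1

assert update_if_unchanged(con, 1, state, "AB")       # values unchanged: accepted
assert not update_if_unchanged(con, 1, state, "AC")   # state now stale: rejected
```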

Rather than checking the values directly, a SELECT could also read
a version number of the retrieved record and pass it to TweedleDumb
(resp., TweedleDumber), which would check that the version number still
matches (by re-reading it), in which case it would update the record and
increase the version number.

Nicola

Nicola

Aug 3, 2021, 3:58:56 AM
Daniel summarized Derek's approach as follows:

>Derek in particular described his approach and in doing so expressed
>two principles:
>
>1. Only acquire exclusive (write) locks if the transaction will
>potentially succeed.
>
>2. Never acquire exclusive locks if embarking on the transaction is
>futile (or put another way, don't start what you absolutely cannot
>finish).
>
>Consistent with those principles, the ideal was described as:
>
>1. A validate block - Where we 'look before we leap' (to borrow that
>expression). In this block we are reading only. We can use the READ
>COMMITTED isolation level, thereby acquiring and releasing shared locks
>in quick succession. No shared locks are held after the conclusion of
>the statement (i.e., the select query in this case).
>
>2. A transaction block - Where we check again and, if the conditions
>are still right (i.e., other users might have changed the data), we can
>proceed to do our work and then commit the transaction. In this block
>we acquire write locks, and these locks are held for the remainder of
>the transaction (until we commit or rollback).

Exercise 18.11 (and its solution) from Silberschatz's "Database System
Concepts" seems relevant:

https://www.db-book.com/db7/Practice-Exercises/PDF-practice-solu-dir/18.pdf

Nicola

Derek Ignatius Asirvadem

Aug 3, 2021, 6:26:17 AM
Dan

Thanks for your response.

> On Monday, 2 August 2021 at 22:32:27 UTC+10, Daniel Loth wrote:
>
> > Concerning clock skew
>
> Nicola's reply about clock skew covers what I was getting at. Specifically, a clock moving backwards or forwards. Moving backwards means there is potential for you to see the same datetime twice.
>
> You don't see it happen often, but every now and then it does. Oftentimes things like authentication services stop working because they're heavily reliant on relatively precise time keeping.
>
> This is why I personally lean towards the SQL Server rowversion type mentioned.
>
> Rowversion is notably just a synonym for 'timestamp' as you've described Sybase 4.2 as having. Another Microsoft-ism. In fact, you can still use the 'timestamp' data type when writing the table DDL and that'll work fine, though the 'timestamp' keyword is technically deprecated.
>
> I think the deprecation is because Microsoft's 'timestamp' is not 'timestamp' in the ISO standard sense. In SQL Server, a non-null rowversion is equivalent to binary(8). A nullable rowversion is equivalent to varbinary(8).
> To make sense of the number expressed as binary(8), you can cast it to a bigint. At that point it becomes useful for concurrency control in much the same way that UpdatedDtm is.

Yes.

TIMESTAMP and ROWVERSION is the database timestamp from the internal DBTABLE. It is the no of writes to the database, a sequential number that already exists (and thus does not have to be created and maintained ... maintenance of such a number [eg. record ID] is a significant overhead and contention issue, to be avoided).

For inspection, BINARY() obtains nothing. (BINARY “works”, in the sense that it allows inspection of any column.)

BIGINT is correct, and it is readable, SELECT-able. There you can see the incrementing [database-wide] value.

If there are no NULLs in the database, you will not see VAR*anything, all columns will be fixed-length.

The way Sybase and MS store a NULL is, by declaring the length of the column to be zero. In order to get that length stored, it has to be a VAR column (fixed length columns do not need such a definition). That adds 4 bytes to the column [position & length].

Clock skew is not an issue, because it is in the server, and we set the TimeStamp from the server GETDATE(). There never is a difference between the Time in the client vs the Time in the server, we work on server Time only.

If clock skew affects the server, then fix the server. We do not have to take responsibility in the database or app for a server fault.

If you have an el cheapo server that suffers such things, use a VersionNo instead. That is an actual row version number. (MS ROWVERSION is a false label.) Set it to 1 on INSERT, increment it on UPDATE.

> ---
>
> > Concerning durability and user perception
>
> In the user's mind, you are right that their perception of the system is that it has not lived up to the guarantees of the ACID concept of durability. They won't recognise that it was written to disk for a fleeting moment before their changes were clobbered by a subsequent transaction.
> They simply see the final outcome, which is that a subsequent user's actions have overwritten their own.

So they do know that their Transaction was not [D]urable. (Unlike academics and novice developers, I have never had to explain to a user that [D]urable does not mean durable but some private definition.)

> You also mentioned user training teaching users that they would expect to see their changes after performing a transaction, and teaching users that the durability property means that they can be confident that a reportedly committed transaction indicates that the changes made in their transaction will stick.

Yes, that is what [D]urable in ACID means. Due to the pig poop that academics write about, 95% of the people out there think that [D]urable is delivered by the server or herd of programs, they do not understand that it applies to Transaction code, they know nothing of Lost Update, or of Optimistic Locking. As evidenced here.

> In the library card system, the basis for their confidence is that they, and they alone, can update the physical card and return it to the drawer. And it's just one card, so their changes will be there when the next person takes it from the drawer.
> And this is essentially what the lock management system does. It affords that mutually exclusive access to the library card.

[We are using the library card example to understand a paper-based system.]

Now that is a rigid library card system, there is only one version of each card, and it is manually removed from the card drawer for the duration of the change. Which means, other users queue up, waiting for the card to be returned, before they can use it. No lock manager is required. It becomes inadequate as the number of users contending over a fixed set of cards grows: the queues grow long for popular books (cards).

That is not what I meant. The advanced use of the library card system allows users to read the cards without removing them (eg. answering a call “do you have /Dr Who/ in the library ?”), and only remove the card when they need to update it (lend the book). So they inspect a card; record the TimeStamp that is on the card; go back to their other work. If and when they need to update the card
- they try to fetch it
- If the TimeStamp has changed, they know the info that they used is out-of-date, and they have to grab the new info from the card.
This allows substantially more users to contend over the same set of library cards, ie. high concurrency.

> Is that the basis you are asking for?

No.

You have all the mechanics, unfortunately from my 11-year-old TTM post, but not the illumination. I won’t labour it any longer.

The basis is the TimeStamp on the card, on the row in the database. The basis for any action:
- inspection followed by some work,
- or a Transaction,
- physical in the library card example, or logical in a row in the database
is the currency of the card or row. The TimeStamp.

The Lost Update happens because they did not observe the TimeStamp, and did not check before the update.

The Lost Currency happens because they did not observe the TimeStamp, and did not check before the update.

The Naïve Solution
- fixes the problem without understanding
- but ensures the system is high contention; low concurrency; lock-bound



> ---
>
> > Lost currency on page 5 in the Transaction Sanity document
>
> I read this. Basically, the problem is that I load a screen with a number of fields on it. I grab a drink. I come back 10 minutes later, make a minor change, and click 'Save'. In doing so I've just wiped out the work of a colleague who made a lot of changes during my 10 minute absence because what happened to be on my screen was the old, not-current, data when I clicked 'Save'.

At the point at which you hit ‘Save’, the *BASIS* for confidence that your Transaction would succeed was ???

Answer: it is dependent on the Currency of the data that you previously retrieved. Which currency was lost the moment you retrieved it. Because it is an online shared database. That is used by other online users. Who are updating the online shared database, all the time.

The key point to be understood here (the illumination) is the currency of the data. That it changes. That the currency is IN the data. That the currency must be stored as a TimeStamp in the row.

----

Now if you understand this deeply, you will understand that the notion of grabbing a row, AS A VERSION, and holding it, in your safe space, is hysterically stupid. Because the moment you grab the row, the version you hold is obsolete. It is a private offline version. It breaks the first principle of an online shared database, and [schizophrenically, in denial of reality] pretends that the obsolete version is something that can be relied upon.

Now you have learned precisely why it is not the version, or the TimeStamp on the version (when the version was grabbed), that is relevant, but the TimeStamp on the row when the version was grabbed. The former is yet again, subjective, self-centred stupidity, AND in denial of the fact that the database is objective truth, a single version of the truth. The latter is in formal recognition that the database is the single version of the truth AND that it is shared, AND that the sharing is sociable, affording high concurrency.

Which is why “MVCC” can never work.

In order to make it work at all, they have to recognise that Stonebraker lied; lied; lied, their mantra is false; false; false, there is no Concurrency Control in “MVCC”, it is MV-non-CC, and they have to add a Lock Manager for Concurrency Control. Orable; MySql; PissGriss, all have one. MySQL is the best, because it affords logical locks that do not interfere with its Lock Manager; PissGriss is the worst because it is a horrible implementation (10,000 cultists spread across the planet) and it allows user-level locks in its Lock Mangler, thereby guaranteeing conflicts; deadly embraces; and more lock contention, the precise thing that it alleges to solve. The level of dishonesty is criminal.

Further, MV-non-CC incurs a huge overhead, that of maintaining all those multiple fantasy versions, which it does all across the entire storage range of pages.

Anyone with half a brain would realise that since a Lock Manager fixes the problem, and it is essential, gee whiz, we can eliminate the MV-non-CC fantasy altogether. But no, that would mean the academics have half a brain, they would have to admit the MV-non-CC is an abject failure, both in principle and in implementation. Instead, they double down on the insanity that has been proved insanity, and they go for more locking, more layers of locking, more types of locks (you gotta love their darling “predicate locks”). They hope and pray to their deities, that their mountain of pig poop that has not worked in FORTY YEARS, will somehow work if they sacrifice their children to it.

Nicola says he “loves his ‘2PL’ Lock Mangler”, without acknowledging the fact that it is the only way to make the fantasy work, that it eliminates MV-non-CC.

> ---
>
> > The template being unfinished
>
> Is it unfinished due to the lack of lost update protection? Or is there something else missing?
> As far as I can see, it's already doing a good job in terms of contention management.
>
> I've posted a new GitHub Gist here: https://gist.github.com/DanielLoth/0599c2475368083acc9032d34f0919e1
> This revision of the code contains lost update protection using the UpdatedDtm column in the Person table.

Yes. That revision, not the previous, is correct, for the example tables given, and your decision as to where in the hierarchy you locate the UpdatedDtm.

The only note I have is this, to upgrade it to a generic Template, for issuing to all your developers:
- place the UpdatedDtm in Organisation as well
- test that in the Validate block (and in the client)
- such that you lock or block at the highest level in the hierarchy, thereby reducing the length of the lock chain, for all active Transactions
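As a sketch only (assuming the Organisation::Person hierarchy from Daniel's Gist, with a hypothetical UpdatedDtm column added to Organisation), the Execute block would then touch the top of the hierarchy first:

>>>> SQL
DECLARE @OrgId  INT,
        @OrgDtm DATETIME,   -- Organisation currency read in the Validate block
        @NowDtm DATETIME

BEGIN TRAN
-- Lock at the highest level in the hierarchy first, so that contending
-- Transactions block on one row, shortening the lock chain
SELECT  @NowDtm = UpdatedDtm
    FROM Organisation WITH (UPDLOCK)
    WHERE OrganisationId = @OrgId
IF ( @NowDtm != @OrgDtm )
    BEGIN
    ROLLBACK TRAN   -- currency lost: the caller must re-read and retry
    RETURN
    END
-- ... proceed down the hierarchy: Person, etc ...
COMMIT TRAN
<<<<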

------------------------------------
-- Optimistic Locking --
------------------------------------

That is the simple and elegant form of providing genuine OLTP; low contention; high concurrency; full ACID in Transactions; zero deadlocks. Known as such since 1965 for pre-Relational systems, since 1984 for commercial SQL platforms.

Nothing else can legitimately be called Optimistic Locking. Note that the freeware and academic mindset have hilarious notions that they label “optimistic locking”, which is fraudulent. As well as “pessimistic locking”, which they say is what apps using Ordinary Lock manager have to do, which is false, and in hysterical denial of the evidenced facts, that it is they themselves who do it.

Not possible in the freeware.

Not possible in the academic mindset (they do not teach it, they teach insanity).


Transaction Sanity doc updated. It is now complete, unless there is further discussion.

__ https://www.softwaregems.com.au/Documents/Article/Database/Transaction/Transaction%20Sanity.pdf

Cheers
Derek

Derek Ignatius Asirvadem

Aug 3, 2021, 7:29:33 AM
> On Tuesday, 3 August 2021 at 08:28:49 UTC+10, Nicola wrote:
> > On 2021-08-02, Derek Ignatius Asirvadem wrote:
> > Can we please get some progress here. We have not finished, and the
> > Template is not complete.
> >
> > Recap for the exact position that we are at.
> >
> >> On Wednesday, 28 July 2021 at 08:03:50 UTC+10, Derek Ignatius Asirvadem wrote:
> >> > On Tuesday, 27 July 2021 at 20:59:24 UTC+10, Daniel wrote:
> >> >
> >> > >We need to understand it forwards. His Transaction completed and
> >> > >it persists ... and therefore he has confidence in the system. XOR
> >> > >his Transaction completed, but it did not persist ... and
> >> > >therefore he has no confidence in the system. Transaction
> >> > >Sanity/p4.
> >> >
> >> > > Was the Transaction Durable or not ?
> >> > No. Neither the User 1 or User 2 transactions make use of the
> >> > timestamp based concurrency control.
> >> Yes, that is the back end, or developer’s understanding.
> >>
> >> What is in the user’s mind, how does he perceive it ?
> > Pages 4 and 5 in the Transaction Sanity doc.
> I'd say that there are two Logical Units of Work (LUW—I think the term
> was used in CICS), one by User 1 and one by User 2. When updating
> a value, each user will assume that the value they have replaced is the
> one they had previously read (if the system lets the update go through).
>
> Each LUW consists of more than one database transactions. I refer to
> each SELECT statement in that example as a (read-only, database)
> transaction.

No. That is not a Transaction. You are still holding onto an incorrect notion that non-transactions are somehow Transactions. That contradicts (a) reality, (b) ACID, (c) SQL ACID compliance. If you keep calling a chihuahua a tiger, you will remain confused. Yes, I know, that is the PissGress mindset, their totally incorrect way of handling their offline versions; their additional 2PL locking, and for that they have redefined "transaction" ... they are always in some hysterically stupid "transaction" mode. It is false.

> Hence, some state must be maintained between database
> transactions, to make each LUW meet the user's expectations. In this
> case, the state is the pair (@CreditRatingCode,@BirthPlace), i.e., the
> information initially retrieved by each user. Then, both TweedleDumb and
> TweedleDumber should check that such values have not been changed since
> the corresponding previous SELECT, and do so before updating the record.
>
> Rather than checking the values directly, a SELECT could also read
> a version number of the retrieved record and pass it to TweedleDumb
> (resp., TweedleDumber), which would check that the version number still
> matches (by re-reading it), in which case it would update the record and
> increase the version number.

Yes.

One correction. LUW was the original CICS term, and it is still used in IBM/DB2. But it means Transaction, not PoopDePoopGres "transaction", not whatever you reframe it to mean. LUW means a single database Transaction, controlled according to ACID.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 3, 2021, 7:29:46 AM
Nicola

Thanks for yours.

> On Tuesday, 3 August 2021 at 17:58:56 UTC+10, Nicola wrote:
>
> >Derek in particular described his approach and in doing so expressed
> >two principles:
> >
> >1. Only acquire exclusive (write) locks if the transaction will
> >potentially succeed.
> >
> >2. Never acquire exclusive locks if embarking on the transaction is
> >futile (or put another way, don't start what you absolutely cannot
> >finish).

Thanks, I appreciate credit as much as the next person, but I can't take it when it is not due. The entire method was established by IBM in the 360 CICS/TPC systems (Codd's employer). In 1979 I merely provided a rendition for our Cincom/TOTAL NDBMS customers. In 1993 I merely provided a rendition in SQL. I will take credit for that.

Other high-end providers have similar methods, possibly not quite as high concurrency as mine. I know no one else who guarantees high performance Transaction scalability and zero deadlocks.

> >Consistent with those principles, the ideal

It is not merely an ideal, it is a Formal Method, proved in millions of systems in the real world.
Established 1965.
Commonly used in OLTP systems: HDBMS; NDBMS.
Provided in SQL platforms since 1984.
Provided by high end database suppliers since 1993 (that I know of).

> was described as:
> >
> >1. A validate block - Where we 'look before we leap' (to borrow that
> >expression). In this block we are reading only. We can use the READ
> >COMMITTED isolation level, thereby acquiring and releasing shared locks
> >in quick succession. No shared locks are held after the conclusion of
> >the statement (i.e., the select query in this case).

In Sybase; MS; DB2, shared locks are held only during the READ operation (pertaining to a page or row), and released. Not the Statement.

> >2. A transaction block - Where we check again and, if the conditions
> >are still right (i.e., other users might have changed the data),

We check Data Currency for the Transaction (the first time, not "again").

> we can
> >proceed to do our work and then commit the transaction. In this block
> >we acquire write locks, and these locks are held for the remainder of
> >the transaction (until we commit or rollback).

Yes.

Additionally, note the precise notions of [A]tomicity, [C]onsistency, and [D]urability, as they apply to Transactions.

> Exercise 18.11 (and its solution) from Silberschatz's "Database System
> Concepts" seems relevant:
>
> https://www.db-book.com/db7/Practice-Exercises/PDF-practice-solu-dir/18.pdf

No.

They are stuck in the usual academic mindset, the examples are particularly about 2PL, which is an MV-non-CC artefact, not relevant to, or used in SQL OLTP Platforms, because we do not have 2PL. They do not understand or reference the "1PL" Lock Manager we have in the real world.

We definitely do not "re-execute" a Transaction, the Validate block is not a [paraphrased] "first execution without holding locks". The notion of re-executing a Transaction [note the compute-intensive !] is stupid. The Validate block is a separate formal article.

The "tree protocol" is in the Transaction, not in the server. They incorrectly worry about "acquiring locks that are not needed", blissfully ignorant of the fact that they are needed precisely for the "tree protocol", in order to (a) block at the highest level, and thus engage the fewest locks, and (b) the idiots do not realise that if they do not acquire the required locks, they are open to, they have created, a deadlock.

There are further errors, lack of precision, throughout. I won't enumerate it here. Eg. there is no wait for lock acquisition [compared with disk wait], it is practically instantaneous (all articles are in memory; nothing is on disk). There is a Lock Wait, which is a different thing, the server puts the task to sleep until the lock is available, and then wakes it up.

The usual superficial treatment of implementation concerns by academics. Granted, it is better than the Date; Darwen; Fagin pig poop, but it is in the same category, it just smells less offensive. Why don't they use this textbook across all universities, instead of the filth they do use, such as the "Alice Book" ?

Cheers
Derek

Nicola

Aug 3, 2021, 4:05:09 PM
On 2021-08-03, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> On Tuesday, 3 August 2021 at 08:28:49 UTC+10, Nicola wrote:
>> I refer to each SELECT statement in that example as a (read-only,
>> database) transaction.
>
> No. That is not a Transaction. You are still holding onto an
> incorrect notion that non-transactions are somehow Transactions.

Ok, let me understand: what are the differences in behaviour between

select … ;

and

start transaction;
select … ;
commit;

?

> One correction. LUW was the original CICS term, and it is still used
> in IBM/DB2. But it means Transaction, not PoopDePoopGres
> "transaction", not whatever you reframe it to mean. LUW means
> a single database Transaction, controlled according to ACID.

So, wrt to your Transaction Sanity example (latest revision), User
1 executes one LUW/Transaction, which starts at time T3. And User
2 executes one LUW/Transaction starting at time T4. Besides, the SELECTs
at T1 and T2, respectively, are not part of those Transactions. Is that
right?

If you call the whole stored procedure a Transaction, how do you refer
to each BEGIN TRAN… COMMIT block?

Nicola

Derek Ignatius Asirvadem

Aug 3, 2021, 8:15:03 PM
> On Wednesday, 4 August 2021 at 06:05:09 UTC+10, Nicola wrote:
> > On 2021-08-03, Derek Ignatius Asirvadem wrote:
> >> On Tuesday, 3 August 2021 at 08:28:49 UTC+10, Nicola wrote:
>
> >> I refer to each SELECT statement in that example as a (read-only,
> >> database) transaction.
> >
> > No. That is not a Transaction. You are still holding onto an
> > incorrect notion that non-transactions are somehow Transactions.
>
> Ok, let me understand: what are the differences in behaviour between
>
> select … ;
>
> and
>
> start transaction;
> select … ;
> commit;

It is possible that you are confused by the SQL syntax required. Or for the reason I mentioned, that is the freaky fantasy land of PissGriss, which is always in a “transaction” mode, wherein Transaction has been redefined.

The purpose of language and definition of terms is to communicate a concept in one word, instead of an explanatory sentence, each time. Things get very confused when people use a word but mean quite different things. Which is the standard practice of academics in this field: they use private definitions and re-definitions and re-framing. All disgustingly dishonest, and a demonstration, ready evidence, that they cannot communicate with the real world that they claim to be theorising about.

Thus I will use only real world terms, and established SQL terms. As they apply to genuine SQL platforms, noting that the freeware is not SQL compliant, and PigPoopGres is the worst, fraudulently declaring in its manuals its redefinition of SQL terms, as “SQL”. In order for this exchange to be resolved in less than two years, please use standard definitions when you use technical terms.

1.
In SQL/ACID compliant platforms, there is the concept of ISOLATION LEVEL. The command is SET TRANSACTION ISOLATION LEVEL. That means that Transactions and a Transaction context is maintained, and the concerns are interference or isolation from Transactions.

THAT DOES NOT MEAN THAT THE CONNECTION THAT EXECUTES
__ SET TRANSACTION ISOLATION LEVEL
IS A TRANSACTION

When you are walking on the footpath, you do so because you are aware of vehicles, and you do not want to be hit by one. That does not mean that you are a vehicle. It does not mean that you will encounter a vehicle on the footpath. Indeed, you are walking on the footpath to avoid vehicles.

2. In previous posts, you seem to understand that a Transaction is a block of SQL code that is commenced with:
__ BEGIN/START TRANSACTION
and concluded with:
__ COMMIT/ROLLBACK TRANSACTION

If you were using that concept in a loose way, or in case you have any doubt about what a Transaction is, this is the definition:
__ A Transaction is a block of SQL code that is commenced with:
____ BEGIN/START TRANSACTION
__ and concluded with:
____ COMMIT/ROLLBACK TRANSACTION

3. The corollary is this:
__ Any SQL code that is NOT bounded by BEGIN TRAN::COMMIT/ROLLBACK TRAN is not a Transaction.

Since SQL cannot force idiots to code the verbs that update the database within Transactions only, it permits INSERT/UPDATE/DELETE (which normal humans execute only within a Transaction) outside a Transaction. Much like sex within the context of a sanctified marriage vs sex outside one: it cannot be prevented, but it is illegal; immoral; and the children are crippled due to being outside wedlock.

SQL requires that INSERT/UPDATE/DELETE is executed at ISOLATION LEVEL 3/SERIALIZABLE.

In order to effect INSERT/UPDATE/DELETE without the formal BEGIN::COMMIT TRAN bracket, for the duration of the INSERT/UPDATE/DELETE, the SQL platform has to switch into ISOLATION LEVEL 3/SERIALIZABLE, and after the Statement completes, it switches back to whatever the ISOLATION LEVEL was.

4.1 Note that this does not prevent (a) Phantoms or (b) result set Anomalies (both of which are defined in the literature, and therefore are not defined or redefined here).

4.2 Re the central theme of this thread, which is the prevention of (c) Lost Updates, and (d) Lost Currency, which terms are not defined in the literature, I have defined them in this thread, and in the Transaction Sanity doc. Prevention of [c][d] is beyond SQL; beyond the definitions of ACID in freeware manuals; within the definition of ACID both historically, and in spirit and word. That is known as Optimistic Locking, which has a component in each database table, and a Formalised structure for the stored procedures that execute Transactions.

4.3 Generally, SELECT, both simple and complex, is used for reporting, that is, result sets consisting of multiple rows, from multiple tables, wherein [a][b] are relevant.

Generally, Transactions affect few rows in each table, and consist of a chain of rows, reflecting the data hierarchy. Of course, far more resources on the server are engaged, in particular, contention management resources, and ACID implementation resources (except in fantasy anti-SQL freeware).

4.4 Re your question ...
Because you have not specified an ISOLATION LEVEL, and because SQL has default ISOLATION LEVELS for each different context, before you started your example, the ISOLATION LEVEL is 1/READ COMMITTED.

Therefore:
> select … ;
means:

>>>> SQL
-- ISOLATION LEVEL 1/READ COMMITTED
-- Locks will be held for the duration of each Read operation (page or row)
-- Phantoms and result set Anomalies may occur

SELECT ...
<<<<

Note that that means the SELECT is not in a Transaction.

5.
Therefore:
> start transaction;
> select … ;
> commit;
means:

>>>> SQL
START TRAN
-- ISOLATION LEVEL 3/SERIALIZABLE
-- Locks will be held until end of Transaction, the COMMIT/ROLLBACK TRAN
-- Phantoms and result set Anomalies will not occur

SELECT ...
<<<<

> > One correction. LUW was the original CICS term, and it is still used
> > in IBM/DB2. But it means Transaction, not PoopDePoopGres
> > "transaction", not whatever you reframe it to mean. LUW means
> > a single database Transaction, controlled according to ACID.
>
> So, wrt to your Transaction Sanity example (latest revision),

Sorry. As you can see, it was purposely given in increments. Look at the date in the footer, which identifies the version.

I have updated again just now, to fix a cosmetic error.

> So, wrt to your Transaction Sanity example (latest revision),

I will assume page 7, because that is the solution I gave, Optimistic Locking.

> User 1 executes one LUW/Transaction, which starts at time T3.
> And User 2 executes one LUW/Transaction starting at time T4.

Yes.

> Besides, the SELECTs
> at T1 and T2, respectively, are not part of those Transactions. Is that
> right?

Yes.
As per the legend at the bottom of page 4.
Green is resident and executed in the client app.
Blue is resident and executed in the server, the Transaction stored proc.
The delay between [T1] and [T3] for User 1, and between [T2] and [T4] for User 2, cannot be controlled due to being user interaction.

> If you call the whole stored procedure a Transaction, how do you refer
> to each BEGIN TRAN… COMMIT block?

???

1. Differentiating Stored Procs.
Every man in a prison is a prisoner, a convict. Some of them are murderers. When I get them to sing as a group, I call “all”, when I get to the last verse, I call “just the murderers”.

Stored procs are used to perform myriad functions, not only Transactions.

Transaction stored procs are differentiated from other stored procs by the fact that only Transaction stored procs have (a) a formalised OLTP structure, (b) a BEGIN/START TRAN and a COMMIT/ROLLBACK TRAN, (c) do not return a result set, and (d) return a return-value.

Report stored procs are differentiated from other stored procs by the fact that only report stored procs create tempdb tables, and return a result set.

2. Differentiating Transactions.
The murderer is a murderer because he killed someone. All the convicts cannot be said to have killed someone. Nevertheless, all the men in prison are convicts. The distinction between a convict and a murderer is, the murderer killed someone.

> If you call the whole stored procedure a Transaction, how do you refer
> to each BEGIN TRAN… COMMIT block?

First, I did not call the whole stored proc a Transaction. I can’t explain what I did not say.

I called A stored proc that contains a Transaction a Transaction stored proc.

Second, there is not an “each”, by decree, there is only one BEGIN TRAN...COMMIT block in a Transaction stored proc.

Third, the Transaction stored proc is differentiated from other stored procs by the fact that it contains Transaction verbs, BEGIN/START TRAN and COMMIT/ROLLBACK TRAN. That does not imply that the entire code contained in the Transaction stored proc is a Transaction, in the same way that a Report stored proc does not imply that the entire code it contains produces the report. In programming, there is often a set-up block; a PREPARE block; a completion block; an error-handling block; etc.

Fourth, in a Transaction stored proc, the Transaction content is differentiated from the rest of the stored proc code by the BEGIN/START TRAN ... COMMIT/ROLLBACK TRAN bracket.

In a formalised stored proc (not only Transaction stored procs), the code is separated into formal code blocks, identified by a label.

In case it needs to be confirmed, SQL is a full, low-level, programming language.

>>>> SQL
LABEL: -- must not be a reserved word
__ ...
__ SELECT ...
__ IF ( @@ROWCOUNT != 1 )
____ GOTO ERR_HANDLE
__ ...

ERR_HANDLE:
__ ...
__ RAISERROR
__ ...
__ RETURN -9
<<<<

Fifth, in a FORMALISED OLTP Transaction stored proc, in addition the Transaction content being differentiated from the rest of the stored proc code by the BEGIN/START TRAN ... COMMIT/ROLLBACK TRAN bracket, the Transaction content is FURTHER differentiated by the use of formal code blocks and labels. The formal *Validate* and *Execute* blocks have been defined previously in this thread:
- the *Validate* code block does not contain Transaction control elements
- the *Execute* code block does contain Transaction control elements

>>>> SQL
VALIDATE:
__ ...
__ SELECT ...
__ IF ( @@ROWCOUNT != 1 )
____ GOTO ERR_HANDLE
__ ...

EXEC_UTE:
__ BEGIN TRAN
__ ...
__ COMMIT TRAN

ERR_HANDLE:
__ ...
__ RAISERROR ...
__ ROLLBACK TRAN
__ RETURN -9
<<<<
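By way of illustration only, fleshing out that skeleton with the Person table and an UpdatedDtm currency check (the proc name, parameters, and the CreditRatingCode column are assumptions for the sketch; refer to Daniel's GitHub Gist linked earlier in the thread for the authoritative Template):

>>>> SQL
CREATE PROC Person_Update_tr
    @PersonId         INT,
    @CreditRatingCode CHAR(2),
    @UpdatedDtm       DATETIME   -- currency obtained by the caller at read time
AS
DECLARE @CurrentDtm DATETIME

VALIDATE:
-- Read Committed: shared locks released per read operation, none held after
SELECT  @CurrentDtm = UpdatedDtm
    FROM Person
    WHERE PersonId = @PersonId
IF ( @@ROWCOUNT != 1 )
    GOTO ERR_HANDLE
IF ( @CurrentDtm != @UpdatedDtm )
    GOTO ERR_HANDLE              -- currency already lost: do not start

EXEC_UTE:
BEGIN TRAN
SELECT  @CurrentDtm = UpdatedDtm
    FROM Person WITH (UPDLOCK)   -- check again, acquiring the lock this time
    WHERE PersonId = @PersonId
IF ( @CurrentDtm != @UpdatedDtm )
    GOTO ERR_HANDLE              -- currency lost in the interim
UPDATE Person
    SET CreditRatingCode = @CreditRatingCode,
        UpdatedDtm       = GETDATE()
    WHERE PersonId = @PersonId
COMMIT TRAN
RETURN 0

ERR_HANDLE:
IF ( @@TRANCOUNT > 0 )
    ROLLBACK TRAN
RAISERROR ( 'Person_Update_tr: validation or currency failure', 16, 1 )
RETURN -9
<<<<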

Sixth, in the Transaction Sanity doc, page 7 Optimistic Locking, the formal OLTP structure for Transaction stored procs is given, in simple pseudo-code form:
- as stated on the page it is not the full Template
--- refer to Daniel’s GitHub link for the full Template
- due to space considerations the code block LABELS are not given
- instead code blocks are differentiated by:
--- separate rectangles
--- the first line of which is the code block name { Validate | Execute }, in bold, as an SQL comment
- GOTOs are avoided
- Isolation level commands are not necessary, but are given as comments for clarity

----

> If you call the whole stored procedure a Transaction, how do you refer
> to each BEGIN TRAN… COMMIT block?

In sum, in a Transaction stored proc, the Transaction is differentiated from the rest of the code by the BEGIN TRAN::COMMIT TRAN bracket, per the SQL keywords.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 4, 2021, 5:23:53 AM
Nicola

Noting that what you are used to, the PoopDePooGres "sql" is *not* SQL, and certainly *not* a programming language, but we have had SQL since IBM released it into the public domain. It is the Relational data sub-language defined by Codd. Of course, each SQL Platform supplier has extensions. In contrast the freeware has substitutions, and a whole pile of extensions that are irrelevant, that ensure that the code is not portable.

With a view to learning what actually SQL is, that it is a full programming language, and specifically what SAP/Sybase Adaptive Server Enterprise [ Transact-SQL ] is, please erase all your notions of SQL; ACID; Transactions that you have acquired, and start with a fresh and open mind. The obstacle to learning this is, as always, any attitude that you know the subject matter, and eg. you just need to learn the Sybase syntax. In particular, do not attempt to perform a task in Sybase in the pig poop way, find out how to do it in the normal commercial SQL way.

1. Visit this page
__ https://help.sap.com/viewer/product/SAP_ASE/16.0.4.0/en-US?task=whats_new_task
2. Select [ Download PDFs ] at top right
3. Choose the manuals you want, and download them. Read them from cover to cover. On the train or whatever.
4. I recommend the following, in order.
__ Installation & Upgrade Guide
__ Transact-SQL Users Guide
__ Reference/Building Blocks
__ Reference/Commands
__ Reference/Utility (especially isql)
__ Admin/System Admin Guide/Volume 1

Feel free to ask me questions.

Cheers
Derek

Nicola

Aug 4, 2021, 6:13:54 AM
On 2021-08-04, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> On Wednesday, 4 August 2021 at 06:05:09 UTC+10, Nicola wrote:
>> Ok, let me understand: what are the differences in behaviour between
>>
>> select … ;
>>
>> and
>>
>> start transaction;
>> select … ;
>> commit;
>
> Because you have not specified an ISOLATION LEVEL, and because SQL has
> default ISOLATION LEVELS for each different context, before you
> started your example, the ISOLATION LEVEL is 1/READ COMMITTED.
>
> Therefore:
>> select … ;
> means:
>
>>>>> SQL
> -- ISOLATION LEVEL 1/READ COMMITTED
> -- Locks will be held for the duration of each Read operation (page or row)
> -- Phantoms and result set Anomalies may occur
>
> Note that that means the SELECT is not in a Transaction.

Ok. For understanding: let us assume that locks are row-based. Consider
this instance:

Person
Name Age
--------
John 34
Bob 34

And this query:

select P1.Name, P2.Name
from Person P1, Person P2
where P1.Name <> P2.Name
and P1.Age = P2.Age;

Assume that the query is executed using a nested-loop join. Besides,
suppose that an update is concurrently performed:

update Person set Age = 35 where Name = 'John';

Is it correct that in this situation the query can produce at least
three possible results?

1. If the update is performed after the select, then the result is:

John, Bob
Bob, John

2. If the update is performed before the select, the result is empty.

3. But the update can happen *during* the select, in which case the
result may be:

John, Bob

This happens when at the first iteration of the outer loop (John, 34)
is locked (so the update is blocked), then the inner loop scans the
table and outputs (John,34, Bob,34), and only after that the update
takes place (in the meantime, the lock on (John,34) will have been
released). Hence, at the second iteration of the outer loop, the
inner scan will not find anything else to join with (Bob,34).

Is this right? Are locks under these circumstances actually just
latches?

You are correct that in a system such as PostgreSQL there is nothing
like that: the select would see a snapshot of the table at time it is
executed, and would not see any changes by concurrent commands.

https://www.postgresql.org/docs/current/tutorial-transactions.html

«PostgreSQL actually treats every SQL statement as being executed within
a transaction. If you do not issue a BEGIN command, then each individual
statement has an implicit BEGIN and (if successful) COMMIT wrapped
around it.»

>> start transaction;
>> select … ;
>> commit;
> means:
>
>>>>> SQL
> START TRAN
> -- ISOLATION LEVEL 3/SERIALIZABLE
> -- Locks will be held until end of Transaction, the COMMIT/ROLLBACK TRAN
> -- Phantoms and result set Anomalies will not occur

Hence, (3) is prevented.

Nicola

Derek Ignatius Asirvadem

Aug 4, 2021, 6:17:42 AM
> On Wednesday, 4 August 2021 at 10:15:03 UTC+10, Derek Ignatius Asirvadem wrote:
> > On Wednesday, 4 August 2021 at 06:05:09 UTC+10, Nicola wrote:
>
> > So, wrt to your Transaction Sanity example (latest revision),
> I will assume page 7, because that is the solution I gave, Optimistic Locking.
> > User 1 executes one LUW/Transaction, which starts at time T3.
> > And User 2 executes one LUW/Transaction starting at time T4.
>
> Yes.
>
> > Besides, the SELECTs
> > at T1 and T2, respectively, are not part of those Transactions. Is that
> > right?
>
> Yes.
> As per the legend at the bottom of page 4.
> Green is resident and executed in the client app.
> Blue is resident and executed in the server, the Transaction stored proc.
> The delay between [T1] and [T3] for User 1, and between [T2] and [T4] for User 2, cannot be controlled due to being user interaction.

If you wish to evaluate and contemplate the difference, eg. what would happen if the Transactions started at [T1] and [T2] respectively, that is the [Naïve Solution] given on page 5.

Note that it:
- breaks the [A]tomic property
--- (half in the client, the rest in the server Transaction stored proc,
--- or half in one code segment in the client, the rest somewhere else in the client)
- is absolutely prohibited in an OLTP environment because it holds locks during user interaction, which is a period that cannot be controlled.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 4, 2021, 8:05:01 AM
> On Wednesday, 4 August 2021 at 20:13:54 UTC+10, Nicola wrote:
> > On 2021-08-04, Derek Ignatius Asirvadem wrote:
> >> On Wednesday, 4 August 2021 at 06:05:09 UTC+10, Nicola wrote:
>
> >> Ok, let me understand: what are the differences in behaviour between
> >>
> >> select … ;
> >>
> >> and
> >>
> >> start transaction;
> >> select … ;
> >> commit;
> >
> > Because you have not specified an ISOLATION LEVEL, and because SQL has
> > default ISOLATION LEVELS for each different context, before you
> > started your example, the ISOLATION LEVEL is 1/READ COMMITTED.
> >
> > Therefore:
> >> select … ;
> > means:
> >
> >>>>> SQL
> > -- ISOLATION LEVEL 1/READ COMMITTED
> > -- Locks will be held for the duration of each Read operation (page or row)
> > -- Phantoms and result set Anomalies may occur
> >
> > Note that that means the SELECT is not in a Transaction.
>
> Ok. For understanding: let us assume that locks are row-based.

(Ok.
FYI. In Sybase the default for locks is Page, for both data pages and index pages. One can set to:
- data page (indices will use latches)
- data row)

> Consider
> this instance:
>
> Person
> Name Age
> --------
> John 34
> Bob 34
>
> And this query:
>
> select P1.Name, P2.Name
> from Person P1, Person P2
> where P1.Name <> P2.Name
> and P1.Age = P2.Age;

The example is not a relational table (no Key). At best it is a classroom exercise for record filing systems, and it depends on how good or bad the person who wrote the filing system was. What happens is irrelevant to an SQL context, or an ACID context, or an OLTP context. Thus it is not worthy of an answer.

Further, are you asking what an SQL Platform should do, or what Sybase does ?

Nevertheless, I will play along with you, answering for Sybase ASE.

> Assume that the query is executed using a nested-loop join.

Well, it can’t, there is no Key. It can do some other form of join, relevant to the Heap (no Key) structure.

But the important thing to understand here is, an “NLJ” (or other Join) in PoopGres is not at all comparable to an NLJ (or other Join) in Sybase or DB2 or MS.

> Besides,
> suppose that an update is concurrently performed:
>
> update Person set Age = 35 where Name = 'John';
>
> Is it correct that in this situation the query can produce at least
> three possible results?

No. Just two.

For your example, the level of locking and the join type make no difference at all. If your file consists of only those two records, they will be on the same page, and cached in RAM. If your file has a million records, and you have just picked two random records, the two pages will be in memory.

I will assume the default isolation level for the SELECT: READ COMMITTED. The SELECT will not compete with itself, for locks or for pages on disk or for pages in memory.

The SELECT will contend with the UPDATE only if you can manage to obtain microsecond precision across the two connections. That means, in 99.999999999% of the cases, there will be no contention.

If (a) you have the ability to execute microsecond precision across your two connections, AND (b) your precision is correct, you will get the same result on every attempt.

If (a) you have the ability to execute microsecond precision across your two connections, but (b) your precision is incorrect, you will get a different result very rarely, and only if you try it millions of times.

>>>>
In various benchmarks, on a genuine Relational database, using various table structures, many people have tried to produce contention, in this sort of way. It never succeeds, the effort is futile.

The way I run OLTP benchmarks is, I run 100 or 200 threads that perform a complex UPDATE, against 50 threads that perform a complex SELECT, on the same tables with 16 million rows each. TPC style. Contention is produced, but it is minuscule. If the UPDATE is in a proper OLTP Transaction structure, the contention is even less, due to the stored proc being compiled and Query-Planned (resource planned). The SELECT is automatically Query-Planned after the first execution (refer the Statement Cache).
<<<<

> 1. If the update is performed after the select, then the result is:
>
> John, Bob
> Bob, John
>
> 2. If the update is performed before the select, the result is empty.

Those are the only two possibilities. You will get them in a predictable sequence, qualified as per my comments above.

> 3. But the update can happen *during* the select, in which case the
> result may be:
>
> John, Bob

There is no such thing as “during”. The UPDATE runs at SERIALIZABLE, the SELECT runs at READ COMMITTED. Except for microsecond precision, which in any case only affects the sequence ( [1][2] xor [2][1] ), there is no contention to be had, the attempt does not, cannot, cause contention. The SELECT reads the UPDATED row XOR the SELECT reads the un-UPDATED row.

Hence, now, the join type is even less relevant.

> This happens when at the first iteration of the outer loop (John, 34)
> is locked (so the update is blocked), then the inner loop scans the
> table and outputs (John,34, Bob,34), and only after that the update
> takes place (in the meantime, the lock on (John,34) will have been
> released). Hence, at the second iteration of the outer loop, the
> inner scan will not find anything else to join with (Bob,34).
>
> Is this right?

No. Those are low-level concerns relevant to a mickey mouse file handling system written by idiots who have never kissed a girl. It is irrelevant in an SQL ACID context.

(Again, you do not know ACID, you think you know ACID from the anti-ACID redefinitions in academic papers and in PiggyGross. The answers to your recent questions are just /how does ACID handle this/, but you don’t realise it.)

> Are locks under these circumstances actually just
> latches?

No. They are row level locks.

Latches are for internal structures (in memory), which includes B-Tree non-leaf levels, and for changing the internal structure on a page (not the data; not the rows) only. The latter is rare, the former is only when the B-Tree is expanded (a level is added or a page is split).

> You are correct that in a system such as PostgreSQL there is nothing
> like that: the select would see a snapshot of the table at time it is
> executed, and would not see any changes by concurrent commands.

An ACID compliant SQL platform does the same.

The difference is, we don’t make a song and dance about it, and we certainly do not agonise over such low-level concerns.

> «PostgreSQL actually treats every SQL statement as being executed within
> a transaction.

Yes, I was trying to explain that to you. Many times.

>If you do not issue a BEGIN command, then each individual
> statement has an implicit BEGIN and (if successful) COMMIT wrapped
> around it.»

Which means, categorically, that it fails to comply with SQL, and it fails to comply with ACID. Because a SELECT is not a Transaction, it should be run at the SET isolation level, not at 3/SERIALIZABLE. Further, there is nothing to COMMIT.

I have been trying to tell you that too, many times. Thanks.

----

Basically, in your post you are telling me that your mickey mouse system does not do what it should not do. Ok.

And then you are trying to say that the low-level concerns that 10,000 mickey mouse developers have, in their herd of hundreds of programs that have no architecture and handle miserable o/s files, should be of concern to 200 PhD level engineers who eliminated those concerns via a server in 1984, a server which has no o/s files. No.

This is just one of the great differences between freeware (including Orable) and commercial SQL Platforms. You get what you pay for.

The problem is, consistently, the insane insist that the sane should have their insane problems. No, they are not real to us. They are real only in the land where people expend vast amounts of energy to make their fantasies real, physicalising their fantasies.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 4, 2021, 8:25:07 AM
> The problem is, consistently, the insane insist that the sane should have their insane problems. No, they are not real to us. They are real only in the land where people expend vast amounts of energy to make their fantasies real, physicalising their fantasies.

It is that, and actually, it is worse. You have the academics' Straw Man notion of what a server does; the Straw Man notion of what an Ordinary Lock Manager does (the academics' fantasy "2PL"); the Straw Man notion of the problems that might, just might, happen. In denial of FORTY YEARS of commercial systems that do not have such idiotic problems. I just burn them as you present them.

Over some course of time, as you progress across the great chasm, you will realise that all the academic notions about the real world, are Straw Men, they have the explicit purpose of making the real world look bad, which is the foundation for making their insane fantasies look good. You can burn them yourself.

Cheers
Derek

Nicola

Aug 4, 2021, 10:34:46 AM
On 2021-08-04, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> On Wednesday, 4 August 2021 at 20:13:54 UTC+10, Nicola wrote:
>> > On 2021-08-04, Derek Ignatius Asirvadem wrote:
>> >> On Wednesday, 4 August 2021 at 06:05:09 UTC+10, Nicola wrote:
>>
>> >> Ok, let me understand: what are the differences in behaviour between
>> >>
>> >> select … ;
>> >>
>> >> and
>> >>
>> >> start transaction;
>> >> select … ;
>> >> commit;
>> >
>> > Because you have not specified an ISOLATION LEVEL, and because SQL has
>> > default ISOLATION LEVELS for each different context, before you
>> > started your example, the ISOLATION LEVEL is 1/READ COMMITTED.
>> >
>> > Therefore:
>> >> select … ;
>> > means:
>> >
>> >>>>> SQL
>> > -- ISOLATION LEVEL 1/READ COMMITTED
>> > -- Locks will be held for the duration of each Read operation (page or row)
>> > -- Phantoms and result set Anomalies may occur
>> >
>> > Note that that means the SELECT is not in a Transaction.
>>
>> Ok. For understanding

I'll try to rephrase my question, which basically asks if you have
statement-level consistency of SELECT queries at READ COMMITTED in
Sybase.

A SELECT statement, which runs at READ COMMITTED, holds (let's say,
page) locks for the duration of each read operation, and not until the
end of the statement. Assume that the SELECT statement is a complex
query accessing many records over a non-negligible time-span (seconds),
and that it is run concurrently with several (serializable) update
operations on the same data.

(1) Is it possible (in Sybase) that the SELECT query returns an
incorrect result, where by "incorrect" I mean a result that is
impossible to obtain when the query is run in a serial context, that is,
in absence of any other concurrent operation?

(2) If the answer to (1) is negative, where does such a correctness
guarantee come from? If, on the contrary, the answer to (1) is
affirmative, can you please describe a situation in which that happens?

I'd say that the answer to (1) is affirmative. For instance, during
a (block) nested-loop join a certain number of pages must be read into
memory buffers in the outer loop. Later on, it may be necessary to evict
such pages from the memory buffers to make room to another block of
pages. Later on, it may be necessary to read the evicted pages again in
the inner loop (think of a join of a table with itself). What prevents
concurrent transactions from modifying those pages in between the two reads?

Nicola

Nicola

Aug 4, 2021, 10:46:30 AM
On 2021-08-04, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> > Because you have not specified an ISOLATION LEVEL, and because SQL has
>> > default ISOLATION LEVELS for each different context, before you
>> > started your example, the ISOLATION LEVEL is 1/READ COMMITTED.

Btw, why does the transaction isolation level even matter? A single
SELECT statement (not enclosed in BEGIN TRAN… COMMIT) is not
a transaction.

Nicola

Nicola

Aug 4, 2021, 5:40:10 PM
>> Exercise 18.11 (and its solution) from Silberschatz's "Database System
>> Concepts" seems relevant:
>>
>> https://www.db-book.com/db7/Practice-Exercises/PDF-practice-solu-dir/18.pdf
>
> No.
>
> They are stuck in the usual academic mindset, the examples are
> particularly about 2PL, which is an MV-non-CC artefact, not relevant
> to, or used in SQL OLTP Platforms, because we do not have 2PL. They
> do not understand or reference the "1PL" Lock Manager we have in the
> real world.
>
> We definitely do not "re-execute" a Transaction, the Validate block is
> not a [paraphrased] "first execution without holding locks". The
> notion of re-executing a Transaction [note the compute-intensive !] is
> stupid. The Validate block is a separate formal article.

What the exercise describes is a concurrency control protocol within the
DBMS. So, it's the system that "first execute[s] the transaction without
acquiring any locks…" and then "rerun[s] the transaction using 2PL".
That's transparent for the user.

Nicola

Derek Ignatius Asirvadem

Aug 4, 2021, 9:50:32 PM
I have already answered this in detail in this thread. Please read.

Further, it is clearly identified in my Transaction Sanity doc page 2.

> A SELECT statement, which runs at READ COMMITTED, holds (let's say,
> page) locks for the duration of each read operation, and not until the
> end of the statement. Assume that the SELECT statement is a complex
> query accessing many records over a non-negligible time-span (seconds),
> and that it is run concurrently with several (serializable) update
> operations on the same data.
>
> (1) Is it possible (in Sybase) that the SELECT query returns an
> incorrect result, where by "incorrect" I mean a result that is
> impossible to obtain when the query is run in a serial context, that is,
> in absence of any other concurrent operation?
>
> (2) If the answer to (1) is negative, where does such a correctness
> guarantee come from? If, on the contrary, the answer to (1) is
> affirmative, can you please describe a situation in which that happens?

a. You need to read up on what SQL/Isolation Level/REPEATABLE READ means.
b. Then form your question properly, using established specific technical terms.
c. Exclude personal or subjective definitions of "correct".

> I'd say that the answer to (1) is affirmative. For instance, during
> a (block) nested-loop join a certain number of pages must be read into
> memory buffers in the outer loop. Later on, it may be necessary to evict
> such pages from the memory buffers to make room to another block of
> pages. Later on, it may be necessary to read the evicted pages again in
> the inner loop (think of join of a table with itself). What prevents
> concurrent transactions to modify those pages in between the two reads?

Nice speculation.

This thread is about /Stored procedure structure in RDBMS using Lock Manager for transaction isolation/, and the real world of high end implementations. I don't have the inclination to argue speculation against speculation, endlessly.

Further, I have explained your repeated Straw Man arguments and dismissed them. I am dismissing this one as such.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 4, 2021, 10:01:21 PM
> On Thursday, 5 August 2021 at 00:46:30 UTC+10, Nicola wrote:
> > On 2021-08-04, Derek Ignatius Asirvadem wrote:
> >> > Because you have not specified an ISOLATION LEVEL, and because SQL has
> >> > default ISOLATION LEVELS for each different context, before you
> >> > started your example, the ISOLATION LEVEL is 1/READ COMMITTED.
> Btw, why does the transaction isolation level even matter? A single
> SELECT statement (not enclosed in BEGIN TRAN… COMMIT) is not
> a transaction.

???

> A single
> SELECT statement (not enclosed in BEGIN TRAN… COMMIT) is not
> a transaction.

1. I have laboured to inform you of that. You are on record stating the opposite. Thank God that you have finally got it.
2. If you are now arguing, you are arguing against yourself. If you are not arguing, good, the question is closed, and you have confirmed that you understand the question; the answer; and that it is closed.

> Btw, why does the transaction isolation level even matter?

1. It is funny, because in your other questions in this thread, you seem to understand that; you have the answer. But somehow you have lost that knowledge in order to ask this question. I don't have the qualifications that are required to assist you.
2. It is not I, but the SQL Committee that declares the requirement for [Transaction] Isolation Level.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 4, 2021, 10:10:17 PM
Yes, I know all that. It is pathetically idiotic. They understand Two Phased Commit (server protocol) and they are trying to apply it to Transactions. They are in the same category of ignorant drooling idiots as the freaks in TTM, trying to re-define and re-frame Transactions according to their hysterical fantasies.

I was not giving a full review of the book, I just identified a couple of their idiocies.

This thread is about OLTP Transactions, objective truth that has not changed, since 1965. It is not about the hysterical speculations of academics, who as evidenced know nothing about implementations, and declare their isolation from the real world as an elitist badge. You keep bringing that filth in, I keep throwing it out. You can stop any time, the result will be the same.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 5, 2021, 1:19:00 AM
> On Thursday, 5 August 2021 at 11:50:32 UTC+10, Derek Ignatius Asirvadem wrote:
> > On Thursday, 5 August 2021 at 00:34:46 UTC+10, Nicola wrote:

> This thread is about /Stored procedure structure in RDBMS using Lock Manager for transaction isolation/, and the real world of high end implementations. I don't have the inclination to argue speculation against speculation, endlessly.

> > >> >> Ok, let me understand: what are the differences in behaviour between

> >> Ok. For understanding

> > I'll try to rephrase my question, which basically asks if you have
> > statement-level consistency of SELECT queries at READ COMMITTED in
> > Sybase.
>
> I have already answered this in detail in this thread. Please read.

With OP's stated goal in mind, when you read the thread, notice how much effort Daniel put into the thread, and that he achieved completion; closure; resolution. Notice how much you put into it, and whether that effort progressed in the direction of the goal.

Further, Daniel will remember the increments in his effort, and the final Template. Will anyone remember the drawn out back-and-forth that achieved precisely nothing ?

Ok, I grant that the constant questioning and argument from the peanut gallery is a programmed response from academia, the classic erection of Straw Man arguments, which are speculations about things that they do not know; that they do not understand. So yes, that was an achievement. Albeit a perverse one, because it had nothing to do with the goal, and everything to do with perverting it.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 6, 2021, 4:20:09 AM
Nicola
First, your questions
__ are classic Straw Man arguments, in that academics are clueless about the real world; the industry; which is confirmed by their declaration that they are divorced from implementation concerns, and which is evidenced in every instance, such as this thread,
__ eg. locking vis-a-vis Transactions, let alone how a commercial SQL Platform handles locking and Transactions,
__ so they think that the problems THEY HAVE, such as in their pathetic "2PL", which is a fat band-aid on top of their schizophrenic MV-non-CC
__ are very very real, due to
____ a. their isolation from reality, their comfort in their tiled cells, their universe of discourse
____ b. being UNSKILLED AND UNAWARE (please read some real science)
____ c. hundreds of articles circle-jerking about their fantasies, which are passed off as academic papers
____ d. books that celebrate that filth, passed off as textbooks
__ leave you in that silly place, where you think the sane have the problems that the insane have, and worse, you try to impose those insane problems on the sane.

No.

1. The sane do not have those insane problems
2. And we do not have to answer how and why we do not have those problems
__ If we do, we confirm that those problems are real, and degenerate into insanity
__ and fail to do the thing that is begging: to raise the insane out of their insanity.

Thus it is for you to put your insane problems, your Straw Man questions aside, step out of the asylum, and find out how we do things in the real world, such that we simply do not have those insane problems. Again, I commend you for being the only academic to do so. But your progress, over the last ten years of interaction with me, is very slow. Keep going.

Of course, the best thing to do, is install SAP/Sybase ASE and obtain some experience working with serious databases (not classroom exercises); locking; and Transactions. Find out what undamaged, un-perverted humans have been doing during your fifty years of isolation from reality. You have started that. Excellent.
__ In that regard, I am happy to assist. Ask me a direct question, and I will give you direct advice.
__ No open-ended question, no discussion
____ just look at the long-drawn out discussion that I have entertained, this thread; the IDEF1X/Incomplete Specialisation; etc

That will take ten years or so. And, as evidenced, good, considered advice is something that you dismiss immediately; you want to do things your way. In the meantime, as evidenced, you will continue with your Straw Man questions. Please do the following at minimum:

1. Read the Transact-SQL Users Guide
____ at least ch 20 Transactions: Maintaining Data Consistency and Recovery

2. Then read my guide to the Lock Manager
____ https://www.softwaregems.com.au/Documents/Article/Sybase%20Lock%20Manager/Sybase%20Lock%20Manager.pdf
All my Sybase docs are condensed, intended for Sybase DBAs. I have just updated it, and added a bit of detail, improved the clarity, so as to be relevant for novices.

Remember, this is a serious Lock Manager, not comparable to your 2PL filth, which has to be asserted because you guys position commercial SQL Platform Lock Managers as your "2PL" filth, and insist that we have your insane problems. It is so mature and secure, so brilliant in architecture, that it has not changed since 1984. Extended, yes (eg. to handle new Lock Types to support SAP files, eg. add row locks; etc), but changed, no. So read these docs with a fresh mind, and do not take your academic baggage with you.

Cheers
Derek

Nicola

Aug 8, 2021, 9:52:04 AM
On 2021-08-06, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
> 1. Read the Transact-SQL Users Guide
> ____ at least ch 20 Transactions: Maintaining Data Consistency and Recovery
>
> 2. Then read my guide to the Lock Manager
> ____ https://www.softwaregems.com.au/Documents/Article/Sybase%20Lock%20Manager/Sybase%20Lock%20Manager.pdf
> All my Sybase docs are condensed, intended for Sybase DBAs. I have
> just updated it, and added a bit of detail, imp[roved the clarity, so
> as to be relevant for novices.
>
> Remember, this is a serious Lock Manager, not comparable to your 2PL
> filth, which has to be asserted because you guys position commercial
> SQL Platform Lock Managers as your "2PL" filth, and insist that we
> have your insane problems. It is so mature and secure, so brilliant
> in architecture, that it has not changed since 1984. Extended, yes
> (eg. to handle new Lock Types to support SAP files, eg. add row locks;
> etc), but changed, no. So read these docs with a fresh mind, to not
> take your academic baggage with you.

I have read a couple of documents (the T-SQL guide and the Locking and
Concurrency Control manual). My takeaways:

- yes, it is a serious lock manager (I did not expect anything less from
a high-end commercial product).

- I did not find any concept that you would not find in a database
systems' textbook (and no, the Alice book is not such a textbook).

- Call it what you like, but ASE/Sybase uses what is known as "rigorous
2PL" to implement repeatable read and serializable:

https://help.sap.com/viewer/a08646e0736e4b4b968705079db4c5f5/16.0.3.7/en-US/a8ea3fd4bc2b1014b6569e800f6bba42.html.

"Applying exclusive locks [...] until the end of the transaction.
Applying shared locks [...] until the end of the transaction".
Textbook definition of rigorous 2PL.

- It uses index-locking to prevent phantoms. Again, no surprise and
pretty much standard textbook material.

- There is a section dealing exactly with the question I have posed:
"Locking for Select Queries at Isolation Level 1"

https://help.sap.com/viewer/a08646e0736e4b4b968705079db4c5f5/16.0.3.7/en-US/a8eb04a3bc2b1014bef8884d8400b0ab.html

Btw, with a mention of how that affects joins. That, plus this (which
is about SQL Server and has some inaccuracies, but overall I think it
is relevant):

https://sqlperformance.com/2014/04/t-sql-queries/the-read-committed-isolation-level

makes me conclude that in general you do not have statement-level
consistency at read committed in SQL Server or ASE.

ASE is a fine implementation (*), based on concepts that have been very
well known in the academic community for a long time (not to say that
they are obsolete! On the contrary!). What's not in the textbooks is the
specific implementation details and system-dependent guidelines that
a manual is expected to provide. Granted, the devil's in the details.
But good graduate students would have no problems grasping such details
(or those of any other system), capitalizing on their academic baggage.

Nicola

(*) Known to the academics. E.g., some time in the '90s, Sybase was used
for lab exercises at Stanford.

Derek Ignatius Asirvadem

Aug 10, 2021, 5:10:59 AM
Nicola

> On Sunday, 8 August 2021 at 23:52:04 UTC+10, Nicola wrote:

In order to reduce the length of the answer, which yet again has to be an explanation for you, please read this newspaper article. Explain in one or two sentences, what is the most important item that you (as a teaching professor) can glean from the article. In case it needs to be said, it is not a trick question, it will indicate to me just how much I have to explain.

__ https://www.dailymail.co.uk/tvshowbiz/article-9876165/Naked-Afraid-viewers-spot-man-rifle-protecting-contestants-Africa.html

Cheers
Derek

Nicola

Aug 10, 2021, 4:14:10 PM
On 2021-08-10, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
> Nicola
>
>> On Sunday, 8 August 2021 at 23:52:04 UTC+10, Nicola wrote:
>
> In order to reduce the length of the answer, which yet again has to be
> an explanation for you, please read this newspaper article. Explain
> in one or two sentences, what is the most important item that you (as
> a teaching professor) can glean from the article.

To me that stuff reads as fake as the fake it claims to depict.

Nicola

Derek Ignatius Asirvadem

Aug 10, 2021, 7:12:46 PM
Nicola

> On Wednesday, 11 August 2021 at 06:14:10 UTC+10, Nicola wrote:
> > On 2021-08-10, Derek Ignatius Asirvadem wrote:
> > Nicola
> >
> >> On Sunday, 8 August 2021 at 23:52:04 UTC+10, Nicola wrote:
> >
> > In order to reduce the length of the answer, which yet again has to be
> > an explanation for you, please read this newspaper article. Explain
> > in one or two sentences, what is the most important item that you (as
> > a teaching professor) can glean from the article.
>
> To me that stuff reads as fake as the fake it claims to depict.

Yes, of course the tv show is fake.
Yes, of course the concept is fake (Big Brother, "progressed" 15 years).
That is obvious.

That aside, did you glean anything of value from the article ?

Cheers
Derek

Derek Ignatius Asirvadem

Aug 10, 2021, 7:19:10 PM
Nicola

> On Wednesday, 11 August 2021 at 06:14:10 UTC+10, Nicola wrote:
> > On 2021-08-10, Derek Ignatius Asirvadem wrote:
> >> On Sunday, 8 August 2021 at 23:52:04 UTC+10, Nicola wrote:
> >
> > In order to reduce the length of the answer, which yet again has to be
> > an explanation for you, please read this newspaper article. Explain
> > in one or two sentences, what is the most important item that you (as
> > a teaching professor) can glean from the article.

Another simple question, clarifying only, in order to reduce my labours.

Given your detailed question, AND the example you have cited, at what point in time, do you suggest that the COUNT() would be correct ? I am not asking for a long answer here, just clarifying your query. A few words would be enough.

Cheers
Derek

Nicola

Aug 11, 2021, 4:45:32 AM
No.

Nicola

Nicola

Aug 11, 2021, 5:14:58 AM
Count() would return the correct result if and only if the returned
value is among the values that some serial execution of the same set of
committed transactions would have returned.

If transaction T1 is scanning a table to count its rows, and
concurrently transaction T2 removes one row and adds two, the only
values T1 should be able to compute are N (the number of rows in the
table before T1 and T2 start executing) and N+1. That is because
there are two possible serial executions:

- T1;T2, in which case T1 would count N rows;
- T2;T1, in which case T1 would count N+1 rows.

I wouldn't talk about some "point in time when count() is correct".
Logically, the count is (should be) an atomic (so, logically
instantaneous) operation, so whenever T1 commits, that's when the count
becomes visible, and its result should be correct at that point.

Read committed permits an execution in which:

1. T1 starts scanning the table, counting the row that T2 will delete;
2. T2 executes and commits;
3. T1 keeps scanning the table, now counting the rows that T2 has added.

Then, T1 would return N+2, where the table never had N+2 records.
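
That interleaving can be sketched as two concurrent sessions against the Person table from the earlier example (the row values are hypothetical; whether the anomaly is actually observable depends on timing and lock granularity):

```sql
-- Session 1, READ COMMITTED: shared locks are released as each
-- page/row is read, not held to the end of the statement.
SELECT COUNT(*) FROM Person     -- begins scanning; counts 'John'

-- Session 2: runs and commits while Session 1 is mid-scan.
BEGIN TRAN
    DELETE Person WHERE Name = 'John'             -- already counted
    INSERT Person (Name, Age) VALUES ('Sue', 21)  -- not yet reached
    INSERT Person (Name, Age) VALUES ('Tom', 45)  -- not yet reached
COMMIT TRAN

-- Session 1 completes the scan and also counts 'Sue' and 'Tom':
-- its result is N+2, a count the table never held at any instant.
```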

Nicola

Derek Ignatius Asirvadem

Aug 11, 2021, 9:43:03 AM
Nicola

> On Wednesday, 11 August 2021 at 19:14:58 UTC+10, Nicola wrote:
> > On 2021-08-10, Derek Ignatius Asirvadem wrote:
> >
> >> On Wednesday, 11 August 2021 at 06:14:10 UTC+10, Nicola wrote:
> >> > On 2021-08-10, Derek Ignatius Asirvadem wrote:
> >> >> On Sunday, 8 August 2021 at 23:52:04 UTC+10, Nicola wrote:
> >> >
> >> > In order to reduce the length of the answer, which yet again has to be
> >> > an explanation for you, please read this newspaper article. Explain
> >> > in one or two sentences, what it the most important item that you (as
> >> > a teaching professor) can glean from the article.
> >
> > Another simple question, clarifying only, in order to reduce my labours.
> >
> > Given your detailed question, AND the example you have cited, at what
> > point in time, do you suggest that the COUNT() would be correct
> > ? I am not asking for a long answer here, just clarifying your query.
> > A few words would be enough.

> If transaction T1 is scanning a table to count its rows, and
> concurrently transaction T2 removes one row and adds two, the only
> values T1 should be able to compute are N (the number of rows in the
> table before the T1 and T2 starts executing) and N+1. That is because
> there are two possible serial executions:

False. We are not serialised.

----

> I wouldn't talk about some "point in time when count() is correct".

Well, you are implying it. And the drooling imbecile that you cite (science doesn’t need citations; fantasy does) makes it quite explicit. You can’t have it both ways.

We are discussing READ COMMITTED, therefore T1 is definitely not a Transaction. Your naming it as such will confuse you. By the Grace of God, your fraudulent use of technical terms does not confuse me. This naming of non-transactions as “transactions” is a particular fixation of yours.

I will continue the exchange on the basis of the scenario you have described, which is [S1] executing a COUNT, on an online shared database, and [T2] deleting 1 row and inserting 2 rows in the same table, concurrently, meaning that [T2] happens at a point in time that is after the [S1] first read and before the [S1] last read.

Note, that if [T1] was a Transaction, then it would be SERIALIZABLE, and the desperately fantasised “problem” would not occur.

Note, in Sybase; DB2; MS, you can alternatively NOT open a Transaction, but instead, SELECT ... AT ISOLATION REPEATABLE READ, and your dearly beloved fantasy of a “problem” would not occur. Which of course, the freeware herds cannot.
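To make the distinction concrete, here is a toy sketch in Python (all names and numbers are invented; a model of the lock behaviour, not of any real server): when the reader HOLDS its shared locks until the scan completes, the conflicting writer simply waits, and the count is exact as of the scan.

```python
# Toy model: a reader that HOLDS its shared locks until the scan ends
# (REPEATABLE READ behaviour), so a conflicting writer must wait.
# All names and values are invented; a sketch, not a real Lock Manager.

class ToyLockManager:
    def __init__(self):
        self.shared = set()    # pages share-locked (and held) by the reader
        self.blocked = []      # writes queued behind the shared locks

    def request_write(self, pages, fn):
        if self.shared:        # exclusive conflicts with any held shared lock
            self.blocked.append(fn)   # writer waits
        else:
            fn(pages)

    def release_all(self, pages):
        self.shared.clear()
        for fn in self.blocked:       # waiting writers proceed now
            fn(pages)
        self.blocked.clear()

def count_at_repeatable_read(pages, lm, writer_step, write_fn):
    total = 0
    for i in range(len(pages)):
        lm.shared.add(i)              # shared lock acquired and HELD
        if i == writer_step:
            lm.request_write(pages, write_fn)  # writer arrives mid-scan
        total += pages[i]
    lm.release_all(pages)             # locks released at the end
    return total

pages = [10, 10, 10, 10]              # N = 40 rows before the scan

def writer(p):
    p[0] -= 1                         # delete 1 row
    p[3] += 2                         # insert 2 rows

lm = ToyLockManager()
n = count_at_repeatable_read(pages, lm, 1, writer)
print(n, sum(pages))                  # 40 41: the scan saw N; the write landed after
```

Holding the shared locks for the duration of the read is the whole difference here; the same scan that releases each lock immediately would let the writer in mid-scan.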

----

> Logically, the count is (should be) an atomic (so, logically
> instantaneous) operation,

On which planet or multiverse or pubertal fantasy does that happen ? Rhetorical. Don’t answer that. Please.

Get real. That is an idiotic proposition. In reality, there is no such thing. And please, we do not need to hear about what your perception of “logic” is.
_ in the real world, SQL and ACID, count is not atomic or “atomic”. Count is count.
___ if you want atomic, you have to open a Transaction
___ you have not opened a Transaction
___ therefore, in the real world, your proposition is stupid, fantasy, in contradiction of the real world
_ in the real world, nothing is instantaneous.
___ reading a large table is definitely not instantaneous
___ (I accept, that in the asylum, the knuckle-dragging freaks fantasise, that the girls whom they repulse desire them; that the non-instantaneous is “instantaneous”; that the moon is made of cream cheese; etc. They are legally said to be not responsible, because they cannot handle reality.)
___ you cannot transform something that is physically non-instantaneous into “instantaneous”, with or without a note from your mother, with or without good drugs or happy gas
___ (you can, if and only if you are in the asylum, and if and only if you have taken your medication)
___ it is not “logically instantaneous”, the proposition is logically absurd, because it is in denial (schizophrenic) of reality, that AS DESCRIBED BY YOU, the table is in an online shared database, AND it is being changed
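A trivial Python sketch of that point (the page names and the “clock” are invented): a scan is a sequence of page reads, each at its own instant, never one instant.

```python
# Toy sketch: a table scan is a sequence of page reads, each occurring at
# its own instant; there is no single point in time for the whole scan.
# Page names and the "clock" are invented for illustration.

clock = 0

def read_page(page):
    global clock
    clock += 1                # time advances with every page read
    return page, clock

pages = ["p1", "p2", "p3", "p4"]
reads = [read_page(p) for p in pages]
times = [t for _, t in reads]
print(times)                  # [1, 2, 3, 4]: four distinct instants, not one
```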

----

> so whenever T1 commits, that's when the count
> becomes visible, and its result should be correct at that point.

1.
Corrected as defined above:
> so whenever [S1] commits, that's when the count
> becomes visible, and its result should be correct at that point.

Corrected further:
Since we are discussing READ COMMITTED, and [S1] is not a Transaction, it cannot COMMIT (you can, but it does nothing, it only satisfies your fixation on your fantasy).

REPEATING
>>>
Note, that if [T1] was a Transaction, then it would be SERIALIZABLE, and the desperately fantasised “problem” would not occur.

Note, in Sybase; DB2; MS, you can alternatively NOT open a Transaction, but instead, SELECT ... AT ISOLATION REPEATABLE READ, and your dearly beloved fantasy of a “problem” would not occur. Which of course, the freeware herds cannot.
<<<<<

2.
There you go, stating a point in time, while rejecting the notion of a point in time.
You said:
> I wouldn't talk about some "point in time when count() is correct".
Now you are saying:
> its result should be correct at that point.

Look, I am happy to help, but I really do not have the qualifications that are required for the kind of help that you need.

This is a public newsgroup. It might be a good idea to stay away from computers. You will be so much more successful at running a pig farm, and so much happier too.

----

> Read committed permits an execution in which:
>
> 1. T1 starts scanning the table, counting the row that T2 will delete;
Corrected:
> 1. [S1] starts scanning the table, counting the row that T2 will delete;

a.
You are confusing the present time (while rejecting a point in time !!!) with some event, at a specific point in time, which point in time is before the point in time that [S1] completes.

b.
There is no reason (Logic) why [S1] should know about or care about what [T2] is doing.

c. Corrected further:
> 1. [S1] starts scanning the table, counting the rows, oblivious to other activity, by virtue that it declaratively runs at READ COMMITTED.

REPEATING
>>>
Note, that if [T1] was a Transaction, then it would be SERIALIZABLE, and the desperately fantasised “problem” would not occur.

Note, in Sybase; DB2; MS, you can alternatively NOT open a Transaction, but instead, SELECT ... AT ISOLATION REPEATABLE READ, and your dearly beloved fantasy of a “problem” would not occur. Which of course, the freeware herds cannot.
<<<<<

----

> 2. T2 executes and commits;
> 3. T1 keeps scanning the table, now counting the rows that T2 has added.
Corrected:
> 3. [S1] keeps scanning the table, without regard to other activity

Which somehow is “incorrect” to you. Only in some sick fantasy.

> Then, T1 would return N+2, where the table never had N+2 records.

Which point in time is that “never”, given your rejection of a point in time ?

By the time you read the result set (one row with the count), the table count would have changed. By the time you execute the SELECT again, the table count would have changed yet again.

I would say that the point at which “never” occurred, never occurred, but you are incapable of understanding that (not the English, your English is great, but the meaning).

Corrected
> Then, [S1] would return N+2

So freaking what. It is a table COUNT at READ COMMITTED, in an online shared database, on a table that is known to have concurrent activity.

----

> Count() would return the correct result if and only if the returned
> value is among the values that some serial execution of the same set of
> committed transactions would have returned.

I reject that as a definition, and Sybase; DB2; MS reject it as well.
Why ?
Because we do not need an idiotic definition from the asylum.
Because we already have a definition in (a) SQL, and (b) ACID, and have had, for FORTY YEARS.
Wherein COUNT at READ COMMITTED works perfectly.

Before the freaks in the asylum concocted their fantasies, according to the only thing they can understand, which is single-threaded (“serialised”) operation in the herd of bloated, flatulent programs (“server”), executing on a “snapshot” of the database, which is by definition a fantasy, that never existed.

> - T1;T2, in which case T1 would count N rows;
> - T2;T1, in which case T2 would count N+1 rows.
Corrected:
> - [S1];[T2], in which case [S1] would count N rows;
> - [T2];[S1], in which case [S1] would count N+1 rows.

Nonsense.

You contradict yourself.

You said S1 and T2 ran concurrently. Now you are saying that according to your fantasy (not the reality of S1 & T2 running concurrently, which you said you understood), about how a sow excretes its poop while you are under it, you can only understand single-threaded execution (“serialised” !!!) with Statements that are not atoms by definition, magically transformed into “atoms”.

You can’t make this stuff up, you have to eat pig poop.

Corrected further:
In the real world, that is SQL; ACID, and concurrent processes (which you seem to understand from your fantasy in the left hemisphere, but deny in the fantasy in your right hemisphere),

- [S1] and [T2] run concurrently
- [S1] is not a Transaction, and thus runs at isolation READ COMMITTED
- [T2] is a Transaction, and thus runs at isolation SERIALIZABLE
- at some point in time, before [S1] completes, [T2] completes
- [S1] produces a COUNT at READ COMMITTED
- because [S1] is not isolated from other activity, by declaration, the COUNT produced by [S1] is not isolated from other activity

Therefore, only certified and straitjacketed morons would attempt to relate the unrelated; to isolate the not-isolated
- because it can’t be done in reality,
- but can be done in the imagination, which is made easier by the heavy obsessive usage, due to isolation from reality.
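As a toy model of the sequence above (Python, invented values; not ASE): at READ COMMITTED the shared lock on each page is acquired and released immediately, so [T2] can commit mid-scan, and the resulting COUNT simply reflects the activity it was never isolated from.

```python
# Toy model of a READ COMMITTED scan: the shared lock on each page is
# acquired and released immediately, so a concurrent transaction can
# commit between pages. Invented names and values; a sketch only.

def read_committed_count(pages, writer_step, write):
    total = 0
    for i in range(len(pages)):
        # shared lock on page i acquired here ...
        total += pages[i]
        # ... and released here, before moving on; nothing blocks
        if i == writer_step:
            write(pages)          # [T2] commits mid-scan
    return total

pages = [10, 10, 10, 10]          # N = 40 when [S1] starts

def t2(p):
    p[0] -= 1                     # delete one row (page already counted)
    p[3] += 2                     # insert two rows (page not yet counted)

n = read_committed_count(pages, writer_step=1, write=t2)
print(n)                          # 42: a READ COMMITTED count, nothing more
```

Starting from N = 40, the scan returns 42: not a defect, just a COUNT that was declared, by its isolation level, to be not isolated from concurrent activity.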

There are quite a few pig farms within 100km of Udine, that are available at very reasonable prices.

----

> On Sunday, 8 August 2021 at 23:52:04 UTC+10, Nicola wrote:
> > On 2021-08-06, Derek Ignatius Asirvadem wrote:

> Btw, with a mention of how that affects joins. That, plus this (which
> is about SQL Server and has some inaccuracies, but overall I think it
> is relevant):
>
> https://sqlperformance.com/2014/04/t-sql-queries/the-read-committed-isolation-level
>
> makes me conclude that in general you do not have statement-level
> consistency at read committed in SQL Server or ASE.

That point is now proved false, hysterically stupid, it is now closed.

As I have proved, your cranium is infected with porcine excreta, and your left hemisphere is at war with your right hemisphere. You are in danger of a stroke.

Try eating rice, it has good binding properties, it will reduce the diarrhoea. Get some sleep in a dry bed, the warm furry sows are not good for you.

Do not code in SQL in any way, shape or form. As per the various “problems” you raise, you are right, typical of academics, you have really really really good “reasons” why SQL and ACID do not work, and thus you have a good “excuse” for never actually writing code. Excellent. Just fantasise.

----

> But good graduate students would have no problems grasping such details
> (or those of any other system), capitalizing on their academic baggage.

Yes, you have proved that quite well, yet again, that it is in fact indoctrination into schizophrenia, passed off as “education”. Thank you.

They have been nicely indoctrinated into erecting fantasies about why what they were hired to do, cannot be done, and never actually producing anything. As is the common experience in the real world.

----

Re your citation friend Paul White. First, yes of course, he is in the same filthy pig sty as you academics, because he has succumbed to your heavily marketed pig poop, and he erects the same fantasies, and confuses fantasy for reality. So again, he is labouring, slavishly, on a fantasy treated as real. The imbecile is so caught up in fantasy, he too, cannot remember the difference between READ COMMITTED and REPEATABLE READ, and drunk on the Kool-Aid, he mixes them up.

But there is an important second point: anyone who has even pedestrian knowledge of SQL on a genuine SQL Platform would know that [separate to the count changing constantly because the table is active] that is not a “problem”, and that that is not the way to obtain a COUNT on a large table. He proves by his own actions that he is not the performance expert that he declares himself to be, that he is in fact a performance dummy.

----

> On Sunday, 8 August 2021 at 23:52:04 UTC+10, Nicola wrote:

I will respond to the rest of that post later.

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Aug 11, 2021, 10:37:49 PM8/11/21
to
Nicola

> On Wednesday, 11 August 2021 at 18:45:32 UTC+10, Nicola wrote:
> > On 2021-08-10, Derek Ignatius Asirvadem wrote:
> >> On Wednesday, 11 August 2021 at 06:14:10 UTC+10, Nicola wrote:
> >> > On 2021-08-10, Derek Ignatius Asirvadem wrote:
> >> > Nicola
> >> >
> >> >> On Sunday, 8 August 2021 at 23:52:04 UTC+10, Nicola wrote:
> >> >
> >> > In order to reduce the length of the answer, which yet again has to be
> >> > an explanation for you, please read this newspaper article. Explain
> >> > in one or two sentences, what is the most important item that you (as
> >> > a teaching professor) can glean from the article.
> >>
> >> To me that stuff reads as fake as the fake it claims to depict.
> >
> > Yes, of course the tv show is fake.
> > Yes, of course the concept is fake (Big Brother, "progressed" 15 years).
> > That is obvious.
> >
> > That aside, did you glean anything of value from the article ?
>
> No.

__ https://www.dailymail.co.uk/tvshowbiz/article-9876165/Naked-Afraid-viewers-spot-man-rifle-protecting-contestants-Africa.html

The main thing to glean from this article is this. Look at the comments. From the evidence, there exist people who actually think that the obviously fake “reality” shows, on a machine that reproduces coloured pixels on a screen, from a film that captures paid actors doing some activity, are “real”. And they are heart-broken to have their fantasy punctured.

Why is that ? Why is it that humans who are capable enough to write comments in English on the twitter-verse (for twits), have the stupefying thought that fantasy is “real”.

Because at this time, we have had decades (centuries, if you include the sabotaged philosophy and ape-man “science”) of people in authority erecting fantasies, and then treating them as real. This includes academics, who make a “science” out of fantasising. This includes all university students who are taught the art of fantasising, as “science”, by their professors.

A few years ago, I went back to uni and did two semesters, in order to determine for myself why our new hires were schizophrenic, why they give really really really good reasons for NOT doing the job they were hired to do, which they said they were very good at. The evidence is, they teach schizophrenic thinking as “normal”, as an art, as “science”.

Science is about knowledge, knowledge of reality.

Fantasy occurs in the imagination, not in reality. Fantasy is anti-science. But they have made a “science” out of it.

What we are dealing with here is the COLLECTIVE fantasy. That is, “taught” and maintained and believed by a bunch of asylum dwellers, and therefore they re-inforce each other, and therefore the fantasy is made very very very “real”. Same as a male fantasising that he is female, which is in and of itself evidence of insanity, but made acceptable because many degenerates have the same fantasy, a COLLECTIVE fantasy. And now weaponised to make war on humans.

----

Here, you are the academic, pushing a ridiculous fantasy, as “science”, citing idiotic circumstances in the tiny corners of the Bell curve, that do not apply (or apply only to idiots who are ignorant of the commands that eliminate the ”problem”).

The Stonebraker religion (make no mistake, it is a de facto religion, faith-based, not science-based) is that in a multiverse somewhere, there is a special place for special-needs people, where:
1. everyone recites a mantra “readers do not block writers, writers do not block readers”, prayerfully, without ceasing
2. denies the reality that the database is an online shared resource that is changing all the time due to activity of hundreds of users
__ (that therefore requires concurrency control in order to provide integrity)
3. erects a fantasy that the database is a ”snapshot”
__ (which is physically impossible, because as Daniel has pointed out, by the time the reader has passed the first few pages, those pages are out-of-date, obsolete, and no longer in the snapshot that the snapshot is supposed to be)
____ SAME AS YOUR POINT, RE YOUR PATHETIC CITED EXAMPLE
____ OF COUNT NOT PERFORMING AS THE FANTASY “COUNT”
____ YOU CAN SEE THAT IT IS “TRUE” FOR THE FANTASY
____ BUT YOU CAN’T SEE THAT IT IS TRUE FOR REALITY
____ IT IS CULTIVATED SCHIZOPHRENIA
4. taken at a point in time
__ (which you /now/ say is not possible, “never happened”, but evidently you cannot see that it is a general rule about the ”snapshot”).
5. Which of course is a grand lie, because they themselves have to implement a primitive 2PL (not the “2PL” which is used to attack 1PL), in order to control concurrency, which they say is controlled in their mythological “MVCC” but is not. That breaks their own mantra: readers block writers sometimes, writers block readers sometimes, and they have to deal with it.

Anyone with half a brain would realise that if they implemented a proper Lock Manager, they could eliminate the massive burden of making the fantasy “real”, the storage of masses of versions. Which the commercial SQL Platforms have done for FORTY YEARS. But the evidence is, they do not have even half a brain, instead they repeat their mantra, prayerfully, without ceasing.

By the way, we do not have “read locks” and “write locks”, the notion is too primitive, it exists only in primitive 2PL that is used to make MV-non-CC work, even just a little. Again, not comparable to a real Lock Manager.

----

The only true snapshot is a single frame, as in a hardcopy photograph, or a JPEG file. “See, that was the moment in time that my wife and I sipped a cocktail on the beach in Bali.” In database terms, the only true snapshot is a single page.

Take the production of a movie. They carefully make sure that the frames are NOT related; that they are NOT a sequence (in order to build the plot), they are quite aware that it is a roll of film, that they fabricate, that it is not reality (it is a story, remember). No problem, everyone knows that when they go to the cinema, they have to “suspend disbelief” and entertain the movie as “real”, that is what they are paying for.

In the real world, there is no snapshot of more than one frame.

But the lunatic asylum treats the entire roll of film (moving picture) as a “snapshot”. Yes, it exists as a fabrication, copying a roll of film may well be an “atomic” task. No, the first frame is unrelated to the 42nd frame, etc. No, the content of the film does not exist in reality (it was produced according to a script, employing paid actors).

Applying that snapshot notion to a database that does exist in reality is pure insanity, it is not a snapshot but a “snapshot”. The first few pages may be true, but by the time the reading mechanism gets to the next few pages, they are out-of-synch with the first few, and totally unrelated when it gets to the last few pages.

Think about this: a database backup is not the “snapshot”, it cannot be, it is a real series of single frames (pages). It is not taken at a “point in time”, because no such thing exists. Each frame (page) is taken at a different point in time. They are quite aware of that when it pertains to the database backup, but they hysterically deny that when they concoct their “snapshot” at a “point in time”.
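A toy Python sketch of that page-by-page reality (page contents and update timings are invented): each copied frame is captured at a different moment, so the finished copy matches no state the source was ever actually in.

```python
# Toy model: a page-by-page "backup" of a changing database. Each page is
# copied at a different moment, so the finished copy is a mixture of
# points in time. Page contents and update timings are invented.

def backup_while_updating(source, updates):
    """Copy pages one at a time; updates[i] land just before page i is copied."""
    history = [list(source)]              # every state the source really passed through
    copy = []
    for i in range(len(source)):
        for page, value in updates.get(i, []):
            source[page] = value          # a concurrent write lands
            history.append(list(source))
        copy.append(source[i])            # this frame is captured NOW
    return copy, history

source = ["a0", "b0", "c0"]
# Between copying page 0 and page 1, pages 0 and 2 are updated.
copy, history = backup_while_updating(source, {1: [(0, "a1"), (2, "c1")]})
print(copy)                               # ['a0', 'b0', 'c1']
print(copy in history)                    # False: no such state ever existed
```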

You do not get it, that that “problem” you propose that happens at READ COMMITTED only because you are too ignorant to execute it at REPEATABLE READ, happens all the time in your fantasy “snapshot” of the “database” at “point in time”, that never happened. But you insist that your fantasy “snapshot” (which never actually happens when you make-real your fantasy as in a real database backup) should happen in your re-defined notion of reality.

You can’t make this stuff up.

----

Oh yes, you can, and you do, and you teach it as “education”. It is, as evidenced, indoctrination into insanity, in particular, schizophrenia.

As evidenced in the newspaper article, you actively contribute to the common problem of COLLECTIVE insanity, of people thinking that the COLLECTIVE fantasy is somehow “real”. And they are shocked when the fantasy is punctured.

Whereas you double down, and try to (a) re-inforce the unreal fantasy, and (b) impose the idiotic problems that only happen in the fantasy, onto reality. Further (c) you still can’t understand online shared access (let alone concurrency control or multi-threaded architecture), you have to “serialise” (single-thread) your collection of frames (pages), in order to get some sense out of it, so (d) you impose that hysterically stupid “solution” for the “problem” that we do not have, onto us.

When you were a child, did you not hear the children’s story, replete with a nursery rhyme, about the three men in pointed hats that stood on a bridge in a small village, looked down at the river, and claimed that the moon had fallen into the river, and life on earth was about to end ? The first verse was funny, even silly. But the second verse, when the men said they could extract the moon from the river and prevent the disaster, if only the villagers paid them some money, identified the evil.

As with all children’s fables, it is not a fairy tale, it is a teaching story, with deep meaning.

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Aug 12, 2021, 2:08:28 AM8/12/21
to
Nicola

> On Sunday, 8 August 2021 at 23:52:04 UTC+10, Nicola wrote:

Dealing with the rest of that post.

> > On 2021-08-06, Derek Ignatius Asirvadem wrote:

> > 1. Read the Transact-SQL Users Guide
> > ____ at least ch 20 Transactions: Maintaining Data Consistency and Recovery
> >
> > 2. Then read my guide to the Lock Manager
> > ____ https://www.softwaregems.com.au/Documents/Article/Sybase%20Lock%20Manager/Sybase%20Lock%20Manager.pdf
> > All my Sybase docs are condensed, intended for Sybase DBAs. I have
> > just updated it, and added a bit of detail, improved the clarity, so
> > as to be relevant for novices.
> >
> > Remember, this is a serious Lock Manager, not comparable to your 2PL
> > filth, which has to be asserted because you guys position commercial
> > SQL Platform Lock Managers as your "2PL" filth, and insist that we
> > have your insane problems. It is so mature and secure, so brilliant
> > in architecture, that it has not changed since 1984. Extended, yes
> > (eg. to handle new Lock Types to support SAP files, eg. add row locks;
> > etc), but changed, no. So read these docs with a fresh mind, to not
> > take your academic baggage with you.
>
> I have read a couple of documents (the T-SQL guide and the Locking and
> Concurrency Control manual).

You are best advised to read the T-SQL UG *FIRST*, because it is an explanation, the P&T series is reference, not explanation, so read that *SECOND*, after you understand the explanation. Otherwise you will continue finding “faults” and “problems” where there are none, which is your documented occupation.

> My takeaways:
>
> - yes, it is a serious lock manager (I did not expect anything less from
> a high-end commercial product).
>
> - I did not find any concept that you would not find in a database
> systems' textbook (and no, the Alice's book is not such a textbook).

You dishonestly fail to mention the fact that the textbooks were written AFTER Britton-Lee, AFTER Sybase. Your point is null.

The other ramshackle book that you linked is no better, no worse, than the Alice book. They fail totally in teaching science, they succeed well in teaching how to suck a pig’s rectum, which is professed to be “science”. Every link you propose, presented as “knowledge”, has proved to be sow’s milk on top of a fragment of science.

> - Call it what you like, but ASE/Sybase uses what is known as "rigorous
> 2PL" to implement repeatable read and serializable:
>
> https://help.sap.com/viewer/a08646e0736e4b4b968705079db4c5f5/16.0.3.7/en-US/a8ea3fd4bc2b1014b6569e800f6bba42.html.

There is no mention of a “2PL” or “rigorous 2PL” in that.

I don’t call it “2PL”, I detest calling something what it is not, that is fraudulent, anti-science. The term is something the academics created, it sort of explains their two phases (they do have an expansion phase and a contraction phase). Calling their filth, as implemented to provide some concurrency control in the MV-non-CC herd of programs, 2PL, is correct.

Calling the Sybase; DB2; MS Lock Manager “2PL” is false. It will prevent you from finding out what a real Lock Manager is. Further, as explained in detail earlier in this thread, and again in the Transaction Sanity doc, it is a Straw Man, erected by anti-scientists, to demean and discredit something they (as evidenced) cannot comprehend.

The point I made, which stands unaffected, is that it is idiotic for anyone to call something what it is not. Repeating, it is the anti-science academics that say that commercial SQL Platforms are “2PL”
____ WHICH IS FALSE
and as such, as a false “2PL”, that we somehow have the same “problems” that their idiotic 2PL lock managers have. It is a Straw Man.

You too have done that, just look at your questions. You are still doing it. You are the one doing this “calling it what you like” stupidity. You can stop making a fool of yourself any time you like.

If you genuinely understand what you are saying NOW, you would realise that your questions pertain to the poor implementation, your 2PL, and not to the real 1PL lock managers.

> - Call it what you like, but ASE/Sybase uses what is known as "rigorous
> 2PL" to implement repeatable read and serializable:

Go on, name the two phases in the Sybase Lock Manager.

If you hide behind “what is known as”, then take responsibility for the “what is known as”, because it is you asylum screechers that have written the “literature” such that “what is known as” is what is known as.

I just do not play your stupid game of calling things what they are not.

> "Applying exclusive locks [...] until the end of the transaction.
> Applying shared locks [...] until the end of the transaction".
> Textbook definition of rigorous 2PL.

Except that you are fraudulent in applying that to a non-2PL server.
And that the textbook was written after Sybase.

What you have is a fat hairy sow, with a penchant for academics. Sybase is a horse. You say that a pig is the same as a horse. I say, pig poop. You say a macho hairy boar can do anything a horse can do, especially when it is rigorous. I say, fiddlesticks, fresh warm pig poop, straight from the sow. More like rigor mortis.

> - It uses index-locking to prevent phantoms. Again, no surprise and
> pretty much standard textbook material.

Except for the fact that Sybase came first, and the textbooks came second, therefore the inference is false.

> - There is a section dealing exactly with the question I have posed:
> "Locking for Select Queries at Isolation Level 1"
>
> https://help.sap.com/viewer/a08646e0736e4b4b968705079db4c5f5/16.0.3.7/en-US/a8eb04a3bc2b1014bef8884d8400b0ab.html

I don’t see how that has anything to do with your questions, which were your particular speculations about how low level operations could be, should be, wannabe, please be, problematic. The usual insanity of imposing the problems of the insane, onto the sane. It is such an obsession with you, that even after it is pointed out, you cannot stop.

I tried to explain the operation of the Sybase 1PL Lock Manager, and failed. Daniel did a better job, and it appeared that you accepted it. Probably not. Now you have found a relevant page in the manual. Excellent.

The section is simply a definition re how a particular sequence of commands works. Not about “exactly your question”.

> Btw, with a mention of how that affects joins.

Yes, of course. So what. Even our joins are superior to the way you guys speculate about joins forty years after we had them.

The Sybase manuals are not as good as they were before the acquisition by SAP. If I were to give that definition, I would include the fact that it intent-locks UP the [SELECTed] tree as well, ie. the PK page/row in the parent table that is referenced by the FK in the child table which is named in the SELECT command. But hey, that is only a reference manual. And you would not understand it anyway, because it is relevant only to Relational databases.

Do you know wtf the difference is, between APL and DPL/DRL lock schemes (tables). Sure, you can state the words, but do you know the RELEVANCE. No. Do you know why there is a difference at the level described in that section ? No. But somehow, you say, it applies to your question, which was before you read that manual.

> That, plus this (which
> is about SQL Server and has some inaccuracies, but overall I think it
> is relevant):
>
> https://sqlperformance.com/2014/04/t-sql-queries/the-read-committed-isolation-level
>
> makes me conclude that in general you do not have statement-level
> consistency at read committed in SQL Server or ASE.

God help me.

Destroyed, in my previous post. In sum:

1. It is a sign of desperation, when an academic cites a non-academic blog post. Usually it is the other way around.
2. It is stupid to expect a COUNT on a large active table in an online shared database, to be accurate in reference to any fixed notion, particularly when the fixed notion does not exist, has never existed. Humans who have not been indoctrinated into schizophrenia are not that stupid.
3. It is Oedipus stupid to expect such a COUNT at READ COMMITTED to be executed as REPEATABLE READ, when the manual clearly states that you can obtain such a COUNT at REPEATABLE READ or SERIALIZABLE.
4. Last but not least, anyone with actual experience on a genuine SQL Platform can obtain such a meaningless but accurate count without having to plod through each and every page in the table, SQL-Standard-wise. A true instantaneous COUNT, and a true point in time.
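One plausible mechanism behind such an instantaneous count, sketched as a Python toy (invented names; real servers keep this kind of row count in catalog/metadata structures): the counter is maintained with every insert and delete, so reading it touches no data pages at all.

```python
# Toy sketch of one plausible mechanism behind an "instantaneous" count:
# the engine maintains a per-table row counter as part of each insert or
# delete, so reading it requires no table scan. Invented names.

class ToyTable:
    def __init__(self, rows=()):
        self.rows = list(rows)
        self.rowcount = len(self.rows)    # maintained metadata, like a catalog entry

    def insert(self, row):
        self.rows.append(row)
        self.rowcount += 1                # kept in step with the data

    def delete(self, row):
        self.rows.remove(row)
        self.rowcount -= 1

t = ToyTable(range(1000))
t.insert(1000)
t.delete(0)
print(t.rowcount)                         # 1000: read from metadata, no scan
```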

----

> ASE is a fine implementation (*), based on concepts that have been very
> well known in the academic community for a long time

Lies, more filthy lies. “Based on” is false. The father is not based on the son.

The historical evidenced facts are:
___ Britton-Lee & Sybase
___ Lock Manager with 66 lock types (at 15.7, even more in 16, without changing the Lock Manager)
__ academia pushed the Stonefreaker MV-non-CC as “MVCC”. It never worked in Ingres, it has not worked in PigPoopGres. It never will. They added their 2PL on top of their MV-non-CC, to get it to work at all. It still doesn’t work.
__ They only know their 2PL.
__ Sybase, DB2, and MS do not have 2PL
__ But they, and you here, falsely re-frame a Lock Manager as your hysterically stupid 2PL.

Therefore “based on” is disgusting filth.

You can call it what you like. You can reframe and redefine terms until the cows come home. Reality does not care what you call it, it is not affected by what you call it.

> (not to say that
> they are obsolete! On the contrary!). What's not in the textbooks is the
> specific implementation details and system-dependent guidelines that
> a manual is expected to provide. Granted, the devil's in the details.

Which, as evidenced, is beyond the ken of those who write the textbooks and academic papers. They cannot even copy-paste, they have to deny the reality of forty years, and re-invent the wheel from scratch.

Forty years, and counting.

> But good graduate students would have no problems grasping such details
> (or those of any other system), capitalizing on their academic baggage.

Sure. Like when I give them a task to obtain a COUNT() from a table, and instead they give me an hysterical slew of “reasons” why COUNT() doesn’t count, or that there is no point in time that count is accurate, or that count must be accurate to a point in time that does not exist. I enter the few keystrokes required into their session, on their computer, without hitting RETURN, I ask them to watch the second hand on their wrist watch, and to call out the second they wish to have the COUNT for, and enter RETURN. Most hackademicks lose control of their bowels. On two occasions they quit overnight and did not even have the courtesy to have a resignation meeting in person. Another committed suicide a month later.

Sure. It is only when the rubber hits the road, when they get a job in the real world, that their baggage is exposed for what it is, and those not totally indoctrinated get rid of it. Those attached to said baggage, have difficulty dealing with reality, that truth is simple, it is only falsity that is complex, they don’t last three months. They get very hurt when we laugh at their voluminous speculations of disaster instead of writing code.

Sure. They have “no problems grasping such details”, but they are impotent in determining a resolution, they argue endlessly about the “grasped” details. Meanwhile I give the capable developers, who gratefully do not try to “grasp such details”, a Template, and say do this. And they do it. Hint: check out how Dan has progressed in his thread, vs how you argue endlessly, with utter futility, and you still cannot grasp even the principles, while labouring over details.

Make no mistake, this is a life-or-death struggle, that ends in death. Suicide is irrefutable evidence of mental illness. The problem these days is, now that mental illness is being “normalised”, now that schizophrenia is being taught at universities (your various arguments, eg. how a COUNT() could be sabotaged to fail ... and therefore it applies to ALL Statements), mental illness is not perceived for what it is, it goes untreated, and we find out after the person has killed themselves.

I have a friend, 35yo and very intelligent on the IQ scale, hopelessly insane on the intelligence scale. PhD in AI. Three jobs in three years, and it is always the employer’s fault, never hers. Under tremendous pressure of her own making. I tried to help her because I was expert in a certain vertical in one of her jobs. But she was entrenched in RFS+”Ontology”+”Description Logics”, while begging for a real Relational database (NO ”Ontology”, NO ”Description Logics”). She has a 9yo son. Divorced. Hates men except when she wants sex, and then she turns into a siren. Her sister and I are the only ones who address her denial of reality (schizophrenia), who are trying to prevent the predictable.

Make no mistake:
__ Truth = Reality = life
__ Falsity = Rejection of Reality = death

__ Sanity = conforming the intellect to Reality
__ Insanity = desperately trying to conform reality to the mind, which, after a lifetime of failure, ends in suicide.

Academics foster death. It is evil.

> (*) Known to the academics. E.g., some time in the '90s, Sybase was used
> for lab exercises at Stanford.

1. And notably not taught. Academic filth taught instead. Same as academics pushing ERD, never mentioning IDEF1X for FORTY YEARS.

2. “Known to academics” is a general statement, because it implies academics generally, as such, it is a pitiful, bald-faced lie.

3. The truth is, yes, it was at Stanford. No, it was squashed by the Stonebraker groupies at Berkeley, who were pushing their MV-non-CC barrow. These decisions are political, not scientific.

4. If it had succeeded at all, they would have been using Sybase since then.

5. Unfortunately they did not study the Sybase Lock Manager, or even copy-paste the manual. As evidenced, they remained vociferously ignorant, and erected a Straw Man by describing it as their primitive 2PL.

Cheers
Derek

Nicola

Aug 12, 2021, 4:52:19 AM
On 2021-08-11, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> > Given your detailed question, AND the example you have cited, at what
>> > point in time, do you suggest that the COUNT() would be correct
>> > ? I am not asking for a long answer here, just clarifying your query.
>> > A few words would be enough.
>
>> If transaction T1 is scanning a table to count its rows, and
>> concurrently transaction T2 removes one row and adds two, the only
>> values T1 should be able to compute are N (the number of rows in the
>> table before T1 and T2 start executing) and N+1. That is because
>> there are two possible serial executions:
>
> False. We are not serialised.
>
> We are discussing READ COMMITTED,

Ok, I have misunderstood your question. The above is a very informally
stated definition of correctness of a schedule (i.e., serializability).

So, if you ask at what point in time a result of a query run at READ
COMMITTED is correct, the answer is: in general, never. Note that this
answer doesn't rule out situations in which you can guarantee a correct
result even at that level.
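To make this concrete, here is a toy model (plain Python, not any vendor's engine; all names and numbers are illustrative) of a scan running at READ COMMITTED: a concurrent transaction commits mid-scan, and the scan returns a value that neither serial order of the two transactions could produce.

```python
# Toy model of a table scan at READ COMMITTED: the scan reads rows in
# key order, one at a time, and never blocks, never re-reads.
# Illustrative sketch only; this is not any DBMS's actual machinery.
table = {1, 2, 3, 4, 5}            # N = 5 rows, scanned in key order

def scan_count(interleave_after, action):
    """Count rows front to back; 'action' models a concurrent
    transaction that commits just after the scan has counted the row
    with key 'interleave_after'."""
    count, last = 0, 0
    while True:
        nxt = min((k for k in table if k > last), default=None)
        if nxt is None:
            return count
        count, last = count + 1, nxt
        if nxt == interleave_after:
            action()               # the other transaction commits here

def t2():
    table.discard(1)               # delete a row the scan already counted...
    table.update({6, 7})           # ...and insert two rows ahead of the scan

result = scan_count(2, t2)
print(result)                      # 7
```

Serial T1-then-T2 yields N = 5; serial T2-then-T1 yields N + 1 = 6; the interleaved scan yields 7. That is the precise sense in which the result is not "correct" relative to any serial execution of the same committed transactions.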

>> 1. [S1] starts scanning the table, counting the rows, oblivious to
>> other activity, by virtue that it declaratively runs at READ
>> COMMITTED.
>> 2. T2 executes and commits;
>> 3. [S1] keeps scanning the table, without regard to other activity
>
> Which somehow is “incorrect” to you.

Yes, according to the definition of "correctness" (which you affirm to
reject) as equivalence to a serial schedule.

Sorry, what is the definition of "correctness" according to the
Standards?

>> Count() would return the correct result if and only if the returned
>> value is among the values that some serial execution of the same set of
>> committed transactions would have returned.

> I reject that as a definition, Sybase; DB2; MS rejects that as well.

Then, again, please provide a definition of "correctness". Because you
want to be confident that your queries produce "correct" results, don't
you?

Since anything can happen concurrently during the execution of a query
at READ COMMITTED, how can you tell apart the result of a query
returning an integer from

select cast(rand() * 1000000 as int)

?

> Wherein COUNT at READ COMMITTED works perfectly.

Ok if you are referring to other ways of counting. But if you mean that
counting by scanning the table at READ COMMITTED works perfectly, please
elaborate on what "works perfectly" means.

> But there is an important second point: anyone who has even pedestrian
> knowledge of SQL on a genuine SQL Platform, would know that [separate
> to the count changing constantly because the table is active], that
> that is not a “problem”, that that is not the way to obtain a COUNT on
> a large table.

Sure. But it still illustrates the point, which is about the kind of
issues you may run into when running queries at READ COMMITTED. It is
not suggesting that you *should* run those queries at READ COMMITTED.
Counting was not the best example; but you can replace the query with
some other computation.

Nicola

Derek Ignatius Asirvadem

Aug 12, 2021, 11:37:40 PM
Nicola

> On Thursday, 12 August 2021 at 18:52:19 UTC+10, Nicola wrote:
> > On 2021-08-11, Derek Ignatius Asirvadem wrote:
> >> > Given your detailed question, AND the example you have cited, at what
> >> > point in time, do you suggest that the COUNT() would be correct
> >> > ? I am not asking for a long answer here, just clarifying your query.
> >> > A few words would be enough.
> >
> >> If transaction T1 is scanning a table to count its rows, and
> >> concurrently transaction T2 removes one row and adds two, the only
> >> values T1 should be able to compute are N (the number of rows in the
> >> table before T1 and T2 start executing) and N+1. That is because
> >> there are two possible serial executions:
> >
> > False. We are not serialised.
> >
> > We are discussing READ COMMITTED,

> Ok, I have misunderstood your question.

Um, I did not have a question. The question was yours. I was answering it. In your question, you raised some quaint notions, that we do not have [are not necessary in] the real world, that we have survived very nicely without, for forty years. I merely asked for clarification re one of those precious notions, which I needed, to qualify your question, in order to answer it.

> The above is a very informally
> stated definition of correctness of a schedule (i.e., serializability).

Yet another precious notion that exists only in fantasy land, not in reality, that we have done without, for forty years. And please note, I am not asking for a definition of that particular insanity, loose or tight, dressed up in a tuxedo or a clown suit.

> So, if you ask at what point in time a result of a query run at READ
> COMMITTED is correct, the answer is: in general, never.

Good (for you)

And therein, the notions of “correct” and “point in time” were yours, not mine. It is all about your question, which is so loose and floppy, and contains three stupid notions, that I was trying to answer, in your context, without destroying the silly notions. Don’t put it back on me. This item did not start with me asking you what is correct, it started with you saying that something in the real world is NOT “correct”, according to your fantasy of “correct”.

You are still not getting it. I have explained, in detail, how you are questioning reality, from an insane set of notions that exist only in fantasy (sure, collective fantasy, established and made “real” by hundreds of academic papers that engage in fantasy). Therefore very “real” to you. But to those of us who stand on the hard ground of reality, they remain the notions of drooling idiots in the asylum.

> Note that this
> answer doesn't rule out situations in which you can guarantee a correct
> result even at that level.

> >> 1. [S1] starts scanning the table, counting the rows, oblivious to
> >> other activity, by virtue that it declaratively runs at READ
> >> COMMITTED.
> >> 2. T2 executes and commits;
> >> 3. [S1] keeps scanning the table, without regard to other activity
> >
> > Which somehow is “incorrect” to you.
>
> Yes, according to the definition of "correctness" (which you affirm to
> reject) as equivalence to a serial schedule.

(I don’t understand what “affirm to reject” means, so my answer excludes that element. It sounds like each cancels the other.)

No, no, no. The “incorrect” is in quotation marks, meaning that it is incorrect according to your quaint notion of “correct”, AND by inference, that it is not incorrect otherwise.

So, [1][2][3] above is perfectly correct in the real world, where people are not in denial of the fact that the database is an online shared resource, that is constantly changing.

It is only “incorrect” where the person denies the real world, and concocts a hysterical notion of “correct”, wherein he can observe the real world as “incorrect”.

> Sorry, what is the definition of "correctness" according to the
> Standards?

Again, the notion of “correct” is yours. We couldn’t care less about it.

Again, same as the academics that propose that the /Relational Model/ is somehow lacking because it does not define database design or how to tie your shoelaces, it is stupid to expect the Standard (which defines only the standard) to define anything but the standard. But academics do that all the time, as you do here. It is no less stupid. Perhaps you are unaware that we put men on the moon in 1969, and did so with mainframes and 8-bit minicomputers, that we have had real, fully ACID compliant DBMS since the early 1960’s, long before SQL. When SQL was released by IBM, we did not need the SQL Standard to define what correct operation in a database server is, or how to count rows. We did not find the SQL Standard lacking because it failed to define how to tie our shoelaces using an UPDATE command.

Again, as defined by the Mental Health Act, the insane cannot be responsible, so they impose the responsibility for their impotence, onto others.

It is not the business or the responsibility of the Standard to define what is correct in a particular database server, especially when they did not define what a database server is.

You are demonstrating SIX levels of INSANITY (stupid) here, which I destroy using Logic alone (science).
__ First, you concoct a stupid notion for “correct” that does not exist in reality (insane act).
__ Second, you apply it to reality and find reality “incorrect” (insane act). No one cares.
__ Third, when it fails to impress humans, you can’t give a definition (the definitions you do give fail, because they do not reference the reality that they are applied to).
__ Fourth, you can’t take responsibility for your own stupid notion (definition of insanity).
__ Fifth, you try to impose that responsibility for definition of your stupid undefined notion onto anyone who does not accept your stupid undefined notion (insanity squared).
__ Sixth, when that fails you say the Standard that does not define anything except the Standard, is somehow wrong for not defining something that is not in the Standard (ROTFLMAO insanity).

I do not use the terms such as /insanity/ lightly. Nor do I make declarations without evidence: you always provide evidence to support my declarations.

Hint: I can prove, with any good example of a reasonably complex query, and a stated set of circumstances, that the result obtained in Sybase is correct, AND that the result of the same query in Oracle is incorrect. But that gets into the operation of the server, and the expectation of the person writing the query. And the engagement itself changes that [prior understanding of correct] in the Oracle query writer’s mind. If you are looking at my posts on TTM from eleven years ago, I did that three times, complete with examples and proofs, with people who were honestly trying to understand. (I did that dozens of times with Darwen and the slaves but due to their dishonesty, they abandoned the exercise so as to deprive me (science) of a resolution.)

Hint: Most people who have experience on both can categorically state that MySQL operates correctly, and that PoopDePooGres does not. They do not need /correct/ defined for them.

It is stupefying, idiotic, insane, to define “correct” as an abstraction without reference to reality, and then try to apply it to reality, but academics love abstraction with no relation to reality, it is the foundation for their fantasies. This is the core principle that caused the impotence of academics in this field for FIFTY years (no progress of the /Relational Model/, but hundreds of anti-Relational papers marketed as “Relational”). Such as your paper on Movie Titles, which was proved false in another thread.

Normal humans are quite satisfied with the definition of /correct/ from their favourite dictionary. We take any abstraction away from that as either criminal, or insane, or both.

It is the same for the lunatic notions of “serialisability” and “schedule” and their variants. In the real world, we are multi-threaded. “Serialisability” is only relevant for developers who are so stupid, they can only grasp single-threaded operation, and operation from a fantasy of a database “version” that was magically taken at a non-existent “point in time”, frozen, like their brains, that is offline; private; personal, that they can suck on, that does not move and shift to interfere with the desperate need for a long suck.

Yes, of course, we have Scheduling. For actual multi-threaded execution, dependent on the resources requested, and the entire current (real moment in time) workload and cache contents. Not the ham-fisted “scheduling” of requests or pages in a fixed “version” that never existed.

Those who are so intellectually crippled, insist that we who are not crippled, should be so crippled, so that OUR operation is “correct” according to cripples.

No, we do not have to be “correct” according to some lunatic notion of “correct”. No, we don’t have to be “serialised”, according to some knuckle-dragging notion of “operation”. No, we don’t need a crippled “schedule” to execute hundreds of threads concurrently based on the reality of each thread. No, we don’t need the safety and comfort of fantasy notions and the tiled cells that foster them. No, you can’t impose your insanity on the sane.
>>>>
Yes, you can and you have, which is why your fantasy such as “MVCC” is now a COLLECTIVE fantasy, the expectation of the unqualified “developers”.
<<<<
But not with guys like me.

You would have better luck if you get a prostitute to tell a virgin how she is not “correct”; how important and essential and glorious syphilis could be in her life; how she is missing out; how she is not really living. In case you do not understand that, hint: the virgin lives from her spirit and intellect; the prostitute lives from her body alone, in denial of her spirit and intellect. /Look, you could have oozing sores and scars that are permanent, that make you unique, even after you die you will be unique, your bones will be marked permanently./ Not impressive. The virgin knows (knowledge; logic; intellect; spirit) that she is unique because she is created as such, not by mere marks on her skin or bones.

You need to get some experience in the real world, such that you do not expect a Standard to define what “correct” is. Give yourself ten years or so, and your question will have evaporated.

OTOH, if you wish to remain an academic idiot, full of irrelevant fantasy notions, since you are pushing them, you will have to define them. Take note, if the definitions do not reference reality, they will not apply to reality, they will be dismissed.

----

Deadlocks. You say “deadlocks can’t happen in MVCC [MV-non-CC+2PL]” as if it is superior. It isn’t. Relative to the real world, it is inferior. The fact is, you can’t even define what a deadlock is. Fact is, if a deadlock did happen in reality, your schizophrenic “version” of the frozen database COULD not know about it. Fact is, the rough equivalent of deadlock in your primitive herd of programs does happen, it is called “serialization failure”, proving that you can’t even serialise what you say is serialised. You can’t make this stuff up.

PigPoopGres. Fact is, not only does MV-non-CC not work at all, until you add your primitive 2PL, but that it still does not work at all, until MANUAL LOCKING is implemented in the app code. You academics are so stupid, you do not realise that that (allowing app code to interfere with internal locks) (a) guarantees disaster, and (b) guarantees the possibility of deadlocks due to less-than-perfect app code. It is the same irresponsible principle as you are attempting here: you can’t provide the service that you declare you provide, so you impose that provision of service onto the developer.

In that issue, MySQL is superior. Yes, it is MV-non-CC+Lock Manager. Yes, it does not work because MV-non-CC does not work, and MV-non-CC+Lock Manager does not work. Yes, they have to have additional locking. But not being as insane as academics, they allow only external named locks, such that it does not interfere with the internal locks. So at least MySQL can guarantee that their herd of programs does not deadlock, and if a deadlock did happen, it is in the app code alone. Far superior to pig poop.

Fact is, we don’t serialise, so truly, we cannot have serialisation failures. Fact is, we don’t need a frozen “version” of the database, we operate in real time, on the single, online, active, version of the truth, so we don’t need an idiotic, primitive schedule, we truly cannot have schedule failures.

Understood properly, yes, deadlocks do happen in the real world, in every walk of life. Fact is, since you are divorced from reality, sucking on an offline frozen “version” of the database, the accurate statement is, in MV-non-CC, since the “versions” are artificially offline, the recognition of real world deadlocks is artificially prevented.
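(For concreteness, since the term keeps coming up: in a Lock Manager, a deadlock is a cycle in the wait-for graph, which the server detects and resolves by choosing a victim. A toy sketch in plain Python, illustrative only, not any server's code:)

```python
# Toy wait-for-graph cycle check: the mechanism a Lock Manager uses to
# detect a deadlock.  Illustrative sketch only.
def has_deadlock(waits_for):
    """waits_for maps a blocked transaction to the transaction holding
    the lock it wants; a cycle in this graph is a deadlock."""
    for start in waits_for:
        seen, t = set(), start
        while t in waits_for:
            if t in seen:
                return True        # revisited a waiter: cycle found
            seen.add(t)
            t = waits_for[t]
    return False

# T1 waits on T2 and T2 waits on T1: deadlock; the server kills one.
print(has_deadlock({"T1": "T2", "T2": "T1"}))   # True
# A plain wait chain resolves by itself: no deadlock.
print(has_deadlock({"T1": "T2", "T2": "T3"}))   # False
```

This also makes the point about named external locks above concrete: locks taken outside the server's internal wait-for graph leave any deadlock among them in the app code alone.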

It is not so relevant that our real world database servers are superior (yes, they are), the proper understanding is, reality is relatively superior to the inferiority of perceptions of reality by the insane; the crippled; the non-functional; the impotent; the bat-poop crazy.

----

> >> Count() would return the correct result if and only if the returned
> >> value is among the values that some serial execution of the same set of
> >> committed transactions would have returned.
>
> > I reject that as a definition, Sybase; DB2; MS rejects that as well.
>
> Then, again, please provide a definition of "correctness". Because you
> want to be confident that your queries produce "correct" results, don't
> you?

No.

The moon has not fallen into the river, and I don’t need your help to fish it out. And I am certainly not about to give you a definition of what a correct moon in the river looks like.

Yes, I have confidence, but that confidence is based in reality, not in abstraction.

My confidence that a certain query is, or is not, correct [no quotation marks] is NOT based on some hysterical abstraction, which does not reference the real world elements that it supposedly evaluates, it is based on the real world circumstances that the query operates in. If you want your idiotic abstraction to apply to reality, it has to be an abstraction *OF* reality. It is because your abstraction is a result of circle-jerking over fantasy that is proudly detached from reality, that it does not relate to, it is not applicable to, reality.

----

In the real world, we are not so crippled as to require “serial execution” in the multi-threaded server (multi-threaded at all levels). Only academics, who consistently demonstrate their abject disability, their sub-human intellect, their inability to grasp technical subjects, their divorcement from reality, that can only write simple single-threaded programs, that need a “serial execution”.

Here, let me help you get out of your own mess, I don’t know if it will work, because I don’t have the necessary qualifications. You still have the hysterical notion of MV-non-CC, as the basis; the foundation, of your idiotic beliefs. That is a snapshot of the entire database as a “version”, taken at an abstract “moment in time”, that never existed (NOT like taking a hardcopy), for single-user use, the only thing you can understand, because you cannot handle the reality of an online database that is in the constant state of change. Great for simpletons. But in the real world, reality, there is no such thing as a snapshot of the database, no such thing as that /moment in time/. We do not freeze the database and take a photo, only cripples do.

No problem, I accept that you are intellectually crippled.

But then you try and impose that sub-human disability on the real world, wherein databases are not snapshots taken at a mythical moment in time. I have a problem with that.

----

We get “developers” who have such hysterical notions all the time, they have false confidence because they say the academics said so. We have to train them by introducing reality to them. Those who are not totally crippled love it, and become useful over time. The others leave quietly, or throw a tantrum, or commit suicide later.

----

> Since anything can happen concurrently during the execution of a query
> at READ COMMITTED, how can you tell apart the result of a query
> returning an integer from
>
> select cast(rand() * 1000000 as int)
>
> ?

I understand the SQL, and I have read that question several times, but sorry, I do not understand your question.

----

> > Wherein COUNT at READ COMMITTED works perfectly.
>
> Ok if you are referring to other ways of counting.

No, since I referenced “COUNT”, I am referring to reading all the rows in a large table, per the SQL requirement, NOT other ways of obtaining a /count/ of the rows, which would be more efficient.

> But if you mean that
> counting by scanning the table at READ COMMITTED works perfectly,

Yes.

> please
> elaborate on what "works perfectly" means.

Why ?
For what purpose ?
You yourself just referenced “anything can happen concurrently during the execution of a query”: if you understand what you wrote, you will understand that your question is stupid, it does not apply to reality.

If I try to answer your question, I would have to explain:
a. reality
b. how a database server in the real world works, and
c. how a particular database server works.
That is called education.

As evidenced, I spend an awful lot of time and energy helping you, because you alone have attempted to cross the great chasm between academic isolation from reality, and reality. But no, sorry, I cannot provide basic education. You have downloaded SAP/Sybase Express Edition. Go and get educated yourself.

In short, “works perfectly” means works perfectly in all the circumstances that I have executed the command, across about 120 database servers that I have experience with, in the real world. Which are multi-threaded, and have configurable resources at every resource level, and share those resources, and resolve contention, without error. All of which you are, as evidenced, totally ignorant of, which is why you depend on primitive abstract notions divorced from reality, that you CAN understand.

> > But there is an important second point: anyone who has even pedestrian
> > knowledge of SQL on a genuine SQL Platform, would know that [separate
> > to the count changing constantly because the table is active], that
> > that is not a “problem”, that that is not the way to obtain a COUNT on
> > a large table.
>
> Sure. But it still illustrates the point, which is about the kind of
> issues you may run into when running queries at READ COMMITTED.

No. You TRIED to illustrate some idiotic point, but you FAILED. The entire attempt, each element, was hysterically stupid, and I destroyed each element of your stupidity.

> It is
> not suggesting that you *should* run those queries at READ COMMITTED.

Sometimes we do. Sometimes we don’t. It depends on what we want, which includes [does not deny] the reality of the server; and the activity; and the accuracy that is required for the task.

> Counting was not the best example; but you can replace the query with
> some other computation.

No. You are prosecuting the insanity, the onus is upon you to define the “problem”. It is you that has the “problem”, you define it, you provide an example. From my side, not being intellectually crippled, (a) I don’t have the “problem”, and (b) in response to your attempts to impose your insanity on us, (c) I have rejected each element of your “problem”, purely by Logic (science). You are as unscientific as a certified lunatic.

No, the moon has not fallen into the river. Our necks are not broken and we can look up, skyward. No we don’t need your “method” to fish the moon out of the water. It is up to you to define your moon in the water, yellow or pink or blood.

Cheers
Derek

Message has been deleted

Derek Ignatius Asirvadem

Aug 14, 2021, 10:02:07 PM
Nicola

> On Thursday, 12 August 2021 at 16:08:28 UTC+10, Derek Ignatius Asirvadem wrote:
>
> Make no mistake:
> __ Truth = Reality = life
> __ Falsity = Rejection of Reality = death
>
> __ Sanity = conforming the intellect to Reality
> __ Insanity = desperately trying to conform reality to the mind, which, after a lifetime of failure, ends in suicide.
>
> Academics foster death. It is evil.

This is the Theological and Philosophical (when it was a science, before it was sabotaged by drug addicts and other insane) definition. It is science; logic. And a bit more qualification:

__ Truth = Reality = life
____ Permanent (it is eternal, but you wouldn’t understand that)
__ Falsity = Rejection of Reality = death
____ Transient, ever-changing

__ Sanity = conforming the intellect to Reality
__ Insanity = Rejection [denial] of Reality
____ erection of fantasy, desperate attempts to conform reality to the mind (their fantasy), which after a lifetime of failure [usually short], ends in suicide or murder-suicide.


> On Friday, 13 August 2021 at 13:37:40 UTC+10, Derek Ignatius Asirvadem wrote:

> It is stupefying, idiotic, insane, to define “correct” as an abstraction without reference to reality, and then try to apply it to reality, but academics love abstraction with no relation to reality, it is the foundation for their fantasies. This is the core principle that caused the impotence of academics in this field for FIFTY years (no progress of the /Relational Model/, but hundreds of anti-Relational papers marketed as “Relational”).

> Those who are so intellectually crippled, insist that we who are not crippled, should be so crippled, so that OUR operation is “correct” according to cripples.

With the release of mental patients into society, and the destruction thereof, sure, you have many friends, and there are now quite a few famous examples of the problem I describe (points above).

We have a 56yo male, who dresses in a skirt and wears a wig. Ok. He says he “identifies” as a 6yo girl. Ok. If I had any official duty of care upon him, I would say that that act, in and of itself, categorises him as insane, suffering from sex dysphoria (more, later), and have him locked up, in order to prevent him harming himself (self-harm and suicide are the province of the insane), and to preserve society from the contamination of insanity (mayhem and murder, and more commonly insane “thinking”). Up to a few years ago, that is what would happen.

But now, the globalists have forced enslaved countries to release the insane into society, and to accept the notions of the insane. Thus the insane have their insanity validated. Worse, they get together with other insane people, and validate each other, they form a COLLECTIVE insanity, a COLLECTIVE fantasy against reality, in the manner of populism; in their “community” it is “real” to them.

>>>>

“Gender” is one such. In reality, there is sex, not “gender”, and sex is exclusively male/female only (don’t argue in the tiny corners of the Bell curve, citing deformities, I will destroy them). Every cell in a person’s body is male or female only, because sex is established at the chromosome level. When a male castrates himself and mutilates his genitals to form “female” “vagina”, the “organ” remains male because every cell is male. Likewise, when a female has genital mutilation performed, the “organ” is completely made up of her female cells. Unscrupulous doctors will happily perform the mutilation for them: chink; chink; chink. The Chinese are known to perform “hysterectomies” on such males.

“Gender” is a fantasy, proved further by the fact that it is forever changing. 104 “genders” on Fakebook at last count. In this fantasy, “gender” is “assigned” at birth. No, actually, the first cells are created at conception, not birth, and all the progressive cells are formed during the entire nine months preceding birth. The event related to sex at birth is observation of reality, not an “assignment”. Sex is real, therefore it cannot be changed. The psychological pain they feel every time they look in the mirror is the tension of opposites, between their fantasy and reality. That is internal, that is what needs to be treated, not the body (drugs and mutilation). In their pain, they cannot imagine that, because falsity is ever-changing, they might want to reverse the “transition” in the future, and they can’t “transition” back, because it is permanent.

When a person rejects their sex, it is sex dysphoria, a severe mental illness, not “gender dysphoria”. Their proposal that our rejection of their fantasy causes their high rate of suicide is false, no, it is their mental illness that causes their suicide. If we accept their proposal, we have validated their mental illness; their fantasy, and contributed to their suicide. If we remain responsible we recognise their mental illness, and give them the help they need, so as to prevent their suicide, as well as their murder and mayhem in human society.

<<<<

In the case of the academics who allege that they serve the database space, the fantasy they hold against reality is strongly established, it has FIFTY YEARS of maturity and populism within it. It is no less a mental illness than sex dysphoria. I do not use clinical terms re mental illness lightly, they are precise.

Young people are indoctrinated into the faith, and thus made insane. It is a de facto faith because it is blind faith, belief in a fantasy, that has no evidence in reality, no reference to reality. Stonebraker is the great prophet, that delivered the “MVCC” myth, the fantasy. That is bad enough, but that needs a denial of reality, otherwise psychological pain ensues. So the denial of reality, schizophrenia, is part of the faith; part of the indoctrination, and the isolation from reality; from the industry; from the real world, is made up into an elitism for the chosen ones.

Isolation from reality, which is mental illness by definition, is re-framed as a badge of superiority.

There is no such thing as a superiority complex: a truly superior person does not need constant validation, he knows he is superior. Properly understood, it is an inferiority complex, the inferior need to present as superior, and they have to do it constantly. Observe the Chinese posturing.

Of course, they propose their fantasies as “science”, but it is easily determined as pseudo-science. Science is truth, simple, eg. the single paper for the /Relational Model/, no citations necessary, it changed the industry. Pseudo-science consists of a mountain of papers, needing citations (populism) by the cult members, both pro and con, endless argument with no resolution, and has had zero effect on the industry.

As with any cult, one needs to drink the Kool-Aid, one has to sing the hymn to drown out reality, the mantra “readers never block writers, writers never block readers”. It is a particularly sick cult, because they themselves implement a system in which readers block writers, writers block readers. So there is a constant tension of opposites, a constant psychological pain.

As the mentally ill cannot be responsible, in their state of defined victim, they impose their disease; their sickness; their pain onto the sane. As seen here [paraphrased] “it is you, Derek, who is crazy, because you don’t love and respect my schizophrenic no-reference-to-reality notion of /correct/ ... no ? ... ok, then, you define /correct/ in schizophrenic terms, without any reference to reality ... no ? ... ok, then, I will kill myself.” No, fool, leave the contrived isolation of the academic asylum, come out into the sunshine, and enjoy reality. It won’t kill you.

Isolation from reality, as declared with pride, eliminates the possibility of serving the industry that they allege to serve. Theoreticians in the motor vehicle or construction or computer hardware industries are not isolated from the industry they serve. But here, we are guaranteed both: isolation with pride and zero service.

In contrast to theory divorced from reality, as proclaimed, which is fantasy (the entire faith of the cult), the only theory that has value is that which relates to reality: abstraction based on reality. As evidenced in the progress of the SQL platforms.

So it is no surprise at all that in reality, all progress in this science has been made by real scientists, employees of the great DBMS platforms. In the Relational paradigm, that started with Dr E F Codd. That is why the cultists; the academics, hate him, that is why they endlessly demean him, that is why they market anti-relational filth as “relational”, that is why they sabotage the industry. It is the normal activity of the insane. That is why heavy marketing is implemented, both in the mountain of “academic papers”, and the media, it is pure propaganda for the faithful, and a vehicle to attract; seduce, the ignorant: “Here, you don’t have to think about what an online shared database is, just perceive it as a snapshot, frozen in a moment in time”, “here, write ‘transactions’ as if you are the only user changing that snapshot.”

----

The problem has been made worse recently, because of the rise of freeware, which is analogous to the rise of revolution ideology. Of course it is false, and that is why it has to be heavily marketed. It doesn’t work at all, but the marketing says it is better than the SQL Platforms, that a non-architected non-SQL non-platform based on non-reality, is somehow better. You get what you pay for.

It is so bad, that even normal people are infected with the blind faith of “MVCC”, blind to the fact that it is not “MVCC” but MV-non-CC+2PL. They parrot “‘MVCC’ as superior”, it has become a check-box issue for the SQL Platform providers.

No problem. As SQL Platform engineers, they are grounded in reality, they are not stupid academics. As engineers, they are quite aware that MV-non-CC+2PL is not a substitute for their mature Concurrency Control. Further, we do not want the customers with their high concurrency requirements (95% of the banking & finance vertical) to be affected by catering for new customers who think the moon is made of cream cheese.

>>>>

This is a repeat of the same insanity; the same disease; the mental disorder, manifesting in a different technical issue, that we were pressed with in 1999-2000, and we dealt with it with the same structure of solution, so let me mention that first. Back then, the academics and Oracle were pushing the insane notion that contention (which is real, and has a perfect real-world solution) was caused by the fact that lock granularity in Sybase; MS; etc, was pages, not rows, and that if only, if only, if only, Sybase would implement row level locking (which the fantasy world had, but which did not provide high concurrency), then it would be attractive to customers whose programs were written by idiots. Because the academics said so. Note that these systems were physicalised Record Filing Systems (Record Ids), not logical Relational databases (composite keys).

Up to then, the page was the unit of measure for everything, and locking was ALLPAGES, meaning index and data. Ok, major new release, first to provide improved support for Record Id based files (they are files, not tables), and second to support row level locking. No change to the architecture, just an additional DataStructure, and new lock types. In addition to DATAROW level locks, which locked only the rows, and used new latches for index and data pages, Sybase provided an in-between lock level: DATAPAGE, which locked pages, and used latches for index pages. You want fast Record Ids and row locking, no problem.
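For illustration only: the lock scheme described above is selected per table, not server-wide. A minimal sketch in Sybase ASE syntax (the table and column names are hypothetical, not from any real system):

```sql
-- Default, and what high-end systems stay on: index and data pages locked.
create table Customer (
    CustomerCode  char(6)      not null,
    Name          varchar(30)  not null,
    constraint UC_PK primary key (CustomerCode)
) lock allpages

-- The in-between level: data pages locked, index pages latched.
alter table CustomerHistory lock datapages

-- Row level: only the rows locked, index and data pages latched.
-- Convert only the tables that genuinely need it, never the whole database.
alter table OrderQueue lock datarows
```

The point in the surrounding text stands: converting a table changes lock granularity only, and a lock-bound application simply becomes lock-bound at a finer grain, with many more locks.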

The result ? As predicted by scientists before the fact. No change whatsoever to high-end databases, guys like me have never used row locks, we are still on 2kb Pages. I pleaded with people who had sub-standard systems, DO NOT CONVERT ALL YOUR TABLES, CONVERT ONLY THOSE NECESSARY. Whoever did convert, found that, gee whiz, instead of being lock-bound at the page level, they were now lock-bound at the row level. And oopsey doopsey, instead of number-of-locks in the 100K range, they were now in the 1M range (10-20M range for those who converted all their tables).

At least they STFU and stopped blaming the server. They would not listen to decades of advice:
- all contention is written in the application & database,
- it is not in the server (that deals with and resolves contention)
- sure, the contention is visible as locks,
- so the stupid think that the problem is in the locks (no responsibility, transference, again)
- no, the problem is in the cause of the locks, the app code, the transaction design
Separately:
- the problem is not lock granularity
- the problem is lock duration
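
The lock duration point can be made concrete with the OLTP Transaction Template discussed at the top of this thread: validate first under READ COMMITTED (shared locks released per statement), then open the transaction, re-check while acquiring the update lock, do the work, and commit immediately. A minimal T-SQL sketch (the table, column and procedure names are hypothetical; the real template code is in the linked Visit_Add_tr DA.sql):

```sql
create procedure Visit_Add_tr
    @CustomerCode char(6),
    @VisitDtm     datetime
as
begin
    -- Validate block: read only; shared locks are not held past each statement.
    if not exists (select 1 from Customer where CustomerCode = @CustomerCode)
    begin
        raiserror ('Customer does not exist', 16, 1)
        return 1
    end

    -- Transaction block: locks are held only from here until commit.
    begin transaction

    -- Check again, this time acquiring an update lock on the row,
    -- in case another user changed the data since the validate block.
    if not exists (
        select 1 from Customer with (updlock)
        where CustomerCode = @CustomerCode
        )
    begin
        rollback transaction
        raiserror ('Customer was deleted by another user', 16, 1)
        return 1
    end

    insert into Visit (CustomerCode, VisitDtm)
        values (@CustomerCode, @VisitDtm)

    commit transaction
    return 0
end
```

Lock duration here is the span from `begin transaction` to `commit transaction` and nothing more; no lock of any kind is held while the user or the validate block is thinking.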

No change whatsoever to high-end systems, for guys like me. We don’t need row locks to fix a problem we did not have. We don’t use physical Record Ids, because everything in the database is logical. Physicalising the logical /Relational Model/, as academics have done for FIFTY YEARS, is pure sabotage, pure evil. But hey, they need it to justify their “theory”, their shell-game to “find” the keys.

Hopefully you will understand the relevance of this thread, the relevance of an OLTP Standard, that is designed to minimise contention. Note gravely, it is not in the server, it is in the database and app code.

<<<<

Second innings. Same insanity; same stupidity; different technical issue.

Sybase ADDITIONALLY supports multiple versions of offline pages or rows which schizophrenics say they want.

For normal humans: 1PL only
For the insane: 1PL + snapshot isolation
__ (not the reversal of order, as in MV-non-CC + 2PL)

Without the smallest change to the architecture, with only the additional resources to cater for the special-needs children, which are fully configurable (so that only the errant Dbs and connections are affected), Sybase can guarantee correct handling (do not ask) of multiple versions. And instead of the fraudulent label, let’s call it what it is: SNAPSHOT ISOLATION.

You wanna create a private offline snapshot of a row ?
No problem, have one.
Meanwhile, the entire server and the rest of the users on the database execute with 1PL only.
Dig for gold in the nasal cavity.
You wanna nuther private hardcopy of the online row that changes constantly ?
Here, have a snapshot.
Meanwhile, the entire server and the rest of the users on the database execute with 1PL only.
Dig for gold elsewhere.
Oh, now you wanna commit, do you ?
Sure, let me resolve those offline snapshots against the online database for you, using the unchanged concurrency control and resolution manager.
Sorry, you are three days late.

It literally materialises the fantasies.

Normal boys and girls unaffected. Special needs children handled without telling them they are retarded, with new error messages that apply only to the insane.

Functionally, DB2, MS do the same. MS, as usual, has the MS bloat.
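
As a concrete illustration of the “only the errant Dbs and connections are affected” point, in MS SQL the versioning capability is switched on per database, and each errant connection must then ask for it per session; everyone else on the server continues under ordinary locking. A sketch (the database, table and values are hypothetical):

```sql
-- Pay the versioning cost only in the database that asks for it;
-- every other database on the server continues with locking alone.
alter database SpecialNeedsDb set allow_snapshot_isolation on
go

-- The errant connection opts in per session; no one else is affected.
set transaction isolation level snapshot
begin transaction

-- Reads come from the private snapshot, frozen at transaction start.
select Balance from Account where AccountNo = 42

-- If another transaction changed the row since this snapshot began,
-- the write is refused (update conflict, error 3960):
-- "Sorry, you are three days late."
update Account set Balance = Balance - 100 where AccountNo = 42

commit transaction
```

Note the honest label in the syntax: SNAPSHOT ISOLATION, not “MVCC”.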

----

That is perfect for an SQL Platform provider, no argument from me.

But that is horrible for a system supplier, such as I. Why, when everything works ? Because it handles the symptoms, but not the cause, and because knuckle-dragging developers are validated in their knuckle-dragging, and never get past it. They will continue to hold the fantasies, continue to write filth, at great additional cost, and never know how the real world of databases works. And it does not change their path to suicide.

I am responsible. So I give the education, destroy their stupid religion, and offer them conversion therapy, and a chance for life.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 15, 2021, 6:14:13 AM
As a result of the exchange with Dan, I have updated the Transaction Sanity doc, to be generic SQL (eg. readily applicable to MS SQL, which is the most common), and removed the SG specific notes. Of course, that excludes the freeware and Oracle, which are not SQL-compliant.

To be fair, it must be mentioned that MySQL is the least SQL-non-compliant, and it does not suffer the gamut of locking problems (such as being wide open to deadlocks, and app code interfering with the internal locks). I have not examined it recently, but it certainly appears that it is possible to maintain ACID compliance in MySQL, whereas PissGres and Oracle do not provide ACID in their herd of programs (“server”), and thus it is not possible to make the app code ACID compliant.

__ https://www.softwaregems.com.au/Documents/Article/Database/Transaction/Transaction%20Sanity.pdf

Nicola

Are there any facts, in my Transaction Sanity doc, p1 and p2, that you dispute ?

Cheers
Derek

Nicola

Aug 16, 2021, 5:29:04 AM
On 2021-08-15, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
> Are there any facts, in my Transaction Sanity doc, p1 and p2, that you
> dispute ?

How could I dispute facts?

Thank you for that document, and for the whole discussion around ACID
and transactions. It has made me understand how you use those terms (as
opposed to their "textbook" use), which in turn, has helped me clarify
many aspects of your critique of MVCC.

I'll gladly follow your developments on Dan's data model, if there are
any, and think how they (fail to) apply to MVCC-based systems.

Nicola

Derek Ignatius Asirvadem

Aug 17, 2021, 2:31:35 AM
Nicola

> On Monday, 16 August 2021 at 19:29:04 UTC+10, Nicola wrote:
> > On 2021-08-15, Derek Ignatius Asirvadem wrote:
> >
> > Are there any facts, in my Transaction Sanity doc, p1 and p2, that you
> > dispute ?
>
> How could I dispute facts?

Good to hear.

> Thank you for that document, and for the whole discussion around ACID
> and transactions.

You are welcome.

> It has made me understand how you use those terms (as
> opposed to their "textbook" use),

As you can see, I do not have private terms or re-definitions of terms, I use the definitions that have been established in the industry, in chronological order (SQL, genuine SQL Platforms of forty years). I trust you appreciate that technical terms are established in order to facilitate correct communication between people. When the academics use terms to mean different things than the established terms:
a. it is an act of dishonesty,
b. it guarantees that the communication will be laboured and confused (which is a substantial component in the volume of our discussion)
c. it is the base they use to promote some primitive and non-compliant feature as the feature (eg. anti-SQL as “SQL”; anti-ACID as “ACID”; etc)
d. their students are trained in such falsity, which breaks down when they are employed in industry

> which in turn, has helped me clarify
> many aspects of your critique of MVCC.

Great.

Perhaps some day, you too, will admit the utter falsity of MV-non-CC, the fantasy, and thus the insanity of perceiving an online shared active database as something, anything, that it is not, let alone a snapshot frozen in time, which is impossible. And the terrible consequences thereof.

> I'll gladly follow your developments on Dan's data model, if there are
> any,

Assuming you mean the data model relevant to this thread, the goal being Optimistic Locking, and showing the progress of the OLTP Transaction Template code, the GitHub Gist is here. This contains the DDL and obsolete stored proc code:
__ https://gist.github.com/DanielLoth/76d241515655e76cadddef6ed2d373aa

My submission, the latest version of code against that db, is here. It is now generic code for SQL-compliant Platforms (Sybase; DB2; MS; and Informix):
__ https://www.softwaregems.com.au/Documents/Article/Database/Transaction/Visit_Add_tr%20DA.sql

Because it is for Dan’s db, it does have some nuances: if it is at all confusing, ie. you don’t clearly see the difference re Template vs nuances, let me know and I will post code for the db in the Transaction Sanity doc.

If you mean the data modelling exercise for the Shooter db, please post to that effect. I don’t know how far Dan is interested in taking it. That is in another GitHub “Pull Request”.

> and think how they (fail to) apply to MVCC-based systems.

After having reached the status that you have, per your post, it would be interesting if you comment on that. The MV-non-CC plus the manual locking (you said “I love 2PL”) that you have to do vs the Lock Managers in real SQL platforms, that wouldn’t dream of allowing the user to interfere with locks.

Cheers
Derek