
MVCC, Advantages & Disadvantages


Derek Ignatius Asirvadem
Mar 14, 2020, 4:02:27 AM
to Nicola

----------------------------------------------------------------------
First, I will give the whole context, for anyone interested.
----------------------------------------------------------------------

> On Saturday, 14 March 2020 00:11:23 UTC+11, Nicola wrote:
> > On 2020-03-08, Derek Ignatius Asirvadem wrote:
> > > On Friday, 6 March 2020 10:19:08 UTC+11, Nicola wrote:
>
> > > Research on physical optimization is still hot nowadays.
> >
> > There is no genuine scientific research being done in this area, and
> > there has not been since 1984-1987. All genuine research that is done
> > (including that 1984-1987 content,) is done by commercial enterprises
> > such as Sybase and IBM. Codd was a product of precisely such
> > a venture, and he is the only one outside the platform suppliers who
> > gave theory that was based on practice.
>
> That sounds like the adage: "things were better in (my) old times". That
> a good deal of fundamental research was done in the '70s and '80s,
> especially by "big names", is a fact, no question. But today's computers
> are not '80s computers, and there is a lot of research that is done and
> that has to be done yet. Take a look at VLDB or at
> https://15721.courses.cs.cmu.edu/spring2020/schedule.html.

It appears that that proves my case more than it does yours !!!

>>> CMU 15-721
This course is a comprehensive study of the internals of modern database management systems. It will cover the core concepts and fundamentals of the components that are used in both high-performance transaction processing systems (OLTP) and large-scale analytical systems (OLAP). The class will stress both efficiency and correctness of the implementation of these ideas. All class projects will be in the context of a real in-memory, multi-core database system. The course is appropriate for graduate students in software systems and for advanced undergraduates with dirty systems programming skills.
<<<

That is how to teach young minds how to be stupid. Exclude actual modern database management systems such as Sybase; DB2; MSSQL, and include only pieces of pig poop such as PusGrossNONsql. Stonebraker is right there at the top, proving again, that academics love academics and have no clue about the world outside academia. Invent a wheel from scratch, in isolation from the real world. Then spend thirty years getting it to work. It is based on the elevation of personal power and capability, in an ignorant state, instead of real education, which [latter] would eliminate the thirty years. The very definition of Modernist "education".

> > The freaks that come up with "physical optimisation" are re-inventing
> > the wheel
>
> Sure, that happens all the time. But from time to time some new very
> good ideas pop up. That's how research works.

Name one. In this field. Please.
(There are genuine advances made by genuine research in other hard sciences. Eg. biology has proved Darwin's theory false [separate to the fact that there has never been any proof], using Darwin's own measure of proof.)

The point I was making is, that all, a full 100%, of the progress in database science comes from the engineers of the large commercial SQL platforms, same as in the 1960's and 1970's, when absolutely all the progress in *DBMS and database technology came from them. Codd is included in that, because he was employed by IBM and given a commissioned project. Never from an academic or theoretician.

What academia does, what they have produced in FIFTY Years (since the /RM/), using their Modernist "science", is a big fat zero.

Not one single paper that articulates the /RM/.
Not one single paper that progresses the /RM/.
About 120 papers that propagandise RFS [RM/T] as "relational".
We call that sabotage and regression, you guys call it "science".

> > in their isolated ivory towers, purposefully ignorant that
> > (a) the wheel was invented thousands of years ago, and (b) perfected
> > for particular purposes.
>
> "For particular purposes" is crucial. The wheel may be significantly
> un-perfected for other purposes, unforeseen decades ago, but relevant
> nowadays.

Absolutely.

-----------------------------------------------
Now for the context for this thread
-----------------------------------------------

> > They are doing "research" about what we
> > have had since 1987, and "inventing" it. That is what happens when
> > clueless "theoreticians" who are addicted to pig poop find out that
> > their "theories" do not work in the practical world, and then take
> > THIRTY YEARS to come up with a new fix-it "theory".
> >
> > MVCC is a great example of such monstrosities. Beloved of the
> > "academics" who, all 10,000 of them spread across the planet, have no
> > clue that they are breaking scientific principles, and who "invent"
> > [copy, badly] what Oracle had that does not work.
>
> MVCC has its drawbacks and some advantages, especially re concurrency
> and performance, compared to 2PC. Systems that implement MVCC sometimes
> do also provide explicit lock mechanisms for the situations where
> a 2PC-like behaviour is required. The consensus seems to be that such
> applications are a minority and for the rest MVCC is adequate. If you
> want to discuss this topic further, please move it to a new thread.

Ok. New Thread.

To the best of my knowledge (44 years in databases; Software Engineer for Cincom; the last 34 years as an independent boutique consultancy specialising in Relational Databases, OLTP and OLAP from the one database, writing Version 2 systems that are two orders of magnitude faster than Version 1, including hundreds of benchmarks for OLTP against the competition):

1. MVCC is based on a false premise.
Precisely, it breaks one of the fundamental principles of an online database. Anything that derives from that is going to be nothing less than filth.

2. MVCC has no advantages
(because we are using a comparative word) over the alternative which is Two Phased Locking (2PL). I will grant that academics are clueless about 2PL, so they make grandiose declarations about MVCC being better than what they do not know. Much like a kid who says that masturbation is better than sex, while never having actually experienced sex.

Advantages or disadvantages have to be specified in scientific terms. That means
- resources required for establishment
- resources required during execution
--- meaning the volume of locks/pages that have to be processed in order to resolve contention (or to maintain the fantasy of the private view of the database), and the resolution thereof
- has to include the Transaction Log file (or equivalent)
- has to include locks or equivalent pages
- in a relevant scenario, meaning OLTP under pressure
--- if you want a specific one, say 200 tables; 5TB database total; 10 tables 100GB each to be the focus; 1000 online connections; 100 contentious [competing] for INSERTS on those 10 tables

It is particularly stupid in design (and everything that follows) because, in addition to [1], it was designed decades AFTER 2PL, which we have had since 1960 (IBM 360/CICS/TCP), and which has been implemented in every commercial RDBMS platform ever since.

FYI, Oracle uses MVCC only, but it is not SQL; not Relational [poor support]; and certainly not a platform in the sense of a Server.
http://www.softwaregems.com.au/Documents/Article/Oracle%20Circus/Oracle%20vs%20Sybase.pdf

Pig poop, often called freeware; vapourware; noware; open scourge; open sewage, such as PusGrossNONsql has exactly the same non-architecture, and MVCC only. You get what you pay for. 10,000 "developers" spread across the planet, "working together" [har dee har har], inventing the wheel from scratch, in purposeful ignorance of wheels perfected for this purpose, that we have had since 1960 (IBM CICS TCP) and 1984 (Sybase).

The only reason MVCC even exists is because academics, in their vociferous ignorance, love what other academics invent, in pathological denial of reality (commercial engineers working for RDBMS suppliers, 2PL). Heavily promoted by the Stonebraker crowd (PusGrrrNONsql is the bastard son of Ingrrrres, which did not work either), and the Date; Darwen; Fagin; et al Gulag. One has to be mentally enslaved into the academic mindset, to even contemplate it without ROTFLMAO.

Second, they love things that do not work, because it allows the possibility of it working in some decade in the future. Note that the colloquial definition of insanity is doing the same thing, over and over again, expecting a different result each time.

3. MVCC has only disadvantages,
compared with Sybase/2PL or DB2/LUW (slightly different implementations of 2PL), and they are great disadvantages.

That is my starting position, and of course, I can support any item with evidence. Since you made the claim, I will let you enumerate the advantages and disadvantages first, and I will reply.

I will admit that while I am an expert in Sybase/2PL (expert in configuration & performance overall), I am not as expert in MVCC: I have only a cursory knowledge gained (a) from the literature [many intermissions to throw up], and (b) a working knowledge from the many benchmarks against competing "products" that used MVCC [many discussions while waiting for theirs to finish, in the many hours after mine finished].

Cheers
Derek


Derek Ignatius Asirvadem
Mar 14, 2020, 4:39:04 AM
> On Saturday, 14 March 2020 19:02:27 UTC+11, Derek Ignatius Asirvadem wrote:
>
> > On Saturday, 14 March 2020 00:11:23 UTC+11, Nicola wrote:
> >
> > MVCC has its drawbacks and some advantages, especially re concurrency
> > and performance, compared to 2PC.

> 2. MVCC has no advantages
> (because we are using a comparative word) over the alternative which is Two Phased Locking (2PL). I will grant that academics are clueless about 2PL,

Two Phased Commit (2PC) is a totally different method and implementation. Where the Transaction (in a single client of course) has to modify resources that are resident in more than one SERVER, as a single Transaction, a single Logical Unit of Work. 2PC is really a protocol or method, for the synchronisation across multiple servers that is required to obtain that result. The app has full control; it has to make the calls.

Whereas 2PL (and MVCC hopefully) is completely internal to the single SQL server, and relevant to the resolution of resource contention for multiple active Transactions.
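To make the contrast concrete, here is a minimal, hypothetical sketch of the 2PC coordination protocol described above: a coordinator asks every participating server to PREPARE, and only if all vote yes does it issue COMMIT to each. The class and method names (`Participant`, `prepare`, `two_phase_commit`) are invented for illustration, not any product's API.

```python
# Hypothetical sketch of Two-Phased Commit (2PC) across multiple servers:
# Phase 1 collects PREPARE votes; Phase 2 commits everywhere only if all
# voted yes, otherwise rolls back everywhere.

class Participant:
    def __init__(self, name, will_prepare=True):
        self.name = name
        self.will_prepare = will_prepare
        self.state = "active"

    def prepare(self):
        # Phase 1: the server persists its part of the work and votes.
        self.state = "prepared" if self.will_prepare else "aborted"
        return self.will_prepare

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "aborted"

def two_phase_commit(participants):
    # Phase 1: every server must vote yes.
    if all(p.prepare() for p in participants):
        # Phase 2: unanimous yes, so the decision is COMMIT.
        for p in participants:
            p.commit()
        return "committed"
    # Any "no" vote aborts the whole distributed transaction.
    for p in participants:
        p.rollback()
    return "aborted"
```

The point of the sketch is only that the coordination lives *between* servers; the locking inside each server is a separate matter entirely.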

Two Phased Locking is not a very good name, but it is the name given to it by academics decades after the fact of its invention. Before that, during those decades, we just called it THE Lock Manager. There was only one way to resolve resource contention in those decades. (In 1981, when I was at Cincom, and intimately familiar with their locking strategies, I wrote a complete ISAM filing system for the Wang 2200 MVP, with full locking, such that it allowed OLTP Transactions. Neither Cincom nor IBM nor I called it 2PL.)

The "two phases" are internal to the server, and rarely understood except by capable performance tuners. They do not need exposure for this discussion.
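For the curious, the two phases can nonetheless be sketched in a few lines: a growing phase in which a transaction only acquires locks, and a shrinking phase (entered here at commit, the strict variant) in which it only releases them. All names are hypothetical, not any platform's lock manager.

```python
# Illustrative sketch of the two phases that give Two-Phased Locking its
# name. Acquiring any lock after the first release violates the 2PL rule,
# so the sketch raises if that is attempted.

class TwoPhaseTransaction:
    def __init__(self):
        self.held = set()
        self.shrinking = False

    def lock(self, resource):
        if self.shrinking:
            # The growing phase is over; no further acquisitions allowed.
            raise RuntimeError("growing phase is over")
        self.held.add(resource)

    def commit(self):
        # Strict 2PL: release everything at COMMIT, which starts and ends
        # the shrinking phase in one step.
        self.shrinking = True
        released, self.held = self.held, set()
        return released
```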

Also not to be confused with Optimistic Locking, a demand of OLTP Standards, which is completely within the control of the DBA/Modeller (facilities must be placed in the created database) and the app developers (they must use the Standard OLTP Transaction Template). Although this is not essential to the MVCC vs 2PL discussion, this may need to be exposed because one needs to understand the way OLTP Transactions have worked, unchanged, since 1960, before one can appreciate the efficiency of /any/ resource contention mechanism.
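The Optimistic Locking pattern mentioned above can be sketched as follows, under the usual assumption of an update stamp on each row: the app reads the row holding no lock, and the (short) transaction re-checks the stamp before updating. The column name `UpdatedStamp` and the exception name are invented for illustration.

```python
# Hedged sketch of Optimistic Locking per an OLTP Transaction template:
# the update succeeds only if the row is unchanged since it was read,
# i.e. the classic "WHERE UpdatedStamp = @stamp" check.

class StaleRow(Exception):
    pass

def optimistic_update(row, expected_stamp, new_values):
    # Inside the transaction: verify nobody changed the row since it was
    # read outside the transaction.
    if row["UpdatedStamp"] != expected_stamp:
        raise StaleRow("row changed since it was read; re-read and retry")
    row.update(new_values)
    row["UpdatedStamp"] += 1  # new stamp for the next reader
    return row
```

A caller that catches `StaleRow` simply re-reads the row and retries; no lock was held during user think time.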

And the lack of which (abject technical ignorance of ACID Transactions and the related requirements) is the motivation behind the academics' push for MVCC in the low end of the market. They have this insane doctrine (dogma?) that they don't know, and they don't need to know, because God (the machine that has not been invented yet) will fix everything. Some day.

Cheers
Derek

Derek Ignatius Asirvadem
Jul 24, 2020, 10:27:14 PM
I was recently asked a question about 2PL and the seeker wanted clarification, based on a reference to the ever-changing cesspool known as wikipedia. So I had to arm myself with an air sickness bag and read the damn thing.
____https://en.wikipedia.org/wiki/Two-phase_locking

God help us.

What a totally confused and self-contradictory entry. Without any reference to the real world methods actually provided and actually in use in RDBMS platforms, for over THIRTY YEARS. Ok, so the problem is that space is totally confused, and that is all that the "academics" and "theoreticians" that fill this space with pig poop know.

In order to maintain the scope of this thread and afford a fruitful discussion (whenever you are ready, Nicola), let me clarify that confusion.

1. <<Ordinary Locking>>
(not 2PL but commonly called 2PL) is required for OLTP (known by the imbeciles who are ignorant of OLTP as "low contention" or "high concurrency"), in any platform (HDBMS; NDBMS; RDBMS).

2. That means an <<Ordinary Transaction>>
in a single Server, resolving multiple concurrent Transactions. NOT a Distributed Transaction, that is distributed across multiple Servers.

3. Ordinary Transactions are <<ACID>> (since 1960).
That means the Server provides ACID, in order for the developer to code an ACID Transaction. (It is not possible to code an ACID Transaction on a non-ACID Server.)

4. A <<Distributed Transaction>>,
that is distributed across multiple Servers, requires (a) ACID in each Server, and (b) coordination between the multiple servers.

5. Because each Transaction in each Server is ACID, and Ordinary Transactions are closed with a COMMIT, the coordination protocol is called <<Two Phased Commit>>.

6. Two-Phased Commit does *NOT* require <<Two-Phased Locking>>. They are completely unrelated, but often confused because the idiots think that since the first two of three words are the same, oo oooo, the concept must be the same.

7. <<Two-Phased Locking>>
not Two-Phased Commit, is required for Version-based program suites (none of them are servers by any stretch of the imagination) that provide some SQL features and fail to comply with the SQL Standard, such as Oracle and all the freeware/vapourware/nowhere suites, in order to provide some semblance of concurrency control (some fraction of [1] ). As evidenced here [corrections are mine]:

> On Saturday, 14 March 2020 00:11:23 UTC+11, Nicola wrote:
>
> MVCC has its drawbacks and some advantages, especially re concurrency
> and performance, compared to 2PC [2PL]. Systems that implement MVCC sometimes
> do also provide explicit lock mechanisms for the situations where
> a 2PC [2PL]-like behaviour is required. The consensus seems to be that such
> applications are a minority and for the rest MVCC is adequate.

(I hasten not to digress, but this needs to be said. Consensus is not science, science is about facts, it does not need consensus. The consensus is false. The consensus is ignorant of the real world, where all applications require OLTP (Ordinary Locking). MVCC simply does not work. It does not work because
a. it denies the reality in the real world of databases,
b. it breaks the fundamental principle of a database, and
c. it obsesses about the non-reality in the "theoretical" world. They love their fantasies, the fantasies are very very important, given the abject ignorance of the real world.)

Therefore MVCC plus anything is also broken. 2PL is a massive add-on to the broken concept of MVCC, in a pitiful attempt to get the broken MVCC to work.

(There are no advantages to MVCC. I have asked Nicola to enumerate any claimed advantages, but there are none thus far, nothing for me to respond to.)

7. Repeating
<<Two-Phased Locking>>
not Two-Phased Commit, is required for Version-based program suites that are not SQL-compliant, such as Oracle and all the freeware/vapourware/nowhere suites, in a feeble attempt to provide some semblance of concurrency control (some fraction of [1] Ordinary Locking).

----

The precise issue that is well-known and well understood in the high-end of the market (Sybase has 95% market share in financial institutions, but only 5% in the overall market), as implemented for THIRTY YEARS, that the sow-suckers have not touched in THIRTY YEARS, which is by evidence pathological denial of the real world (in both terms of OLTP requirements, and actual platform delivery) for THIRTY YEARS, is this.
- OLTP demands ACID (we have had that since the 1960's)
- Transactions require ACID (we have had that in DBMS since the 1970's; RDBMS since 1984)
- Versioning (MVCC) and ACID Transactions are contradictory concepts by virtue of the principles (separate to the fact that ACID Transactions are real, and MVCC is unreal -- a fantasy, that requires massive resources to erect, and thus is fundamentally opposed to OLTP -- )
--- if you have MVCC you cannot have OLTP or ACID Transactions
--- if you have OLTP or ACID Transactions, you cannot have MVCC

The evidence of over thirty years in the real world is:
- theoretically canonised pig poop such as Oracle or PissGrisNONsql, which are MVCC-only, do not have ACID. And cannot have ACID as long as they have MVCC.
- high-end SQL platforms such as Sybase and DB2/LUW have Ordinary Locking; OLTP; and ACID Transactions, with no MVCC. Genuine performance, without the erection of monstrous MVCC fantasies, that do not work, or the second set of monstrosities of 2PL that is required to shore up the first set.

As evidenced, the obsessed cretins, in pathological denial of the reality that their precious academically concocted method does not work (Stonebraker was classified as an academic), and has not worked for over THIRTY YEARS, and in pathological denial of the methods that do work in the real world, and have worked for SIXTY YEARS, throw more "ressoich" at their mountain of pig poop, in the steadfast insistence that what does not work, what has not worked for THIRTY YEARS, will work, some day. It is a massive and ongoing fraud.

Maybe in another thirty years.

Meanwhile, back at the farm, educated people, who are not crippled by the pig poop that the "academics" and "theoreticians" who allege to serve this space shamelessly market as "theory", provide OLTP; ACID Transactions; using Ordinary Locking, with no problem whatsoever. For over thirty years.

Cheers
Derek

Derek Ignatius Asirvadem
Jul 25, 2020, 4:07:30 AM
Guys and dolls

This linked post is relevant to this thread, particularly at this point in the sequence. Please read.

____https://groups.google.com/d/msg/comp.databases.theory/umEPHEi5FA8/YuBGXzqiCQAJ

That whole thread is relevant to anyone following the Open Architecture discourse.

Cheers
Derek

Derek Ignatius Asirvadem
Jul 26, 2020, 9:53:30 PM
> > On Saturday, 14 March 2020 00:11:23 UTC+11, Nicola wrote:
>
> MVCC has its drawbacks and some advantages, especially re concurrency
> and performance, compared to 2PC [Ordinary Locking].

Still not enumerated, still no evidence for the claim.

In my previous post, I have clarified, that the relevant comparison is:
____Ordinary Locking vs MVCC

> Systems that implement MVCC sometimes
> do also provide explicit lock mechanisms for the situations where
> a 2PC-like behaviour is required.

Therefore, more accurately, because MVCC does not work and needs "2PC-like behaviour" [2PL] to not-work a little better, the comparison should be:
____Ordinary Locking vs MVCC + "2PC-like behaviour" [2PL]

The "academics"; "theoreticians"; "researchers"; "professors"; "lecturers", are all severely arithmetically challenged (not mathematically challenged, which they are, overall, but in this instance, arithmetically [addition; subtraction; etc] ).

Let's see if we can enlighten them a little bit.

ACID, OLTP & Ordinary Locking
- the principle is Serialisation
- maintains a single version of the Fact, a single version of each page
--- no additional versions, therefore no additional pages
--- no serialisation of additional versions, because there are no additional versions
- a simple locking strategy (please ask if you would like more detail)
--- which is performed by in-memory structures (hash tables; lock chains; etc)
- minimal OLTP: works fine with demented databases and RFS
--- full OLTP requires SQL code to be constructed to the OLTP Standard
- no conflict resolution, because conflicts have been prevented
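The single-version, in-memory locking structures listed above can be sketched as a tiny lock table, purely as an illustration (shared locks compatible with shared, exclusive compatible with nothing; all names invented):

```python
# Minimal sketch of single-version Ordinary Locking: one copy of each
# page, and an in-memory lock table resolving contention up front.

class LockTable:
    def __init__(self):
        self.locks = {}  # page -> (mode, set of owning transactions)

    def acquire(self, txn, page, mode):
        held = self.locks.get(page)
        if held is None:
            self.locks[page] = (mode, {txn})
            return True
        held_mode, owners = held
        if mode == "S" and held_mode == "S":
            owners.add(txn)  # shared readers coexist
            return True
        if owners == {txn}:
            # Sole owner may upgrade (or re-request) its lock.
            self.locks[page] = ("X" if "X" in (mode, held_mode) else "S", owners)
            return True
        return False  # conflict: the requester must wait

    def release_all(self, txn):
        # Strict release at COMMIT/ROLLBACK.
        for page in list(self.locks):
            mode, owners = self.locks[page]
            owners.discard(txn)
            if not owners:
                del self.locks[page]
```

Note there is exactly one copy of each page throughout; the only extra memory is the lock table itself.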

MVCC + 2PL -- [ACID, OLTP Not Possible]
- the principle is Isolation, not serialisation
--- which is stupid, because at the COMMIT TRAN, ooh oooh, the freaks have to serialise the entangled log jam of versions of pages
--- the deranged obsess about their subjective reality, which is:
--- a. in contradiction to the objective reality of the common database that they have their furry paws on, and
--- b. therefore they elevate the notion of ISOLATION (the [I] in ACID) to the level of deity, while
--- c. denying pathologically that all four components of ACID are integrated (cannot be isolated), and
--- d. denying pathologically that even in that isolated ISOLATION, the ISOLATION LEVEL required is SERIALISED. Which is guaranteed to bite them in the bum when they get to the COMMIT

- for every user that indicates an update intent (BEGIN TRAN), it maintains
--- a version of each page touched
--- a version of each page that a touched page is dependent upon
- at COMMIT time, it attempts to serialise the *Transaction* (not page), all the versions of pages touched by that Transaction, versus all the versions of pages that were either before or after the touched pages, which means a massive amount of work, on a massive data set, including resolution of conflicts
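The version bookkeeping described above can be sketched as follows, under the common first-committer-wins rule (a simplification; real implementations vary, and every name here is invented): each writer gets a private copy of every touched page, and conflict detection is deferred to COMMIT.

```python
# Hedged sketch of MVCC version bookkeeping: per-transaction page copies,
# with conflict resolution deferred to COMMIT (first-committer-wins).

class MVCCStore:
    def __init__(self):
        self.committed = {}  # page -> (value, commit timestamp)
        self.clock = 0

    def begin(self):
        self.clock += 1
        return {"start": self.clock, "versions": {}}  # private snapshot

    def write(self, txn, page, value):
        txn["versions"][page] = value  # one extra copy per writer, per page

    def commit(self, txn):
        # Abort if any touched page was committed by someone else after
        # this transaction began.
        for page in txn["versions"]:
            if page in self.committed and self.committed[page][1] > txn["start"]:
                return "aborted"
        self.clock += 1
        for page, value in txn["versions"].items():
            self.committed[page] = (value, self.clock)
        return "committed"
```

Even this toy shows the shape of the cost: the `versions` dict multiplies per concurrent writer, and the COMMIT-time check is work that a lock-on-first-touch scheme never performs.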

Arithmetic
Memory space for pages
- Single version of a page = 1
- Multiple versions of a page = n, where n is every active user, and at least greater than 1
Memory space for dependency pages
- Single version of a page = 0
- Multiple versions of a page = m, where m is at most every active user, and at least greater than 1

____ 1 < n
____ 1 < m

Memory space for Locking (OL)
- minimal: locks and lock chains = p
Memory space for Non-Locking (MVCC)
- zero, but no tickee, no workee = 0
Memory space for Non-Locking (MVCC) to not work a little better (2PL)
- minimal: locks and lock chains = q

____ p > 0
____ p <= q + 0
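The page-count side of the arithmetic above, spelled out with invented numbers purely to illustrate the inequalities (1 < n, 1 < m): a single-version system holds one copy of each hot page; a versioning system holds one per active writer, plus dependency copies.

```python
# Toy arithmetic for the memory-space comparison. All three inputs are
# assumptions chosen only to make the inequality visible.

hot_pages = 10          # pages under contention (assumption)
active_writers = 100    # concurrent transactions touching them (assumption)
deps_per_txn = 3        # dependency pages copied per writer (assumption)

single_version_pages = hot_pages * 1
mvcc_pages = hot_pages * active_writers + active_writers * deps_per_txn

print(single_version_pages)  # 10
print(mvcc_pages)            # 1300
```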

----

The arithmetic that is required to figure out that if they implemented Ordinary Locking in the first place, they would not have to support multiple false versions of the single version of the truth in the second place, and "2PC-like behaviour" [2PL] in the third place, is too much for the hackademicks and feareticians to contemplate. I will leave that for another post.

----

> The consensus seems to be that such
> applications are a minority and for the rest MVCC is adequate.

Fantasy, not fact.

That, as usual, allows the freaks to maintain their divorce from the real world, their pathological isolation in their ivory towers overflowing with pig poop, from what we have had since the 1960's, the reality that most systems are contentious on a small percentage of data. Which is again, evidenced by the masses of problems in MVCC systems. Which is what motivated the freaks to *ADD* "2PC-like behaviour" to MVCC.

This is precisely the reason that even a small data cache elevates the overall performance of a multi-user database. Noting of course that the freeware/vapourware/noware and Oracle have no cache.

Cheers
Derek

Nicola
Jul 29, 2020, 11:42:49 AM
On 2020-07-27, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> > On Saturday, 14 March 2020 00:11:23 UTC+11, Nicola wrote:
>>
>> MVCC has its drawbacks and some advantages, especially re concurrency
>> and performance, compared to 2PC [Ordinary Locking].
>
> Still not enumerated, still no evidence for the claim.

It seems to me that you believe that systems based on MVCC cannot
provide true serializability (I qualify "serializability" with "true" to
make it clear that it means something different from the SERIALIZABLE
keyword in SQL—the latter having an ambiguous definition).

First, let us agree on what "true serializability" means: concurrent
execution C of transactions T1,...,Tn is (truly) *serializable* iff
execution C is equivalent to a serial execution S of T1,...,Tn, where
"equivalent" means that (*)

(a) C and S contain the same operations (reads and writes);
(b) conflicting operations (r/w or w/w on the same object) appear in
the same order in C as in S.

(*) not an entirely accurate definition, but sufficient for our
purposes.
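Definition (a)/(b) above is checkable mechanically: build the precedence graph of conflicting operations (r/w or w/w on the same object across different transactions) and test it for cycles; acyclic means the schedule is conflict-serializable. A sketch only, with operations encoded as (transaction, operation, object) triples:

```python
# Conflict-serializability check via the precedence graph, matching the
# definition above: edge t1 -> t2 when an operation of t1 conflicts with
# a later operation of t2. Acyclic graph <=> serializable schedule.

def serializable(schedule):
    edges = set()
    for i, (t1, op1, x) in enumerate(schedule):
        for t2, op2, y in schedule[i + 1:]:
            if t1 != t2 and x == y and "w" in (op1, op2):
                edges.add((t1, t2))

    def cyclic(node, path):
        # Depth-first search for a cycle through `node`.
        if node in path:
            return True
        return any(cyclic(b, path | {node}) for a, b in edges if a == node)

    return not any(cyclic(t, set()) for t, _, _ in schedule)
```

For example, the classic non-serializable interleaving (T1 reads x, T2 writes x, T2 reads y, T1 writes y) produces edges in both directions and is rejected.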

Traditionally, the way to achieve true serializability is 2PL plus
predicate locking. Elegant, relatively simple, and still relevant!
I love 2PL, just in case you think I want to dismiss it as obsolete or
something.

Also traditionally, MVCC does *not* provide true serializability, but
only (snapshot) isolation. Your critique of such "classical" MVCC is
entirely justified.

Since about a decade (Cahill, "Serializable Isolation for Snapshot
Databases", 2009), though, it is known that MVCC can be modified so as
to provide true serializability, while still keeping its main
properties, e.g., that writers do not block readers and readers do not
block writers.

So, both 2PL and (modern) MVCC can be used to achieve true
serializability (that is, by definition, correctness). Compared to 2PL,
MVCC does not block a transaction in the presence of concurrent r/w or
w/w, so in this sense it is "better" than 2PL, because it increases
concurrency. On the other hand, under MVCC a concurrent schedule is
equivalent to *the* serial schedule given by the ordering of the
timestamps of the transaction. So, there are interleavings of operations
that would be accepted under 2PL (=> they would produce a correct
result), but they are rejected under MVCC (i.e., they cause one or more
transactions to rollback). So, MVCC also restricts the possible
interleaving of operations, thus *reducing* concurrency.
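A toy illustration of that trade-off, under the assumption of a first-committer-wins rule: in the interleaving where T1 reads x, then T2 writes x and commits, then T1 tries to write x, 2PL would simply have blocked T2 on T1's lock (both finish, no rollback), whereas timestamp-ordered MVCC must roll T1 back. All names here are invented.

```python
# Sketch of first-committer-wins applied to a commit order. `commits` is
# a list of (txn, object) in the order transactions reach COMMIT;
# `start_ts` maps each txn to its start timestamp. A commit fails if the
# object was already committed by another txn after this txn began.

def mvcc_commits(commits, start_ts):
    committed_at = {}  # object -> commit timestamp
    clock = max(start_ts.values())
    outcome = {}
    for txn, obj in commits:
        clock += 1
        if obj in committed_at and committed_at[obj] > start_ts[txn]:
            outcome[txn] = "rollback"  # 2PL would have blocked instead
        else:
            committed_at[obj] = clock
            outcome[txn] = "commit"
    return outcome
```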

IMO, 2PL vs MVCC has no clear winner. In my previous comments I reported
what I have found in recent literature, i.e., that most recently
developed systems adopt MVCC rather then 2PL, and that the reason seems
that MVCC tends to perform better under certain (most?) loads.

Regarding my comments about user-defined locks in MVCC systems, that
refers to the (rare) situation in which, for some reason (usually,
speed), the developer decides to use weaker isolation than true
serializability and take the responsibility of managing concurrency.
When running transactions with ISOLATION LEVEL SERIALIZABLE in
PostgreSQL, you get true serializability and there is no need for locks.

You may have Oracle in mind when you think about an MVCC implementation.
That falls into "classical" MVCC, so that does not guarantee
correctness.

Nicola

Derek Ignatius Asirvadem
Jul 31, 2020, 7:27:27 AM
> On Thursday, 30 July 2020 01:42:49 UTC+10, Nicola wrote:

Thanks for your response.

Before I respond in detail, which I will in a day or so, it appears to me that there is still a wide gap, between what I am trying to explain, and what you have understood from that. Which I take to be my responsibility to close.

Note that this gap is somewhat of a subset of the larger gap, that between what the real world (high-end platform suppliers; their theory-plus-practice engineers; and theory-plus-practice practitioners such as I) has actually had for forty years, and what the "theoreticians" and "academics" who, by the evidence, are totally ignorant of the real world, and steadfastly maintain that divorce, but theorise about their fantasy world. And the products that ensue from that theoretical world, that they impose on the low- and middle-end of the real world: hilarious freeware that purports to be modern but in fact is stone age.

You are one of the very few who has shown some interest in closing that gap. Perhaps it would reduce the need for writing papers that solve problems that we do not have, or that were solved forty years ago.

You have responded to a couple of detail points, but it appears to me that you have not appreciated the full context of my posts.

I am saying the edges or boundaries of the gap are still not being understood for what they are, and may need sharp definition. At the real-world end, in hierarchic order, it is:
1. Shared Online Database Mindset (single version of the truth)
-- Online Transaction Processing Standard
2. ACID Transactions and a Transaction Mindset
3. Relational DBMS, Relational Data Modelling (IDEF1X)
4. Open Architecture Standard
5. Objects are pedestrian, simple, used to construct GUIs, and nothing but.

At the "theoretician" end, it is:
1. "I live in a fantasy world divorced from reality. I don't know what Online or Shared means. In my fantasy world, there is only the unholy trinity of Me; Myself; and I. No one else to be concerned about. The snapshot I have taken of the database is permanent. The database will not change while I fiddle around and figure out what to do with my precious snapshot."

That means multiple versions of falsity, because the database is in a state of change.

2. "I Dropped ACID." Famous words of the chief sow sucker.
"I don't know what ACID is, and I don't care to find out. Anything the platform suppliers do to supply ACID is fine with me. As far as theory is concerned, a transaction is:
__ any and all SQL commands between BEGIN TRAN and COMMIT TRAN"

That theory is one which is 100% theory-not-practice, because it cannot be placed in a practical situation, which is limited by the Laws of physics, such as applies to transaction size; concurrent users; etc. And therefore the required theory-for-practice is completely absent. For forty years (sixty years if taken from 1960 CICS/TCP).

3. 1960's Record Filing System, fraudulently credited to Codd (RM/T), placed in an SQL container for convenience (ease of access; backup & recovery; possible security; etc).
No data modelling.
No Standards.
"Because I did not invent them."

4. An OO, and later an OO/ORM Mindset.
"I have just one tool in my toolbox, an Object. Genuflect at the door, and bring me your oldest son for a holocaust, we can eat afterwards. The entire world can be perceived in terms of My Glorious Object. I can do Anything with My Precious Object. There is nothing that cannot be done via Some Wonderful Object.

"Oh crap, I just need it to be persistent, in order to enforce my picture-perfect Object that contains my ancient permanent snapshot (see, it is still in my hands, it has not changed) of the records, onto the real world that rendered the snapshot obsolete ages ago.

"At the end, we always glorify our great Saint of Porcine Recta, Saint Stonebraker himself, and spell out P E R S I S T E N T using pieces of pig poop, and V E R S I O N using chicken bones. And last, we glorify him for justifying and validating the cult of the unholy trinity."

"Oh crap, the OOs were not correct. No problem, I can change it in a month or three. Oh crap, that means the Glorious ORM that has never worked needs to be completely changed. No problem, I invented Refactoring."

"Never mind that the structure of the universe (real world) has not changed, that a Relational database that implements the facts about the universe does not need to change. I don't know anything about the real world or the science that implements it in a computer system. I am obsessed with My Glorious Object and My Persistent Snapshot."

Schizophrenia. Certifiable, pathological schizophrenia.

Plus delusions. Imposed on the young, so that the delusions are now an Ideology, The Ideology for the "theoreticians" and "academics" in this field.

----

Please see if you can appreciate that gap, and perhaps respond to some of it.

----

Clarification request.

> (I qualify "serializability" with "true" to
> make it clear that it means something different from the SERIALIZABLE
> keyword in SQL—the latter having an ambiguous definition).

News to me. Forty years of platform suppliers supplying SQL-compliant platforms, thirty six years of me using SQL with no problem whatsoever, which means SQL has a definition for SERIALIZABLE that is not ambiguous. And now this, from academia. Please explain.

----

I will respond to the detail in your post in a day or two.

Cheers
Derek

Derek Ignatius Asirvadem

Jul 31, 2020, 7:59:39 AM
> On Friday, 31 July 2020 21:27:27 UTC+10, Derek Ignatius Asirvadem wrote:

> Please see if you can appreciate that gap, and perhaps respond to some of it.

You may wish to familiarise yourself with a thread re Transactions. As usual, the "theoreticians" were so scared of admitting that they were clueless about the subject, despite my efforts to assist them, that the thread did not progress very far, and they were able to reinforce their miserable ignorance and divorce from reality. Nevertheless, my posts therein do have relevant material, which neither you nor I should have to repeat in this thread.

The Theoretical Void in the Relational Database Space - Transactions
__ https://groups.google.com/forum/?oldui=1#!topic/comp.databases.theory/qqmnhu036FQ

Cheers
Derek

Derek Ignatius Asirvadem

Jul 31, 2020, 8:21:32 AM
> On Friday, 31 July 2020 21:27:27 UTC+10, Derek Ignatius Asirvadem wrote:
> > On Thursday, 30 July 2020 01:42:49 UTC+10, Nicola wrote:

> I am saying the edges or boundaries of the gap is still not being understood for what it is, and may need sharp definition. At the real world end, in hierarchic order, it is:
> 1. Shared Online Database Mindset (single version of the truth)
> -- Online Transaction Processing Standard
> 2. ACID Transactions and a Transaction Mindset
> ...

> At the "theoretician" end, it is:
> ...

I am saying, if we do not agree (have the same definitions for the fact) that OLTP/ACID Transactions are the absolute fundament, the backstop, and that the OO/ORM/MVCC crowd are against that, we are not going to get very far discussing subordinate issues such as "2PL" vs MVCC, or whether it works or not.

Even the name is a filthy lie. Sure, it has something to do with Multiple Versions, but it supplies nothing at all about Concurrency, in fact it denies Concurrency, and does not effect Control of any kind, on the Versions or the Concurrency. Thus five vendors have five completely different implementations of it. The honest name is Multiple Version Support, and should be branded as mutually exclusive with OLTP (not "2PL").

A scientific name would be Offline Version Support.

Cheers
Derek

Nicola

Jul 31, 2020, 5:32:04 PM
I'll come back to the rest of your posts at a later time. For now:

On 2020-07-31, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
> Clarification request.
>
>> (I qualify "serializability" with "true" to
>> make it clear that it means something different from the SERIALIZABLE
>> keyword in SQL—the latter having an ambiguous definition).
>
> News to me. Forty years of platform suppliers supplying SQL-compliant
> platforms, thirty six years of me using SQL with no problem
> whatsoever, which means SQL has a definition for SERIALIZABLE that is
> not ambiguous. And now this, from academia. Please explain.

What do you mean by "SQL-compliant"? ANSI standard? AFAIR, ISO/ANSI
standard does not formally define the term "serializable" (a formal
definition can be found in any decent database textbook—see also my
previous post) (*). And ANSI's semantics for SQL's statement ISOLATION
LEVEL SERIALIZABLE is not satisfactory, because it is stated in terms of
what anomalies (or "phenomena") should be prevented (dirty reads,
non-repeatable reads, and phantom reads). It is not satisfactory
because, e.g., snapshot isolation does prevent those anomalies, yet it
does not provide serialization. Even worse, not even those phenomena are
well defined in the standard, or in the "classical" literature for that
matter—a more rigorous taxonomy for transaction anomalies has been
proposed only after 1995 (see below) and especially in 2000 by Adya,
Liskov and O'Neil.

"A Critique of ANSI SQL Isolation Levels" (1995) provides probably the
first insight into the problems of the ANSI standard's definitions, and
it does the job much better than I could do in a few words. Despite that
critique (and subsequent work, see cit. above), the standard has never
been changed.

That's why Oracle can claim that it provides an ANSI-compliant
SERIALIZABLE isolation level, and yet it does not guarantee "true
serializability" (because, AFAICT, it provides only snapshot isolation).


Nicola

(*) The standard just says that "A serializable execution is defined to
be an execution of the operations of concurrently executing
SQL-transactions that produces the same effect as some serial execution
of those same SQL-transactions", but what "having the same effect" means
is left unspecified.
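To make the snapshot-isolation point concrete: the classic counterexample is write skew. The following is a toy Python model of snapshot isolation with first-committer-wins; it is an illustrative sketch under stated assumptions, not any vendor's implementation, and all names (doctors on call, the invariant) are invented for the example:

```python
# Toy model of snapshot isolation: each transaction reads from a private
# snapshot taken at begin(), and commit aborts only on write-write conflicts
# (first-committer-wins).
class SnapshotDB:
    def __init__(self, data):
        self.data = dict(data)
        self.version = 0
        self.commit_version = {k: 0 for k in data}

    def begin(self):
        return Txn(self)

class Txn:
    def __init__(self, db):
        self.db = db
        self.start = db.version
        self.snapshot = dict(db.data)   # private, stable view of the data
        self.writes = {}

    def read(self, key):
        return self.writes.get(key, self.snapshot[key])

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        for key in self.writes:         # first-committer-wins check
            if self.db.commit_version[key] > self.start:
                raise RuntimeError("write-write conflict: abort")
        self.db.version += 1
        for key, value in self.writes.items():
            self.db.data[key] = value
            self.db.commit_version[key] = self.db.version

# Application invariant: at least one doctor must remain on call.
db = SnapshotDB({"alice_on_call": 1, "bob_on_call": 1})
t1, t2 = db.begin(), db.begin()
if t1.read("alice_on_call") + t1.read("bob_on_call") >= 2:
    t1.write("alice_on_call", 0)        # Alice goes off call
if t2.read("alice_on_call") + t2.read("bob_on_call") >= 2:
    t2.write("bob_on_call", 0)          # Bob goes off call
t1.commit()
t2.commit()                             # disjoint write sets: both commit
# No dirty read, non-repeatable read or phantom occurred, yet the
# outcome (nobody on call) matches no serial order: write skew.
assert db.data["alice_on_call"] + db.data["bob_on_call"] == 0
```

Under any serial execution, the second transaction would see the first one's write and leave one doctor on call; under snapshot isolation both commit, because their write sets are disjoint.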

Derek Ignatius Asirvadem

Jul 31, 2020, 11:55:00 PM
> On Saturday, 1 August 2020 07:32:04 UTC+10, Nicola wrote:
>
> > On 2020-07-31, Derek Ignatius Asirvadem wrote:
>
> > Clarification request.
> >
> >> (I qualify "serializability" with "true" to
> >> make it clear that it means something different from the SERIALIZABLE
> >> keyword in SQL—the latter having an ambiguous definition).
> >
> > News to me. Forty years of platform suppliers supplying SQL-compliant
> > platforms, thirty six years of me using SQL with no problem
> > whatsoever, which means SQL has a definition for SERIALIZABLE that is
> > not ambiguous. And now this, from academia. Please explain.
>
> What do you mean by "SQL-compliant"? ANSI standard?

Compliant with the version of SQL that was published and available at the particular time that was referenced.
- before 1989, it was the DEC publication which was sourced from IBM
- at 1989, it was ANSI SQL 89
- ANSI SQL 92 ... etc
- after ISO, then ANSI/ISO SQL ...

There is an implication that SQL compliance indicates /RM/ compliance. But traditionally (in communications prior to academics waking up in the 2000's, and mixing up things that had not been mixed), we treated them as discretely separate items, requiring separate qualification.

> AFAIR, ISO/ANSI
> standard does not formally define the term "serializable"

Why should it ?

(Academics are famous for faulting something in that manner. It is disgustingly dishonest. Eg. Date; Darwen; the TTM Gulag; et al, have been squealing for FIFTY YEARS that "In the RM, Codd did not define how to design a database ..." or "Codd did not define how to pick one's nose ..."; etc. Anyone who has a second-year university education knows that a paper has to define whatever is new in that paper, only; it is not incumbent upon a paper to define all terms used that were defined or extant prior to the paper. Ie. the RM needs to define only the RM, and any new NFs it introduces (which it did); it does not need to define Normalisation or database design or how to suck a sow's rectum. Thus the Date & Darwen crowd place themselves outside honest academia and inside a sow stall; "academia".)

I trust you are not doing that by intent, but you are doing that, unconsciously, from habit of practice in academia.

ACID, and the definitions for [A][C][I][D] and the definitions for OLTP and the definitions for Transaction all existed for twenty years before 1980. And thus OLTP Standard, and Open Architecture Standard also. After SQL, those last two were modulated, to place them in the SQL context, no definition or redefinition necessary (only code changes from eg. IMS or TOTAL code to SQL code).

> (a formal
> definition can be found in any decent database textbook—see also my
> previous post) (*).

No.

If the textbook was written by academics, if it was written after 1980, it is pure filth. Why ?
1. See paragraph above. The definitions pre-existed and do not need an academic "definition", twenty years after the existence of platforms that provided the defined service, twenty years after thousands of corporations implemented it in production systems.

2. Any such "definition" twenty or more years after the fact of its existence in the real world, means one thing only:
-- that the author is clueless about the extant definition
-- that the author seeks to negate the extant definition, in order to redefine the term, such that he can erect his new theory that relies on his redefined term. Ie. to suit their own nefarious purpose. Which is usually anti-ACID; anti-OLTP (refer my previous post); anti-Transaction. So that they can erect their fantasy and have all of academia engage in mental masturbation, while remaining totally ignorant of twenty years (now forty years) of reality in the real world.

This discussion is a perfect example of that.

It is like my 9 year old niece defining /today/ how to place men on the moon, without reference to the moon landing on 20 Jul 1969. When she figures out that /rocket/ means something that might have existed before she was born, she redefines /rocket/ as fuelled by gluten-free pasta, to which water is added, which affords instant and aggressive propulsion, enough to break away from the gravity of earth. When it does not work, she proclaims loudly that /rockets/ do not work, not that HER redefined rocket does not work. All of that can be done only in pathological ignorance of the hundreds of thousands of successful rocket launches.

So, yes, the academic definition of ACID; OLTP; Transaction; etc, published twenty or more years after the fact of extant definitions and extant platforms that provided it, are faulty, they cannot be relied upon, they have a dishonest intent by definition.

Just look at the Date & Darwen freaky redefinition of 1NF. It is ongoing. It serves to split the Atom, the very opposite of Atomic. It has the purpose of redefining 1NF so that it retro-fits their insane concept of "temporal". Meanwhile, back at the farm, Codd's 1NF stands. And back at the pig sty, all of academia masturbate over the ramifications of the redefined "1NF".

Ok, my offending para needs to be restated.

>>>>
> > News to me. Forty years of SQL platform suppliers supplying ACID-compliant
> > SQL platforms, thirty six years of me using SQL to implement ACID with no problem
> > whatsoever, which means the said SQL platforms implement ACID as defined at the time, which was not ambiguous. Eg. SERIALIZABLE is
> > not ambiguous.
<<<<

And then twenty years after the fact, academics have written self-serving books (please, they are not worthy of being classified as textbooks, but granted, they are marketed and used as such) which falsely claim to define concepts that were already defined, and introduce ambiguities that did not exist in the original definition, that did not, and still do not exist in the real world. And then they spend the **NEXT** twenty years arguing, about the self-created ambiguities that do not exist in reality, blissfully avoiding the rather obvious need for an academic: to achieve resolution, to close that alleged ambiguity. No, no, no, they need to preserve the problem, in order to have something to argue about, without resolution, ad infinitum.

The existential definition of an academic.

You can't make this stuff up.

> And ANSI's semantics for SQL's statement ISOLATION
> LEVEL SERIALIZABLE is not satisfactory,

They are quite satisfactory, because the SQL standard does not seek to /redefine/ established pre-existing terms. Scores of DBMS platforms and freeware suites have been implemented. The commercial ones comply with ACID, and have complied with ACID for forty years, prior to this or that version of SQL. The non-commercial ones (ie. created by academics, for academics, and cannot be used in the real world) neither implement ACID, nor SQL, but instead engage in non-stop argument-without-resolution about their redefined definitions of ACID and SQL.

(From this point on, we are onto a different subject, without repeating the above at every point that is relevant.)

> because it is stated in terms of
> what anomalies (or "phenomena") should be prevented (dirty reads,
> non-repeatable reads, and phantom reads). It is not satisfactory
> because, e.g., snapshot isolation does prevent those anomalies, yet it
> does not provide serialization.

1. You have the logic backwards, arguing from what you have in your hands, backwards to what the definition of what you have in your hands should be. Try using the original definition, without reference to what you have in your hands, and then make sure that what you have in your hands complies with that definition. There will then be nothing to argue about.

2. You are making claims (eg. "not satisfactory", "snapshot isolation does prevent those anomalies") which have no evidence; on the contrary, the mountain of evidence in the real world (where poor souls who are seduced by the academic propaganda have tried to implement systems using such idiotic program suites) is that it does no such thing.

-- "not satisfactory". Since the ACID definition pre-existed the academic redefinition, and the definition as evidenced (forty years of reality, as explained above) is quite adequate, it is the redefinition by academics twenty or thirty or forty years after the fact, that is the problem. Go and fix that.

-- Agreed, it is not a good idea to define something in terms of what it is not, but because these are simple things (great big fat problems to academics, who revel in problems), that are easy to understand, which are in the pre-defined context that we are defining Transaction behaviour and not intimacy with a pig, identifying what it prevents, is quite adequate. As evidenced in forty years of ACID compliant SQL platforms in existence, and hundreds of thousands of implementations that are not affected by the academic notion of "ambiguous" or "not satisfactory".

-- Eg. I can define chastity in terms of what is prevented (any thought or act that involves the sex organs for a purpose other than its created purpose [procreation] ), rather than what it is, which is a long explanation, and only relevant to a person who seeks sanctification, not relevant to an ordinary person who just wants to be good.

-- Snapshot isolation does **NOT** prevent those anomalies, **AND** snapshot isolation introduces new anomalies that do not exist in other Transaction processing models.

-- Further, as I have stated from the outset (this and related threads on c.d.t), snapshot isolation **CANNOT** provide ACID compliance, because it is by definition anti-ACID; anti-OLTP; anti-shared database (breaks the fundamental principle of a single version of the truth; introduces the schizophrenic notion that a private version of the truth, that is hysterically obsolete, is somehow valid).

-- That is to say, the academic notions of Transaction; of ACID; of database, are all false, all redefined twenty to forty years after the fact of their existence and original definition, to suit their academic purpose, of confusing themselves and providing problems that they can argue about without resolution, ad infinitum.

----

Here, try this. I can define both Transaction, and ACID, in one sentence each, using technical English. The definitions will suffice as instruction to developers who code the apps that I design (have sufficed for thirty six years). Admittedly, I do not hire academics, I hire practitioners who demonstrate logical and scientific ability. The definitions are not "unsatisfactory" or "ambiguous".

Sure, for a developer who is not familiar with Standards, a certain amount of education in the relevant Standards is required. Not the basics, but how to perform the basics within the Standard.

Such education will not help academics, because they pretend that there is no Standard, because they did not invent it. Fifty years of pretending that the Relational Model does not exist, that RFS is "The Relational Model"; forty years of pretending that graphical data modelling does not exist; forty years of pretending Transactions and ACID are not defined; twenty years of arguing-without-resolution over their self-created redefinitions that contradict the original definitions.

You can't make this stuff up.

You are one of the very few academics who are trying to understand the real world. Now try defining Transactions and ACID. Without reference to the academic redefinitions, which are already proved to be pig poop, "ambiguous", "not satisfactory". Do the scientific thing, go back to first principles, refer to the practitioner definition from 1960 (or 1965 or 1980, but nothing after 1980, nothing written by an academic, no “literature”).

> Even worse, not even those phenomena are
> well defined in the standard,

(They do not need to be, as explained above. They were defined long before SQL. The SQL Standard needs only to make reference to it.)

> or in the "classical" literature for that
> matter

The literature in this database space in general, and the Transaction space in particular, is 100% pig poop. Divorced from reality.

The academics in this space need to give up, based on their evidenced schizophrenia (denial of reality) and absence of logical or scientific ability, and go back to what they have a natural aptitude for: pig farming.

> —a more rigorous taxonomy for transaction anomalies has been
> proposed only after 1995 (see below) and especially in 2000 by Adya,
> Liskov and Neil.

If those papers do not make reference to the then extant Sybase or DB2 (and later MSSQL) manuals, in which all such phenomena (**AND MORE**) are defined, along with how those products handle Transactions and ACID, in order to provide the Transaction and ACID compliance that they had been providing for fifteen years (1995) and twenty years (2000), those papers are by definition pig poop, masturbatory fantasy.

I will tell you right now, only an idiot or a dishonest person needs a taxonomy of phenomena, of "anomalies". An undamaged human being needs only directives. Using my previous analogy, a person who seeks to be good needs only the directive, the fornicator; the dishonest; the criminally-minded, need the long list of actions that are prohibited, and (wait for it) a taxonomy of the prohibitions. Which will never be complete or fixed, because they will keep creating special cases where fornication is "not fornication". The Jew rather famously declares that a man (who mysteriously has his member erect and ready) falling off a roof onto a woman (who mysteriously has her legs open in reception) does not harm her virginity. We can't help people who are like that, not even God can.

> > **AND MORE**

Eg. the Sybase manual enumerates a few more phenomena in that category. Eg. at ISOLATION LEVEL REPEATABLE READ, it prevents **ALL** phantoms, not only the enumerated ones. Eg. the Lost Update (as distinct from a phantom Update) is a famous problem, well known since 1960, and prevented by the OLTP Standard; it is not in the Sybase or DB2 manuals, and it does not need to be, because it is outside the scope of the platform, and inside the scope of the developer's intellect.
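The Lost Update mentioned above is easy to exhibit with a toy interleaving. The following Python sketch (illustrative only, not any DBMS's code) shows the anomaly, and how holding an ordinary lock across the whole read-modify-write prevents it:

```python
import threading

# A shared balance; two clients each try to deposit 100.
balance = {"amount": 1000}

# Interleaving without locks: both clients read before either writes.
a = balance["amount"]              # client A reads 1000
b = balance["amount"]              # client B reads 1000
balance["amount"] = a + 100        # A writes 1100
balance["amount"] = b + 100        # B writes 1100 -- A's deposit is lost
assert balance["amount"] == 1100   # should have been 1200: Lost Update

# With a lock held across the whole read-modify-write, the second
# client cannot read until the first has written: no update is lost.
balance2 = {"amount": 1000}
lock = threading.Lock()

def deposit(amount):
    with lock:                     # held for the full read-modify-write
        balance2["amount"] = balance2["amount"] + amount

deposit(100)
deposit(100)
assert balance2["amount"] == 1200
```

The point of the example is where the lock is held: around the entire read-modify-write sequence, not around the individual read or write, which is why this is the developer's responsibility rather than the platform's.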

> "A Critique of ANSI SQL Isolation Levels" (1995) provides probably the
> first insight into the problems of the ANSI standard's definitions, and
> it does the job much better than I could do in a few words.

Not the first insight.

Not the final knowledge (that does not change), or the last insight.

Humans had the first insight in 1960, and perfected all relevant definitions and required procedures in IBM/CICS/TCP (~1965). I personally saw those documents and started implementing those methods on minicomputers, in 1976, on C/ISAM filing systems, complete with Ordinary Locking (2PL is not required). By 1979 I was working with various pre-Relational DBMS, and did so on those systems.

In 1981, when I moved to Cincom, and supported TOTAL (NDBMS) on minicomputers for North America, I wrote up the Standard for ACID and Transactions on ONE PAGE and supplied all customers with it. 75% of the customers knew what it was, and why it was needed, they appreciated a corporate directive. The remaining 25% needed a bit of schooling, which I gave either over the phone, or in UserGroup seminars. After which they too appreciated the corporate directive.

The point is, I was not alone. Everyone in computers with half a brain was acutely aware of the need. The thought of writing a system that was not Transactional simply did not exist (it is a notion for academics only). The problem space; the sandbox; the scope that called for resolution, was to provide not //just// Transactions but //ACID// Transactions, and to provide it in concert with the DBMS. Which was, and still is, a combination of client-side; server-side; and database constructs. It is only the deranged theoreticians that remain ignorant of Transactions, and expect it to be magically serviced by server-side-only constructs.

Engineers at other DBMS providers were doing similar things. All of us DBMS providers wanted to declare, "we provide the CICS/TPC [the Gold Standard] execution for Transactions and ACID on our xDBMS", and here is the method, how to code it in xDBMS flavour (a simple document).

But of course, the entire colony of academics in this space are ignorant of reality, they have their hilarious communications amongst themselves, about their fantasies that substitute for reality, and that never get resolved. As evidenced here and now in this thread. SIXTY YEARS after CICS/TCP, FORTY YEARS after RDBMS platforms provided absolute Transactions and ACID compliance, they are still arguing about their redefinitions; their anomalies; their taxonomy of anomalies, and in their asylums, nothing is resolved.

> Despite that
> critique (and subsequent work, see cit. above), the standard has never
> been changed.

No need. You are looking for the definition in the wrong place.

> That's why Oracle can claim that it provides an ANSI-compliant
> SERIALIZABLE isolation level, and yet it does not guarantee "true
> serializability" (because, AFAICT, it provides only snapshot isolation).

(Oracle does not provide ACID, therefore it does not provide "ANSI SQL-compliant" anything regarding Transactions, the claim is totally false, and has been false for forty years. Again, the backstop is ACID, not SQL. But I do not think that that is the point we are discussing.)

No, that is not the reason why. You are caught up in the minor differences of the redefinitions, and thus have concluded something that is based on those redefinitions.

The bottom line is, as detailed above, snapshot isolation is anti-ACID; anti-Transaction; anti-OLTP. Thus anything that is built on that schizophrenic premise, such as a suite of programs that purport to supply SQL, cannot provide Transactions, or ACID, or OLTP.

> > A scientific name would be Offline Version Support.

Ok, then, Offline Snapshot Support.

Which would indicate to those who do not eat pig poop that it cannot support Transactions; or ACID; or Online anything.

In any case, don’t worry about what Oracle can or cannot do. Worry about the Glorious Pusgres, the bastard child of the illegitimate child of Saint Stonebraker, that makes many claims that are relevant to this thread, none of which are true. As evidenced (a) in their own manuals, if not (b) the many systems that report horrendous problems. I am familiar with both. PooGres provides some fraction of snapshot isolation (ie. when it works); it provides no OLTP; no ACID; and therefore no ACID Transactions. Last time I looked, only Functions were “transactional”; in and of itself, that means it is nowhere near the SQL compliance requirement.

PigPoopGres provides whatever functionality it purports to provide only by using:
1. snapshot isolation
AND
2. regular Locking
AND
3. predicate lock manager
AND
4. explicit application-level locking in the SQL code (something that is totally automatic in commercial platforms, where they will laugh at such a requirement).

The point is, Offline Snapshot Support (or the fraudulent MVCC, or the “snapshot isolation” that you espouse) simply does not work in a shared database context. According to you, as you have indicated in another post, oooh oooh, it could work when ordinary locking (“2PL” for you) is added, bolted on. Academic promise, in hysterical contradiction of evidenced reality. In evidenced fact, two layers of bolted-on pig poop does not suffice. Only application-level explicit locking //on top of those two layers// works.

For those who use an Online DBMS, all that is automatic, none of that explicit locking code needs to be written in the app. I tell you, when you implement anything in non-commercial “SQL”, it is only a matter of time until the pain is excruciating enough, that you move to a commercial SQL platform, and when you do, you have to rewrite the entire app and all SQL code, because “sql” is not SQL, and all the manual constructs are automatic.

But we can’t do that. If we did, we would no longer genuflect to the great ass of computing, Saint Stonebraker, and we would have nothing unresolved to argue about.

> (*) The standard just says that "A serializable execution is defined to
> be an execution of the operations of concurrently executing
> SQL-transactions that produces the same effect as some serial execution
> of those same SQL-transactions", but what "having the same effect" means
> is left unspecified.

Hideous, yes. Written by an academic, yes. Yet another academic redefinition, yes. Unresolved and unresolvable, yes.

If you would like to exit that galaxy of chaos, and enter the reality of the real world, refer to the original, very short, and fixed (unchanged for sixty (CICS) and forty (RDBMS) years) definition.

Cheers
Derek

Derek Ignatius Asirvadem

Aug 1, 2020, 12:36:50 AM
> On Saturday, 1 August 2020 13:55:00 UTC+10, Derek Ignatius Asirvadem wrote:
>
> Here, try this. I can define both Transaction, and ACID, in one sentence each, using technical English. The definitions will suffice as instruction to developers who code the apps that I design (have sufficed for thirty six years).
>
> You are one of the very few academics who are trying to understand the real world. Now try defining Transactions and ACID. Without reference to the academic redefinitions, which are already proved to be pig poop, "ambiguous", "not satisfactory". Do the scientific thing, go back to first principles, refer to the practitioner definition from 1960 (or 1965 or 1980, but nothing after 1980, nothing written by an academic, no “literature”).

Come to think of it, in case you take this thread to conclusion, in order to reduce typing for both you and me, the order required for the definitions are:
1. Define OLTP
2. Define Transaction, which must be in the context of [1]
3. Define ACID, which must be in the context of [1][2].

To be clear, I am not asking you to define OLTP Standard, which guarantees the highest level of OLTP; no deadlocks; etc, which is commercial; already defined; and fixed. But in the progress of [1][2][3], you may come to appreciate that *A* standard is required for OLTP, and that the elements are not in the server or suite of programs pretending to be a server, but dependent upon the server or suite behaving a certain way. Which is why commercial DBMS make a claim to be an "OLTP Server" or not.

Caveat
A definition that ignores the Laws of Physics, ie. that is not practicable, is not acceptable. Eg. the theoreticians' infamous definition that a transaction is anything and everything between a BEGIN TRAN and a COMMIT TRAN.

If you wish to use a real world example, by all means, do so. Feel free to use this, which I recall you may already have some familiarity with:
__https://www.softwaregems.com.au/Documents/Documentary%20Examples/Order%20DM%20Advanced.pdf

Cheers
Derek

Derek Ignatius Asirvadem

Aug 1, 2020, 10:51:06 AM
> On Thursday, 30 July 2020 01:42:49 UTC+10, Nicola wrote:
> > On 2020-07-27, Derek Ignatius Asirvadem wrote:
> > > > On Saturday, 14 March 2020 00:11:23 UTC+11, Nicola wrote:
> >>
> >> MVCC has its drawbacks and some advantages, especially re concurrency
> >> and performance, compared to 2PC [Ordinary Locking].
> >
> > Still not enumerated, still no evidence for the claim.
>
> It seems to me that you **believe**

1. I am a scientist. For 2,400 years up to 1911, science /scientia/ meant knowledge. That means certainty. Modern "science" includes all kinds of filth such as subjectivism; relativism; etc, that has changed science into speculation. Which needs consensus and citations. Real science does not need consensus or citations; the knowledge is there, one either accepts it or rejects it.

In important matters, such as database science, I have no opinions, no beliefs (that is religion, not science). In important matters, I have only knowledge, certainty. I would not dare to argue with academics or "academics" in a public forum on the basis of belief, I do so only on the basis of knowledge, certainty. Unlike your claims, I can support all my declarations with evidence.

I am not limited to materialism ("evidence-based"; "empirical"; etc). For 2,400 years, science included LOGIC. Modern "science" excludes logic. I insist on logic. This is sometimes known as the Four Laws of Thought. They are the Four Laws of Sanity. I reject anything that fails those Laws as insanity.

Eg. the climate change hysteria. 31,000 Modern "scientists" say one thing, 16,000 "scientists" on the other side say the opposite. This is insanity; lay people should not be left in a position where the "scientists" do not agree. I am responsible enough that when something wrong or false enters my science, I do something about it, and take the freaks to task, thus keeping my science clean, devoid of insanity.

Prior to 1911, the madness about climate "science" would not happen: the scientists on the correct side would destroy the other side, and we would be left with just the correct science. No insane quandary that drags on, unresolved, for decades. Billions spent, with no change whatsoever.

But we lost the war. Those who prosecuted the war have inserted insanity into science, into the minds of ordinary folk. The current epoch is the enslavement of the intellect. The education system does not educate, it only indoctrinates.

> that systems based on MVCC cannot
> provide true serializability

Therefore no. I am not here to discuss beliefs, and I have not shared my beliefs or religion. I am here to discuss facts, that can be confirmed, only.

On the basis of knowledge; certainty; science; logic, I am stating categorically that Offline Version based systems (MV, falsely called "MVCC" or "snapshot isolation") cannot provide OLTP, or Transactions, or ACID Transactions. And therefore yes, they cannot provide the serializable SQL Isolation Level as required. And therefore the pos is nowhere near SQL, or ANSI/ISO SQL compliant.

> (I qualify "serializability" with "true" to
> make it clear that it means something different from the SERIALIZABLE
> keyword in SQL—the latter having an ambiguous definition).
>
> First, let us agree on what "true serializability" means: concurrent
> execution C of transactions T1,...,Tn is (truly) *serializable* iff
> execution C is equivalent to a serial execution S of T1,...,Tn, where
> "equivalent" means that (*)
>
> (a) C and S contain the same operations (reads and writes);
> (b) conflicting operations (r/w or w/w on the same object) appear in
> the same order in C as in S.
>
> (*) not an entirely accurate definition, but sufficient for our
> purposes.
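For reference, the conflict-based definition quoted above can be checked mechanically: build a precedence graph with an edge T1 -> T2 whenever an operation of T1 precedes a conflicting operation of T2, and test for cycles. A minimal Python sketch (names are illustrative; a schedule is assumed to be a list of (transaction, operation, object) triples in execution order):

```python
def serializable(schedule):
    """schedule: list of (txn, op, obj), op in {'r', 'w'}, in execution order.
    Returns True iff the precedence graph is acyclic (conflict-serializable)."""
    # Edge t1 -> t2 for every pair of conflicting operations (same object,
    # different transactions, at least one write) where t1's op comes first.
    graph = {}
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and 'w' in (op1, op2):
                graph.setdefault(t1, set()).add(t2)

    # Depth-first search for a cycle (gray = on current path, black = done).
    def cyclic(node, on_path, done):
        if node in on_path:
            return True
        if node in done:
            return False
        on_path.add(node)
        for nxt in graph.get(node, ()):
            if cyclic(nxt, on_path, done):
                return True
        on_path.remove(node)
        done.add(node)
        return False

    done = set()
    return not any(cyclic(n, set(), done) for n in list(graph))

# r1(x) w2(x) w1(x): edges T1->T2 and T2->T1 form a cycle -- not serializable
assert not serializable([(1, 'r', 'x'), (2, 'w', 'x'), (1, 'w', 'x')])
# r1(x) w1(x) r2(x) w2(x): only T1->T2 -- serializable
assert serializable([(1, 'r', 'x'), (1, 'w', 'x'), (2, 'r', 'x'), (2, 'w', 'x')])
```

An acyclic precedence graph means the transactions can be topologically sorted into an equivalent serial order, which is exactly condition (b) in the quoted definition.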

I don't have the need for such jumping through hoops to narrow the scope of "serializable", to make it fit into something or not fit into something else. I have only the simple and straight-forward definition of what serializable is, what it has been since 1960. It has been implemented in genuine SQL-compliant platforms since 1980 (which implies the definition was perfectly good enough). I could not care less about what the piglets that pass for "academics" or "theoreticians" in this field redefine it to be.

Put another way, all my Transactions since 1984 (a few thousand, mostly designed, that developers write, according to my standards, but also a few hundred that I have written myself) are serializable. According to the "serializable" that exists in the SQL platform, since 1984. Running in production systems for decades, with zero contention, zero deadlocks.

But as an academic, you are free to create hoops to jump through, and then to expend energy jumping through them. The PusGres manuals are chock full of such nonsense, and they use it to justify not implementing ACID; not complying with the ANSI/ISO/SQL requirement ... while marketing that they do.

At least you are finding a point of resolution, whereas PoopGres leaves the whole thing unresolved.

> Traditionally, the way to achieve true serializability is 2PL plus
> predicate locking.

Nonsense. Where did you hear that ? Don't worry about the "literature", it is all false, and all in denial of sixty years of commercial systems (IBM/CICS/TCP, forty years for RDBMS). 2PL is only required for Two Phased Commit.

There is no need for predicate locking either, whatever that doorknob in hell is.

The evidenced facts of reality are this. OLTP; OLTP Transactions; and ACID Transactions have traditionally been provided by Ordinary Locking (not 2PL). It is a client-server architecture.

(Because there are not two phases in the client, and there are not two phases in the server. There is no "expansion" vs "contraction" phase.)
- IBM/CICS does not have that crap, it has only Ordinary Locking
- IBM/IMS does not have that crap, it has only Ordinary Locking
- Britton-Lee does not have that crap, it has only Ordinary Locking
- IBM/System R does not have that crap, it has only Ordinary Locking
- IBM/SQL/DL1 does not have that crap, it has only Ordinary Locking
- DEC/VMS/Rdb does not have that crap, it has only Ordinary Locking
- IBM/DB2 does not have that crap, it has only Ordinary Locking
- Sybase does not have that crap, it has only Ordinary Locking
- MSSQL does not have that crap, it has only Ordinary Locking

None of the above allow application-level locking, because that would interfere with the internal lock manager, and degrade performance to that which the developer can foresee. We do not allow such madness. Exception: yes, tables can be locked for maintenance purposes only.

(All the above additionally (over and above the SQL compliance requirement) provide 2PC, which is outside the server, and sure, for that, it uses a simple peer-to-peer protocol, not the "2PL" that "snapshot isolation" needs, along with Regular Locking plus predicate locking plus explicit application-level locking.)

- I don't know what Oracle has (ie. on top of the "MVCC"), but it does not work at all, and it is not SQL compliant, so we really should not count it. Further, while it is commercial in the sense that it is sold for a fee, and it has a substantial market share, it is certainly not commercial in the sense that it is an architected server (it is not) or that it provides confidence in operation (it does not).

- Academics love Ingres, but it never made it past five online users, and never made it to production. Don't you dare try to include that illegitimate child of Saint Stonefracker. Illegitimate because although he took credit for it and anti-designed it, he wrote only a few lines of code, and left other academics to finish it. Which never happened, they were forever finding problems which traditional systems did not have, and it never got past five users. If he were a responsible human, if he were legitimate, he would have finished it himself, and taken full credit for proving that the filth he excreted actually worked.

Mind you, that has not stopped academics from loving his bastard child of his illegitimate child. And that has not stopped academics from expecting miracles from something that never produced one.

I have refuted your claim. If you want further evidence, please ask. But actually, logically, legally, since you are making the claim, it is up to you to provide evidence for your claim, that a traditional platform does such idiotic things.

> Elegant, relatively simple, and still relevant!
> I love 2PL, just in case you think I want to dismiss it as obsolete or
> something.

By "2PL", I think you mean Ordinary Locking, in which case, I agree.

By "2PL", if you mean the filth that Poopgess has, which is only required because they found out, ooh oooh, "MVCC" does not work as the freaky academics promised, and so they bolted on a lock manager ("regular lock manager" ?), no, I disagree. It is a bucket of warm pig poop.

Later still, because they found out, ooh ooooh "MVCC" plus "2PL" does not work as the freaky academics promised, so they bolted on a predicate lock manager. Sure, that needs two phases. Probably 42 phases, because as evidenced, one can never trust an academic's promise.

The comparison is not what you say it is. The comparison is between:
Ordinary Lock Manager
__ (no versions stored)
__ Works
vs
"MVCC" plus "regular lock manager" plus "predicate lock manager"
__ (masses of target versions stored)
__ Does NOT work
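
For concreteness, the first column amounts to a shared/exclusive lock table. A deliberately tiny sketch (a real server queues conflicting requests instead of refusing them, and adds granularity, escalation and deadlock handling; names are invented for illustration):

```python
# Minimal shared/exclusive lock table of the "Ordinary Locking" kind:
# readers take shared ("S") locks, writers take exclusive ("X") locks,
# and a request that conflicts with a held lock is refused here (a real
# server would queue the requester until the lock clears).

class LockTable:
    def __init__(self):
        self.locks = {}   # object -> (mode, set of holding transactions)

    def acquire(self, txn, obj, mode):
        held = self.locks.get(obj)
        if held is None:
            self.locks[obj] = (mode, {txn})
            return True
        held_mode, holders = held
        if mode == "S" and held_mode == "S":
            holders.add(txn)          # shared locks are compatible
            return True
        if holders == {txn}:          # sole holder may upgrade S -> X
            self.locks[obj] = ("X" if "X" in (mode, held_mode) else "S", {txn})
            return True
        return False                  # conflict: caller must wait

    def release(self, txn, obj):
        mode, holders = self.locks[obj]
        holders.discard(txn)
        if not holders:
            del self.locks[obj]

lt = LockTable()
print(lt.acquire("T1", "row1", "S"))   # True: first reader
print(lt.acquire("T2", "row1", "S"))   # True: readers share
print(lt.acquire("T3", "row1", "X"))   # False: writer waits for readers
```

Note the cost profile: one small table entry per held lock, and nothing stored per version.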

> Also traditionally, MVCC does *not* provide true serializability, but
> only (snapshot) isolation. Your critique of such "classical" MVCC is
> entirely justified.

Thank you.

So that means, when they declared for twenty years that it did provide serializable, they lied about it. That is one reason you can never trust an academic. They might even mean well, but as evidenced, they are clueless about the subjects they write about.
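
The gap between snapshot isolation and serializable has a canonical demonstration, write skew, which can be simulated without any database at all. A minimal sketch in plain Python; the on-call rows and the "sum must stay at least 1" constraint are invented for illustration:

```python
# Write skew under snapshot isolation, simulated. Business rule: at least
# one of the two people must stay on call. Each "transaction" reads its
# private snapshot taken at start, checks the rule, then writes. The two
# writes touch different rows, so there is no write-write conflict and
# both commit, yet the rule ends up violated.

db = {"alice_on_call": 1, "bob_on_call": 1}

# Both transactions take their snapshots before either one writes.
snap1 = dict(db)
snap2 = dict(db)

# T1: Alice may go off call if someone else is still on call.
if snap1["alice_on_call"] + snap1["bob_on_call"] >= 2:
    db["alice_on_call"] = 0     # commits: no w/w conflict with T2

# T2: same reasoning for Bob, against its own (now stale) snapshot.
if snap2["alice_on_call"] + snap2["bob_on_call"] >= 2:
    db["bob_on_call"] = 0       # also commits

# Any serial order of T1 and T2 would leave one person on call;
# the snapshot interleaving leaves nobody on call.
print(db["alice_on_call"] + db["bob_on_call"])   # 0: rule broken
```

No serial execution of the two transactions can produce this state, which is exactly why "snapshot isolation" is not serializable.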

> Since about a decade (Cahill, "Serializable Isolation for Snapshot
> Databases", 2009), though, it is known that MVCC can be modified so as
> to provide true serializability,

When you say "known", you mean it in the Modern "science" sense, that an academic somewhere postulated that it is possible. What scientists call speculation. It is not what scientists call knowledge. No proof whatsoever. No trial in the real world. But hey, masses of citations.

At the bottom of the barrel of the real world, the latest version of PusGres provides a non-working suite of interacting programs that purport to provide some nonsense along those lines, in three massive layers. It is still only speculation, no one knows if it will work or not. PooGres slaves are requested to "beta test" the "software".

> while still keeping its main
> properties, e.g., that writers do not block readers and readers do not
> block writers.

Come on. Please. You know that that is a mantra, you know that it has no meaning at all, except to keep the faithful in a prayerful state.

You are an academic. Please keep stupid marketing horse manure like that out of our discussions.

Scratch the surface just a tiny bit, and what do you see ? Wait until a user is hung and what do you see ? Examine why a deadlock occurred when it should not have and what do you see ? First, you see evidence that the stupid mantra is a stupid mantra, and you can stop stupidly mantra-ing.

Second, you see a lock manager at work, making sure (badly) that writers **ARE** blocked by readers, and thus have to wait. Your "2PL" in action does that. The predicate lock manager does much worse, it has to synch entire lock system assets.

Third, you see a lock manager at work, making sure (badly) that readers **ARE** blocked by writers, and thus have to wait. Your "2PL" in action does that. The predicate lock manager does much worse, it has to synch entire lock system assets.

Fourth, the poor sods write application-level explicit locks (which traditional platforms absolutely do not permit), and even that does not work.

https://www.postgresql.org/docs/current/explicit-locking.html
https://www.postgresql.org/docs/12/transaction-iso.html
https://www.postgresql.org/docs/12/view-pg-locks.html
https://www.postgresql.org/docs/12/wal-async-commit.html
https://www.postgresql.org/docs/12/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT
https://wiki.postgresql.org/wiki/MVCC_violations
https://medium.com/@rbranson/10-things-i-hate-about-postgresql-20dbab8c2791
https://brandur.org/postgres-connections

I plain vomited on all of those pages, but I cacked myself on the "async commit". ROTFLMAO. What mickey mouse did for minnie when she did not put out.

In commercial systems, we don't have to write manual backups or connection pooling, it is in the platform. Just one command to configure it or to modify the config. Just one command to monitor the config.

The poverty-stricken ignorant academics do not even have monitoring, which we have had since the first version (1980).

Here is what an Ordinary Lock Manager looks like. Quick overview:
https://www.softwaregems.com.au/Documents/Sybase%20GEM%20Documents/Sybase%20Lock%20Manager.pdf

Sybase manual (chase the HTML pages as per your interest or download PDF):
http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc00938.1570/html/locking/title.htm

Here is what monitoring looks like:
(I have consolidated 24 x 1-hour reports produced by the server, into one tabular report for the day, and then split the day into the three relevant periods, using a simple awk script). You don't have to read the whole thing, just get an overview. The metrics should be self-explanatory. Obviously, give the Lock Manager section a bit more attention. I present all metrics in context, meaning that some lock metrics are located in other sections:
https://www.softwaregems.com.au/Documents/Article/Sysmon%20Processor/Sysmon%20Processor%20Eg%20CaptureGroup.pdf

Here is what a comparison looks like. This shows a day total, before & after one of my tuning operations, and a Delta:
https://www.softwaregems.com.au/Documents/Article/Sysmon%20Processor/Sysmon%20Processor%20Eg%20Date%20Delta.pdf

And of course, with a few keystrokes plus Excel or Numbers, you can create any kind of chart that you need, something like this. This one is diagnostic, to illustrate a problem:
https://www.softwaregems.com.au/Documents/Documentary%20Examples/sequoia%20091019%20Server%20Public.pdf

As the saying goes, you pay peanuts, you get monkeys. You pay zero, you get zero, forty to sixty years behind commercial platforms, no operational confidence, and with a huge maintenance load. Forget performance, don't even mention it. You get a snail making its way across the shell of a tortoise.

> So, both 2PL and (modern) MVCC can be used to achieve true
> serializability (that is, by definition, correctness).

The usual promises from academics.

> Compared to 2PL [Ordinary Locking],
> MVCC [plus regular lock manager plus predicate lock manager] does not block a transaction in the presence of concurrent r/w or
> w/w, so in this sense it is "better" than 2PL, because it increases
> concurrency.

Nonsense. Note my clarifications.
That is a nonsensical, unscientific statement. It demonstrates (a) that you have no understanding of actual concurrency and (b) concurrency issues (what promotes or hinders it); it is simply speculation. And of course, again, no supporting evidence or logic.

- whether a transaction blocks another, or not, is outside the control of the server, inside the control of the transactions (transaction design). There is nothing you can do about it at the server level, nothing can increase or decrease concurrency. Everything that increases or decreases concurrency is in the transactions. Therefore the fat chick on the left is neither "better" nor worse than the thin chick on the right. The comparison is meaningless, the claim on top of that is doubly so.

- the mantra is false. When you get "MVCC" to work a little by adding a regular lock manager, and get it to work a little more by adding a predicate lock manager, IT BLOCKS both readers vs writers and writers vs writers.

- "MVCC" without the bits that are required does absolutely nothing: it provides multiple private versions so that schizophrenics can believe their non-reality, that is all. It resolves nothing. I have already explained it does no concurrency control whatsoever. If and when one of those schizophrenics COMMITS, it then churns through millions of rows and tries to synch the versions, with no guarantee that it will work, or that data will not be corrupted, or that the transaction is Atomic.

> On the other hand, under MVCC a concurrent schedule is
> equivalent to *the* serial schedule given by the ordering of the
> timestamps of the transaction.

(I think it is given by the ordering of the timestamps of the versions, not transactions. But hey, I am not an expert in Offline Version Support. And there are several, quite different, implementation models.)
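
Whichever way a given engine words it, the visibility rule in version-based designs keys off timestamps roughly as follows. A simplified sketch, not any particular engine's actual rule; the version list is invented for illustration:

```python
# Simplified snapshot-visibility rule: a transaction that started at time
# start_ts sees, for each object, the newest version committed at or
# before start_ts. Versions are stored as (commit_ts, value) pairs, which
# is the per-row storage cost that plain locking does not carry.

def visible(versions, start_ts):
    candidates = [(ts, v) for ts, v in versions if ts <= start_ts]
    return max(candidates)[1] if candidates else None

row_versions = [(10, "v1"), (20, "v2"), (35, "v3")]
print(visible(row_versions, 25))   # "v2": the version at 35 is invisible
print(visible(row_versions, 5))    # None: nothing committed yet
```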

> So, there are interleavings of operations
> that would be accepted under 2PL (=> they would produce a correct
> result), but they are rejected under MVCC (i.e., they cause one or more
> transactions to rollback). So, MVCC also restricts the possible
> interleaving of operations, thus *reducing* concurrency.

Yes and no.

First, yes, and well said. It handles the sub-normal case, which is ignorant developers, no OLTP Standard, and thus pathetic transactions.

Second, no. In the normal case, where all transactions comply with the OLTP Standard, there is no "interleaving" of operations within a transaction. For contentious transactions only, one blocks at the earliest instant, and one waits the shortest wait time, because the blocking transaction is also super-fast, precisely because there is no "interleaving". Again, sorry, but you are demonstrating that academics have absolutely no clue about Transactions; transaction issues; Transaction design; concurrency (in the context of Transactions), or how to increase it; contention, or how to decrease it.

Since I have provided the link, this just happens to be a decent example, of what a capable tuner can do. Obviously, I have tuned the server, only, I have no control of the app code, and I would not want to change it: the app is a massive third party hospital system. The transactions are so-so, they have the minimal constructs and design, and thus work as expected, without dramas, but not the OLTP Standard, and thus there is a ceiling re tuning. Capable, meaning all that real world knowledge enumerated above. Just inspect the Delta for the "lock" metrics.
https://www.softwaregems.com.au/Documents/Article/Sysmon%20Processor/Sysmon%20Processor%20Eg%20Date%20Delta.pdf

Notice the transactions; the mix; and the I/O has not changed. I have tuned only the caches, and the config parameters (no additional resources or memory, the server is on an LPAR). Notice that the locking dropped by 90% and the CPU usage by 50%. And no surprise, all queries and transactions were much faster (no metrics for that in this report, or the local DBA did not bother to grab those metrics, they are outside the server). The point is, I could not have achieved that, no one could, unless they understood (a) transactions intimately, wrt what they are doing as generic transactions (I have no idea at all about the content ... some of them are large, handling medical images), and of course (b) Sybase server internals.

> IMO, 2PL [Ordinary Locking] vs MVCC has no clear winner.

Well, that is an unsubstantiated opinion. With no claim re resources or the basis upon which one wins or loses.

Whereas I have given a little arithmetic re actual resources used, to assist in understanding the issues, and to provide a reasonable overall (not narrow) comparison. I would ask you to understand the tiny cost of Ordinary Locking vs the massive cost of storing multiple versions, and administering them (regardless of the versioning model), and respond to that.

> In my previous comments I reported
> what I have found in recent literature, i.e., that most recently
> developed systems adopt MVCC rather then 2PL, and that the reason seems
> that MVCC tends to perform better under certain (most?) loads.

Pure academic drivel. Marketing. In contrast to the evidence in the real world. Wherein, no one cares about "recently developed" platforms, there is a massive foot print in the world for platforms that have forty years of maturity, that freeware is incapable of affecting.

The real world evidence is that they do NOT perform better, they perform abysmally.

By "systems", if you mean apps and not platforms, then they should not be preferring or adopting or specifying a locking model. The app has no business declaring such things.

The one and only reason that such imbeciles do make such preferential statements is that they are led by the nose, to follow the idiotic and evidently false marketing hype produced by academics and theoreticians. Every idiot wants, and genuflects before, the meaningless and evidently false Stonebraker mantra, without understanding that it is meaningless and false.

The marketing is so successful, that it has become a tick-box issue, meaning that the purchase of a SQL platform now has an additional tick-box for "MVCC", that a bean counter with no understanding of what each tick-box means wants to tick all the boxes before making a purchase. This is separate to the fact that the developer is clueless.

Thus Sybase and MSSQL have added "MVCC" to their servers. Note, added. That means they provide all the OLTP and ACID Transaction guarantees via Ordinary Locking (your "2PL"), wherein nothing has changed. And for folks that are stupid, who demand "MVCC", one has to configure a separate temporary database to hold the multiple versions that have no relevance at all to the operation of the server, plus a bunch of configuration parms to control the administration of snapshot isolation. Of course, a private data cache can be configured. In Memory Row Store is also provided if the cache is not fast enough and the idiots want even more speed for their private fixed snapshots of the ever-changing reality.

Basically, at the SQL coder level, there are two new optional parms to the SET TRAN ISOLATION LEVEL, additional to the usual four ANSI/ISO/SQL Isolation Levels
__ set tran isolation level transaction snapshot
__ set tran isolation level statement snapshot

Yes, the locking will block (because there actually is a lock conflict on a resource, regardless of whether the schizophrenic thinks his private fixed version from an ever-changing database is the only version on the entire planet). But rest assured, such blocking is completely invisible to the user of such pathetic "transactions", because the pathetic "transactions" are two or three orders of magnitude slower than the blocks getting cleared.

No change to the no-explicit-locks rule.

What this really means is, such idiots have no clue about OLTP or Transactions or ACID, their "transactions" are all over the shop. Because there is no "literature" from the piglet squad re any of those subjects, and only marketing hype to go for "MVCC". Thus Sybase provides an elegant option to service such terrible "transactions". If and when the backward developers learn about OLTP; etc, they can upgrade their pathetic code, and use less resources, and accrue less locks, and gee whiz, run faster. In thirty years or so, they might achieve OLTP. But don't hold your breath, they are waiting for the piglet squad to write even one paper on the subject.

Sybase does not repeat the idiotic Stonefracker mantra.

https://help.sap.com/viewer/a1237e466dba417da6f0e5504cf9fb83/16.0.3.8/en-US/ca321b0f87754c71bcfd7385ac956aa1.html

(I have given you a single link, feel free to browse the ToC on the left panel to obtain all items relevant to MVCC implementation.)

In a nutshell, Sybase provides an elegant and configurable asylum for the schizophrenics, while leaving the normal humans completely unaffected, and providing the same Transaction guarantees for all. Meaningless tick-box created. Meaningless tick-box ticked.

> Regarding my comments about user-defined locks in MVCC systems, that
> refers to the (rare) situation in which, for some reason (usually,
> speed), the developer decides to use weaker isolation than true
> serializability and take the responsibility of managing concurrency.

Not so rare.
People use it all the time, for a different reason, to overcome the horrible performance or unpredictable lock behaviour of PusGres.

> When running transactions with ISOLATION LEVEL SERIALIZABLE in
> PostgreSQL, you get true serializability and there is no need for locks.

Nonsense. Sure, there is no need for explicit locks (except for other reasons, identified above). But internally, the bloated and obese thing uses a "regular lock manager" plus a "predicate lock manager" plus a small mountain of lock-manager synchronisation issues. IOW, it is locking all the time. Which is fair enough, because there is no concurrency control in "MVCC", and concurrency control can only be obtained via a lock manager. And when dealing with pathetic freeware, written by idiots, they re-invent the wheel from scratch, forty and sixty years behind commercial platforms.

> You may have Oracle in mind when you think about an MVCC implementation.

No, I am familiar with both. Oracle is a piece of dog crap but at least it is commercial and mature.

> That falls into "classical" MVCC, so that does not guarantee
> correctness.

No. Neither Oracle nor Pusgres guarantee correctness. Instead of repeating mantras, just read the manual pages I have linked. It guarantees nothing, and documents excuses for data corruption and incomplete "transactions".

At least Oracle does not do that: it makes guarantees but does not keep them !!

It does not matter if they guarantee correctness or not. They simply cannot provide correctness. If you understand what I am trying to communicate, that the principle of "MVCC" (a) breaks the principle of OLTP and thus the entire approach is schizophrenic, (b) that it is based on a false and hidden premise (which I have exposed), and thus anything built on it will fail, you will understand that Oracle and PoopdepoopGres simply cannot work. As evidenced in the real world. Sure, bolting on a lock manager or three will "improve" the chances of working while horribly decreasing performance and throwing the idiot mantra out the window.

Whereas Sybase and MSSQL can and do guarantee correctness, regardless of whether the developer chooses snapshot isolation, because it is FIRST based on a simple lock manager, that enables such correctness, and SECOND it keeps the schizophrenics in a separate nice tiled asylum, where they can affect each other and not the normal humans.

----

I have written a lot in this post, and it is dense. I don't yet know what you do and do not understand (I found out in this thread that, despite the fact that you use technical terms, and I thought you understood more, you have the same understanding re OLTP; Transactions; ACID, as the other academics, which is zero). Please ask if anything needs clarification.

Cheers
Derek

Derek Ignatius Asirvadem
Aug 2, 2020, 2:49:21 AM
to Nicola

Good to have you back.

I trust things have settled down a bit, and the routine is stable. We have, over here. We managed to avoid the first wave of the CCP virus for months, but it is happening now.

> On Sunday, 2 August 2020 00:51:06 UTC+10, Derek Ignatius Asirvadem wrote:
> > On Thursday, 30 July 2020 01:42:49 UTC+10, Nicola wrote:

> > So, there are interleavings of operations

What, in Heaven's name, is "interleaving of operations" ?

Excuse me, but the [A] in ACID stands for Atomic. The whole point of Atomic is that you cannot break it up, you must treat it as an Atom.

Therefore there is no such thing as "interleaving of operations" within a Transaction.

There is no point in the server providing ACID for non-ACID [non-Atomic] "transactions".

So since those who indoctrinate young academics into eating pig poop, and doing things that are self-contradictory, self-cancelling, are teaching them to Drop ACID, sure, the "transactions" are not Transactions, they are fragments, sub-Atomic particles, free, divorced from the context that gave them existence, free to dance and fornicate with other divorced fragments. Free to get entangled due to the string that keeps the fragments sort-of connected (the BEGIN of “interleaved” fragments, and the COMMIT). An orgy for the Zombie Generation.

> > that would be accepted under 2PL (=> they would produce a correct
> > result), but they are rejected under MVCC (i.e., they cause one or more
> > transactions to rollback). So, MVCC also restricts the possible
> > interleaving of operations, thus *reducing* concurrency.
>
> Yes and no.
>
> First, yes, and well said. It handles the sub-normal case, which is ignorant developers, no OLTP Standard, and thus pathetic transactions.

Which is why they are not Transactions, not ACID Transactions, but strings of sub-Atomic particles of an Atom that must not be split. Which can only be written by sub-normal humans. Hence, sub-normal.

> Second, no. In the normal case, where all transactions comply with the OLTP Standard, there is no "interleaving" of operations within a transaction.

Because it is not normal to split the Atom.

Because only sub-normal human pig suckers such as Date; Darwen; Fagin; all the textbook writers; all the professors who teach without understanding the subject, merely repeating mantras, actually teach young people to split the Atom that must not be split, and give 42-hour lectures on how that is "science". It is not science, it is schizophrenia.

We interleave Atoms, not fragmented sub-Atomic particles strung together with a piece of string.

If you understand that, and only after you have understood that, consider this. Since the Transactions are small integrated wholes, Atoms, that do not have dangly bits or strings to cause entanglement, they can't get entangled. No locks. No blocking locks.

We interleave Atoms.

Pig suckers do not have Atoms, they have only sub-Atomic fragments.

It is schizophrenic to demand ACID, particularly [A]tomic, from the server (because ANSI/ISO/SQL demands ACID), and then to write non-Atomic “transactions”. No, the Transactions must be Atomic in order to obtain Atomicity in the server in which it executes.
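
What an Atomic, contiguous Transaction looks like in code can be sketched against any SQL engine; here with Python's bundled sqlite3, with table and column names invented for the example:

```python
import sqlite3

# One contiguous Atomic unit: both UPDATEs commit together or neither
# does. isolation_level=None puts sqlite3 in autocommit mode, so the
# BEGIN/COMMIT/ROLLBACK brackets below are under explicit control.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE Account (Name TEXT PRIMARY KEY, Balance INTEGER)")
con.execute("INSERT INTO Account VALUES ('A', 100), ('B', 0)")

def transfer(con, src, dst, amount):
    con.execute("BEGIN")
    try:
        cur = con.execute(
            "UPDATE Account SET Balance = Balance - ? "
            "WHERE Name = ? AND Balance >= ?", (amount, src, amount))
        if cur.rowcount != 1:
            raise ValueError("insufficient funds")
        con.execute("UPDATE Account SET Balance = Balance + ? WHERE Name = ?",
                    (amount, dst))
        con.execute("COMMIT")
    except Exception:
        con.execute("ROLLBACK")   # the Atom fails as a whole
        raise

transfer(con, "A", "B", 60)       # succeeds: A = 40, B = 60
try:
    transfer(con, "A", "B", 60)   # fails: only 40 left; nothing changes
except ValueError:
    pass
print(dict(con.execute("SELECT Name, Balance FROM Account")))
```

The point is the shape, not the engine: one code segment, one bracket pair, no operations dangling outside it.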

----

Five years ago in the Normalisation thread, you wrote this:

https://groups.google.com/d/msg/comp.databases.theory/Uwc_w8HbBfw/dVkqSLNU-jAJ

> On Tuesday, 24 February 2015 04:27:53 UTC+11, Nicola wrote:
>
> I like to think of SQL's insert, update and delete as the (quite
> powerful) assembly-level instructions of a data language. If you can
> design a model in which a single insert, a single update or a single
> delete never causes any integrity violation, well, go with them. But in
> many (most?) cases, a well-designed database requires modifying more
> than one fact at a time to preserve consistency. In practice, that means
> that single inserts/updates/deletes are too low-level wrt the semantics
> of the database. Hence, their execution must be forbidden (by revoking
> the corresponding privileges in SQL from the users accessing the
> database), and more complex "primitives" must be defined. These should
> be given along with the logical schema and implemented as user-defined
> functions (assuming that such functions are run atomically), whose
> execution is granted to the above-mentioned users instead of the raw SQL
> instructions.
>
> Nicola

I already had a Transaction thread open, so I gave it a full treatment in that thread. For the purposes of this thread, you don't need to read the whole post. Interest only.

https://groups.google.com/d/msg/comp.databases.theory/qqmnhu036FQ/RLh9D5Ue1kUJ

In a nutshell, by "primitives”, what you were getting at was ACID Transactions, which ANSI/ISO/SQL has, and has had since 1984. But you would not name them as such. The absence of real world knowledge about SQL is strange. At the time, PooGres did not have transactions, but they did have Functions that were supposed to be “transactional”. In the new release, it has “transactions”, with START TRAN and COMMIT TRAN implemented.

Thus five years ago, you would have placed the code for each logical Transaction in a Function, to obtain “transactional” properties, which is a good thing (Transaction code should be contiguous). And now today, you have to rewrite all those Functions to obtain a higher level of “transactional”, and to prevent the new preventable conditions and to check for the new error conditions and recover (ROLLBACK or retry or not).

None of which is necessary on commercial systems because we have had all that from the beginning. Even retries are automatic, we do not code them.
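
For comparison, where retry is not automatic, the caller ends up writing the loop by hand. A generic sketch with a stand-in exception class; no real driver's API is being quoted:

```python
import random
import time

class DeadlockError(Exception):
    """Stand-in for whatever a given driver raises on deadlock or
    serialization failure; the real class name varies by driver."""

def run_transaction(work, attempts=5):
    # The retry loop that platforms with automatic retry spare you:
    # the whole Atom is re-run from the top, never resumed mid-way.
    for attempt in range(attempts):
        try:
            return work()
        except DeadlockError:
            time.sleep(random.uniform(0, 0.01 * 2 ** attempt))  # backoff
    raise RuntimeError("transaction failed after %d attempts" % attempts)

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError()
    return "committed"

print(run_transaction(flaky))   # "committed" on the third attempt
```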

>>>>
You can’t have real ANSI/ISO/SQL ACID Transactions in the latest release because PoopGres does not provide it: the manuals enumerate some of the conditions which provide an excuse for not providing what we have had in the commercial platforms since 1980. Namely, it has no design or architecture, and making the server Transactional requires a complete re-write. Which is impossible because there are 30,000 code segments written by 10,000 “developers” spread across the planet. The best you can have is poop squared. Such as Yet Another Layer to provide Yet Another Thing That It Missed. Such as the predicate lock mangler. Which requires synching with the regular lock mangler. Which freezes the already slow system.
<<<<

It seems to me that five years ago you were promoting “primitives” without mentioning ACID Transactions because you are attached to PoopdePoopGres, and you could then do it with Functions.

The whole colony of “academics” and “theoreticians” in this field have been suppressing ACID and ACID Transactions, and the very notion of what a Transaction is (as evidenced in this post). No surprise, PooGress did not have Transactions. Over time, since the enslaved user community screamed for Transactions, PoopGres eventually delivered bits of it in “transactional” Functions. But that was not honest Transactions, and it was certainly not SQL commands. What has happened now is, despite the pig poop feeders continuing to push for non-transactions, and continuing to suppress Transactions, the 10,000 poor bastards decided to implement ANSI/ISO/SQL, at least the pathetic bits that they could, with new layer upon non-working layer upon non-working layer, rather than a single integrated whole.

I expect that now, you will promote your “primitives” as bracketed by START TRAN and COMMIT TRAN.

But the problem is this. The “transactions” were never properly designed, because the entire colony of rats refused to acknowledge what Transactions have been in the real world for forty years (sixty for IBM/CICS/TCP). They have no understanding whatsoever. They have been suppressing Transactions since chief pig sucker C J Date wrote his infamous “Dropping ACID” paper. And great saint of OO/ORM idiocy Stonebraker played it down, and did not implement it in Ingres. And thus it did not exist in PongGres. (Aussie slang: pong means bad smell.)

Thus
1. The suite-of-programs does not approach the problem directly, squarely, and implement ACID Transactions properly, instead it has layers of legacy code with no integration
2. The user community that is directed by the filth of the rat colony has no understanding of what a Transaction is, what it should be, what is required to obtain ACID Transactions, before ACID can be obtained from the suite-of-programs.
3. Where this thread is concerned
-- First, as stated categorically “MVCC” as the foundation mindset (a) breaks the principle of a database, and (b) promotes the fiction of private obsolete “snapshots”, which is schizophrenic (denial of Reality). Note the massive resource demand for creating and maintaining the obsolete private versions, and for serialising them when COMMIT occurs.
-- Second, it will not work with an additional layer of Ordinary Locking (“regular lock mangler”). And it will not work with Yet Another additional layer of “predicate lock mangler”, which needs synchronisation (your “2PL” ?). Note therefore the hilarious mantra is totally false.
-- Contrast that with Sybase & MSSQL, which have Ordinary Locking as the foundation layer, and “snapshot isolation” provided as an additional layer, with separate configurable resources, fully integrated.

4. Now, in addition to [1][2], the concept of “transactions” in that community is even worse. It is in fact any bunch of sub-Atomic fragments connected by a piece of string (think: the membrane that connects pieces of pig poop together to make a pile), one end of which is established by START TRAN and the other end by COMMIT TRAN.

5. The “interleaving of operations” that might be possible in “2PL” is pure fantasy, a pipe dream. It can only happen under [4], *AND* where the transaction code segment is not contiguous (whereas your “primitives” are), *AND* where they have added START TRAN and COMMIT TRAN (it was not possible before the current release) in order to obtain “transactional” behaviour from the new release. Whereas they really should be rewriting the “transaction” as a single contiguous code segment, bracketed by said elements, with code for the new conditions, and placing it in a Function.

6. It is fantasy because the concurrency control model in PongGres is not the “2PL” that you say “interleaving of operations” might be more efficient in.
-- Separately, in commercial platforms, most developers know what they have to do to obtain ACID Transactions, and only the sub-normal types have strings of fragments spread across the network, guaranteed to get entangled.

----

This is what happens when the colony of “academics” and “theoreticians” re-invent the wheel from scratch, in schizophrenic denial of reality, the real world, the SQL platforms that we have had since 1980.

This is how the rat colony have been subverting and sabotaging database science since Dr E F Codd wrote his //Relational Model// in 1970. It has been a continuous, voluminous, heavily marketed action.

This is how the pig suckers have degraded science since they won the war in 1911.

Cheers
Derek

Derek Ignatius Asirvadem
Jun 21, 2021, 12:35:29 AM
The following thread is relevant to this one. It provides a discussion in an ACID Transaction context, which MVCC does not have, and cannot provide (MVCC is Anti-ACID; MVCC is Anti-Transaction).
__ https://groups.google.com/g/comp.databases.theory/c/LSoYp9wrv0M

Cheers
Derek

Derek Ignatius Asirvadem
Aug 23, 2021, 8:15:43 AM
Nicola

In the /Stored Proc for OLTP Transactions/ thread ...

> On Saturday, 3 July 2021 at 20:28:55 UTC+10, Nicola wrote:
>
> > Whether it makes sense to code that one in a similar way is still unclear
> to me. It's a one page procedure performing several lookups just to
> insert a record, a step which in PostgreSQL would conflict only with
> a similar insert—and the duration of the conflict would be short. Also,
> one purpose of your template is to avoid deadlocks, which in MVCC cannot
> happen anyway. In fact, adding explicit locking might introduce the
> possibility of deadlocks.

> in MVCC [deadlocks] cannot happen anyway

Separate to the "serialisation failures", which are the rough equivalent to deadlocks in a SQL Platform (with a Lock Manager), if deadlocks cannot happen, what is this, that people are experiencing problems with in PusGross, that they are calling deadlocks ?
__ https://dba.stackexchange.com/q/281846/64114
__ https://dba.stackexchange.com/q/151813/64114

Cheers
Derek

Nicola
Aug 23, 2021, 1:24:58 PM
Derek,
long post this time. Please skim through it entirely before commenting.
Ok, I stand corrected, in two ways. First:

> and the duration of the conflict would be short

Strictly speaking, that is inaccurate, because it does not mention that
in case of write-write conflicts, one of the two transactions must be
rolled back.

Second, yes, deadlocks are possible, but they only involve write
operations (read statements do not acquire locks).

Example 1 (assume SERIALIZABLE):

create table account (name varchar(5) primary key, balance int not null);
insert into account(name,balance) values ('Tom', 40), ('Sam', 50);

Wall time| T1 | T2
---------|-------------------------------|------------------------------
(0) | begin transaction; |
| |
(1) | | begin transaction;
| |
(2) | update account |
| set balance = balance - 10 |
| where name = 'Sam'; |
(3) | | update account
| | set balance = balance + 20
| | where name = 'Tom';
(4) | update account |
| set balance = balance - 10 |
| where name = 'Tom'; |
| -- T1 is put on hold |
(5) | | update account
| | set balance = balance + 20
| | where name = 'Sam';
-- ERROR: deadlock detected

With row-level locking, or if the two records were on different pages,
Sybase would behave the same. But when read statements are involved,
things may run differently.
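The deadlock in Example 1 is a cycle in the wait-for graph: at time (5), T1 waits on T2's lock on 'Tom' while T2 waits on T1's lock on 'Sam'. The cycle check can be sketched minimally in Python (an illustration of the principle only, not how either engine implements it):

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {txn: {txns it waits on}}."""
    visited, stack = set(), set()

    def visit(node):
        if node in stack:          # back edge: the waits loop onto themselves
            return True
        if node in visited:
            return False
        visited.add(node)
        stack.add(node)
        if any(visit(n) for n in wait_for.get(node, ())):
            return True
        stack.discard(node)
        return False

    return any(visit(n) for n in wait_for)

# Example 1 at wall time (5): T1 waits on T2's lock on 'Tom',
# and T2 waits on T1's lock on 'Sam' -- a cycle, hence deadlock.
print(has_cycle({"T1": {"T2"}, "T2": {"T1"}}))   # -> True
```

With only one of the two edges present (say T1 waiting on T2), there is no cycle: the waiter is simply delayed until the other transaction completes.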

> if deadlocks cannot happen, what is this, that people are experiencing

Ok, they are deadlocks, as above. Could also be "serialization errors",
discussed below.

Example 2 (assume SERIALIZABLE):

Wall time| T1 | T2
---------|-------------------------------|------------------------------
(0) | begin transaction; |
| |
(1) | | begin transaction;
| |
(2) | select * |
| from account |
| where name = 'Tom'; |
(3) | | update account
| | set balance = balance + 20
| | where name = 'Tom';
(4) | update account |
| set balance = balance - 10 |
| where name = 'Tom'; |
| |
(5) | | commit;
| |
(6) | commit; |

In Sybase: at time (2), T1 locks the page containing the record, so that
at time (3) T2's request for a write lock is put into a queue and T2 is
put to wait. Sybase records T2->T1 (T2 depends on T1 to release a lock).
At time (4), T1's request for a write-lock is also denied, pending T2's
request (w/w conflict). Sybase records T1->T2 (T1 depends on T2 to
release a lock). The concurrency control system detects a cycle in the
dependency graph, and rolls back a transaction, which happens to be T1.
T2 can now commit. Note that without the select statement at time (2)
there would be no deadlock: the update at (4) would only be delayed
until T2 commits (or rolls back).

Do you agree with the above description of events?

In PostgreSQL: each transaction operates on its own private snapshot of
the database (that incurs an overhead, which I am not discussing for
simplicity). In the sequence above, that would result in a lost update
(namely, T2's update), unless some mechanism were in place to prevent
that (and there is). AFAICT, PostgreSQL implements a "first-updater
wins" rule: if a transaction attempts an update on an item that has been
updated by a concurrent transaction, if the latter has already
committed, the former immediately aborts; if the latter is still active,
the former is put on hold. This rule may be implemented with locks or in
other ways. I think that PostgreSQL uses a mechanism different from
locks for that, but that is not relevant in this context.

Anyway, in Example 2 it is not correct to talk about deadlocks in
PostgreSQL, because the read statements do not acquire any locks, so
it's not possible for the two transactions to be waiting upon each
other. The update at time (3) proceeds normally (such update is local to
T2). At time (4), though, T1 must be put on hold, because, according to
the rule above, another transaction has concurrently updated the same
record and is still active. When T2 commits, PostgreSQL realises that T1
cannot sensibly continue (because that would result in a lost update),
and kills it. Now, T2 can commit.
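The "first-updater wins" rule can be sketched as follows. This is a deliberate simplification (`try_update`, and the dictionaries, are hypothetical stand-ins: real PostgreSQL tracks writers via tuple xmin/xmax and the transaction status, not a dict), just to show the three outcomes:

```python
# Sketch of a "first-updater wins" check (simplified; illustrative only).
ACTIVE, COMMITTED, ABORTED = "active", "committed", "aborted"

def try_update(txn, row, writers, txn_state):
    """writers: row -> txn that last wrote it; txn_state: txn -> state."""
    other = writers.get(row)
    if other is not None and other != txn:
        if txn_state[other] == COMMITTED:
            return "abort"       # a concurrent writer has already committed
        if txn_state[other] == ACTIVE:
            return "wait"        # block until the other txn finishes
    writers[row] = txn
    return "ok"

# Example 2: at time (3) T2 updates 'Tom'; at time (4) T1 must wait,
# because T2 is still active; when T2 commits, T1 is aborted.
txn_state = {"T1": ACTIVE, "T2": ACTIVE}
writers = {}
try_update("T2", "Tom", writers, txn_state)          # time (3): "ok"
print(try_update("T1", "Tom", writers, txn_state))   # time (4): -> wait
txn_state["T2"] = COMMITTED                          # time (5): T2 commits
print(try_update("T1", "Tom", writers, txn_state))   # -> abort
```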

Some remarks (recall that I am assuming SERIALIZABLE):

1. The outcome, in this specific example, is the same (Tom's balance
becomes 60), but the underlying mechanisms to achieve that outcome are
profoundly different. In general, the outcome will be different as
well (read further).

2. Can there be a concurrent interleaving of operations that leads to
a deadlock in Sybase, but the same concurrent interleaving does not
yield a deadlock or a serialization error in PostgreSQL? No, because
a concurrent interleaving of operations leading to a deadlock is an
incorrect interleaving. So, if Sybase "encounters a deadlock
situation", then PostgreSQL must also produce an error in that same
situation, otherwise it would output an incorrect result. The
difference is that PostgreSQL may output a "serialization error",
rather than a deadlock error.

> Separate to the "serialisation failures", which are the rough
> equivalent to deadlocks in a SQL Platform (with a Lock Manager)

Ok, accepted.

3. Does the difference matter to the DBA? Probably not: what matters in
practice is that a transaction was rolled back and must be retried.

4. Is there a concurrent interleaving of operations that makes PostgreSQL
return a "serializable error", but the same interleaving succeeds in
Sybase? Yes, because MVCC reduces the number of acceptable
interleavings of operations. For instance, remove the select statement
from T1 (but keep the same timing for the other operations): then in
Sybase both T1 and T2 will commit, but PostgreSQL will still kill T1
as soon as T2 commits (as per the above rule).

5. Does this difference matter to the DBA? Probably yes: under the same
workload, PostgreSQL might rollback more transactions. According to
the posts you cite, that appears to be the case (note that I am not
discussing other important details, such as updating indexes, or the
overhead of versions—I am keeping it simple). The more write-heavy the
concurrent workload, the more PostgreSQL is penalized compared to
a locking-based system. This seems to be what Uber discovered the hard
way.

6. I am perfectly aware that you would not write T1 the way I have done,
and that you would avoid deadlocks. That's not the point of this post.
T1 is written that way for explanatory purposes only.

So, PostgreSQL might have an edge, if any, in predominantly read-only
workloads, because reads do not acquire locks. But you would perform
reads in Sybase at READ COMMITTED, where locks are held for a very short
time (it's still an overhead, though). So, it's not clear to me who the
winner would be. How do you justify preferring Sybase (locking) over
PostgreSQL (MVCC) in a context of almost exclusively read-only
transactions? Note that if transactions are read-only, versions are not
created.

Does this assessment satisfy you?

Nicola

Derek Ignatius Asirvadem
Aug 24, 2021, 8:48:35 AM
Nicola

> On Tuesday, 24 August 2021 at 03:24:58 UTC+10, Nicola wrote:
> > On 2021-08-23, Derek Ignatius Asirvadem wrote:
> > > On Saturday, 3 July 2021 at 20:28:55 UTC+10, Nicola wrote:

> long post this time. Please skim through it entirely before commenting.

Sure. With the same implied intent, I will give my responses in an ordered sequence, different from yours, in order to make sense, and avoid jumping back-and-forth. That does not imply that your order of points is incorrect.

I will give short responses where it is appropriate to maintain flow, with full explanations at some chosen point. Hopefully that will allow you to make short responses as you see fit.

> > In the /Stored Proc for OLTP Transactions/ thread ...
> >
> >> Whether it makes sense to code that one in a similar way is still unclear
> >> to me. It's a one page procedure performing several lookups just to
> >> insert a record, a step which in PostgreSQL would conflict only with
> >> a similar insert—and the duration of the conflict would be short. Also,
> >> one purpose of your template is to avoid deadlocks, which in MVCC cannot
> >> happen anyway. In fact, adding explicit locking might introduce the
> >> possibility of deadlocks.
> >
> >> in MVCC [deadlocks] cannot happen anyway
> >
> > Separate to the "serialisation failures", which are the rough
> > equivalent to deadlocks in a SQL Platform (with a Lock Manager), if
> > deadlocks cannot happen, what is this, that people are experiencing
> > problems with in PusGross, that they are calling deadlocks ?
> > __ https://dba.stackexchange.com/q/281846/64114
> > __ https://dba.stackexchange.com/q/151813/64114
>
> Ok, I stand corrected, in two ways. First:
>
> > and the duration of the conflict would be short
>
> Strictly speaking, that is inaccurate, because it does not mention that
> in case of write-write conflicts, one of the two transactions must be
> rolled back.

Accepted.
But there is more. SQL Platforms do not roll back in many situations where PoopDePooGres (and Oracle) poops itself and rolls back.

> Second, yes, deadlocks are possible, but they only involve write
> operations (read statements do not acquire locks).

Accepted.
But the qualifier is irrelevant (as you have implied, whatever one or the other does is internal, and therefore irrelevant, but the result of submitting the same SQL [non-SQL for PoopGres] should be, and is expected to be, the same).

Until we get to your summary at the end, wherein the internal operations become relevant.

> 6. I am perfectly aware that you would not write T1 the way I have done,
> and that you would avoid deadlocks. That's not the point of this post.
> T1 is written that way for explanatory purposes only.

Yes, of course. Accepted.

> (note that I am not
> discussing other important details, such as updating indexes, or the
> overhead of versions—I am keeping it simple)

Yes, of course. Accepted.

> In PostgreSQL: each transaction operates on its own private snapshot of
> the database (that incurs an overhead, which I am not discussing for
> simplicity).

Yes, of course. Accepted.

> > citations ...
> examples you cite ...
> Example 1 ...
> Example 2 ...

Ok. But note this exchange is academic, not real, because on the Sybase side the operations happen at millisec speed, and are rarely even detected, let alone observed (via sp_lock or sp_who)
__ vs
on the PoopDeExtrude side the operations happen at 1/10 second speed, and often, such that it is detected, and observed, and cited, as per links.

> Example 1 ...
> With row-level locking, or if the two records were on different pages,
> Sybase would behave the same.

Come on, mate. As evidenced, you know nothing about SQL Platforms, let alone Sybase. You are not in a position to say what an SQL Platform, or what Sybase does, or does not do.

>>>>
I have already explained, several times, that this is a Straw Man that prevents understanding of an Ordinary Lock Manager.

It is in my Transaction Sanity doc, which you have accepted (“How could I dispute facts?”).

Now you have regressed.

I am not saying you are dishonest, I am saying you are indoctrinated, and propose such dishonest proposals unconsciously. Therefore you must *consciously* choose to erase the filth; the fantasy; the Straw Man, and be *consciously* willing to learn an altogether new thing, in the real world: an Ordinary Lock Manager in an SQL Platform from 1984.
<<<<

All you have done is, apply stupid MV-non-CC+2PL mindset onto the Sybase side, where there is no MV-non-CC; no 2PL. Short answer: no, SQL Platforms such as Sybase definitely do not do that. Explanations later.

> But when read statements are involved,
> things may run differently.
>
> > if deadlocks cannot happen, what is this, that people are experiencing
>
> Ok, they are deadlocks, as above. Could also be "serialization errors",
> discussed below.

1. I think we agree: in PoopGres, where it is reported as (a) rollbacks, or (b) “deadlocks”, or (c) “serialisation failures”, it is one logical category: the effects that should not happen, but that do happen.

Extensively, and commonly.

2. As distinct from a true deadlock, which is the result of poor SQL [or non-SQL] coding *ONLY*, not a result of internal server resource conflicts or incompetent coding in the “server”.

3. In SQL Platforms such as Sybase, [1] does not, cannot, happen. [2] happens only when the SQL code is sub-standard (the central theme in the /Stored proc for OLTP/ thread).

4. In MV-non-CC+2PL systems, [1] happens, and happens quite often. You have provided details of an abstract case, the links cited provide details of real cases. There are thousands of such reports.

Further, I am stating categorically, that it is no use writing Standard OLTP code in such primitive systems, because ACID is not possible, Transactions are not properly supported (a weird form is *always* in operation), and Standard Transactions are based on ACID. (Again, the central theme in the /Stored proc for OLTP/ thread, and as per the detailed resolution provided in my Transaction Sanity doc.)

Users in such degenerate systems are reduced to writing fart concerts involving low-level primitives to secure locking, proving in each instance that “MVCC” and the devil-worshipping Mantra is false, engaging the 2PL, and thus interfering with the locking, and thus legs wide open to new [true] deadlocks.

> Do you agree with the above description of events?

Definitely not. You are proposing the Straw Man, that Sybase operates as MV-non-CC+2PL does. It does not. And the results are drastically different.

----
Now for the detail.

----
> Example 1 (assume SERIALIZABLE):
> ...
>
> With row-level locking, or if the two records were on different pages,
> Sybase would behave the same.

Generally, yes. With the following caveats:

1. As explained above, the example is academic, not real.
2. As an academic example, yes, it is a [true] deadlock.
3. In real life, the two UPDATES bracketed by BEGIN-COMMIT TRAN in Connection[T1] are Atomic, in the same contiguous code block, and thus will execute at Time[0].
4. There is no “interleaving” of Atomic operations.
__ (There are such violent contradictions [breaking up the Atom, and then re-constituting the fragments; and then agonising over the inability to do so] in MV-non-CC+2PL systems. Those who are not schizophrenic do not break up the Atom in the first place.)
5. Connection[T1] COMMITs at Time[0]
6. Connection[T2] BEGINs; performs two UPDATES; and COMMITs at Time[1]

You are not getting it.

>>>>
Sorry, I cannot give you the whole lecture from our course, you have to try and understand this from my textual descriptions. Note that many senior SQL Platforms consultants do not understand this, it is a deep understanding of OLTP, confirmed every time, in scores of contention resolution assignments. I am privileged because I knew this from the 1970’s when I was a DBA using Cincom/TOTAL NDBMS, and then a software engineer for Cincom. Competing head-on with IBM/IMS; DEC/RDb; Cullinane; Britton-Lee; etc.

Yes, of course, number of locks; types of locks; maturity of the Lock Manager (good; bad; multi-level; escalation; promotion; etc), are all relevant, but nowhere near as relevant as ...
---------------------------------
-- LOCK DURATION --
---------------------------------

I offer getting rid of deadlocks [of that nature, due to lock duration as opposed to poor code], without changing either the database or the app code, as a fixed price service. If you are interested, I can provide proof. But I suspect you won’t read it: you don’t read the links that I provide.
<<<<

So, Connection[T1] is in and out in millisecs, long before Connection[T2] submits its SQL, which also executes in millisecs.

But yes, in pathetic herds of programs that execute non-SQL, where the operations execute in tenths of seconds or seconds, your scenario can, and does, happen. Frequently.

Re Example [1]. In sum, there is no comparison, and we do not obtain the same result: conditions that {deadlock; rollback; serialisation fail} in MV-non-CC+2PL systems as you describe, do happen in such systems, but simply do not happen in Sybase, for the real world reasons I have given.

----
> But when read statements are involved,
> things may run differently.
>
> > if deadlocks cannot happen, what is this, that people are experiencing
>
> Ok, they are deadlocks, as above. Could also be "serialization errors",
> discussed below.

Yes, PoopAndPeeGross has {Rollbacks; deadlocks; and serialisation failures}, in one category of things that should not happen in “MVCC” systems, (a) but that do happen, frequently, AND (b) that do not happen in SQL Platforms such as Sybase; MS; DB2; Informix.

----
> Example 2 (assume SERIALIZABLE):
> ...
>
> In Sybase: at time (2), T1 locks the page containing the record, so that
> at time (3) T2's request for a write lock is put into a queue and T2 is
> put to wait. Sybase records T2->T1 (T2 depends on T1 to release a lock).
> At time (4), T1's request for a write-lock is also denied, pending T2's
> request (w/w conflict). Sybase records T1->T2 (T1 depends on T2 to
> release a lock). The concurrency control system detects a cycle in the
> dependency graph, and rolls back a transaction, which happens to be T1.
> T2 can now commit. Note that without the select statement at time (2)
> there would be no deadlock: the update at (4) would only be delayed
> until T2 commits (or rolls back).
>
> Do you agree with the above description of events?

I agree that your description is correct for the MV-non-CC+2PL side.

I reject your description for the SQL Platform such as Sybase side, because as explained above, and many times in other threads, you are simply [Straw Man] applying MV-non-CC+2PL mindset to the SQL Platform side that does not operate ala the MV-non-CC+2PL anti-method. Like a prostitute who has syphilis, attempting to tell a virgin that she too has syphilis. This is a consistent pattern in our discussions.

So I have to give you SQL Platform operation, yet again. Please proceed with your planned benchmarks on Sybase vs PisteGres, so that you obtain some experience in the real world systems [not materialised fantasy] that we have had for forty years. Feel free to read the manuals and the Sybase docs that I linked. Feel free to ask specific Sybase questions in the /Questions about Sybase/ thread. No discussion, answers only.

---------------------
-- Operation --
---------------------

First, we need to understand the operation of an SQL Platform.

> T1 locks the page containing the record, so that
> at time (3) T2's request for a write lock is put into a queue and T2 is put to wait.

1. [T1] and [T2] do not do anything (other than submit SQL to the server). This is not a pedantic correction. It is important that you understand the vast difference between
__ MV-non-CC: a herd of programs competing with each other (each connection causes a new program instance on the “server” machine) wherein your terminology is correct
____ https://www.softwaregems.com.au/Documents/Article/Oracle%20Circus/Oracle%20vs%20Sybase.pdf
____ page 2 Process Non-Architecture LHS
____ replace [Oracle] with [PukeGross]

__ vs

__ SQL Platform: a single process on the server machine, with say 4 execution threads, that *serve* say 200 connections, wherein the server maintains 1 internal *context* per connection (plus 1 internal context per Worker). Nothing in the Sybase server is affected by a connection. The server maintains all internal data structures itself, and makes its own scheduling decisions (*context switches*) based on all the activity at any given moment.
____ So [T1][T2] are Connections, not threads, not processes; as such, they can do nothing re server internal resources.
____ https://www.softwaregems.com.au/Documents/Article/Sybase%20Architecture/Sybase%20ASE%20Architecture.pdf
____ page 10 Threaded Execution, Task Context, per Connection (bottom right)
____ DNPS = default network packet size, usually 512 bytes
______ one incoming; one outgoing; and one being built
____ STLC = session tempdb log cache, usually 2K
______ tempdb is a real db for all temporary tables (#MyTable), that can be configured and optimised, same as a normal db
____ Stack size = usually 64K
____ SG = stack guard size, usually 8K
____ the LockChains are depicted below the Lock Manager function

There is no queue re locks (there is a high-performance method for scheduling, which, for simplicity’s sake, can be perceived as a queue, but that is a separate thing).

There is a *lock-chain*. Locks are held on resources, not on Connections.
__ The MV-non-CC+2PL notion of separate programs (1 per connection) holding locks, which are not on the resource but on the connection, is stupid. Heavily indoctrinated stupid. Of course, that level of stupid can’t be helped, because it is based on the schizophrenic notion that the resource is a stale offline version, that does not actually exist in reality at that moment, that will magically exist if and when it COMMITs or is rolled back.

2. If we are to resolve this, we need to be more precise and identify the *type* of locks (which is where the conflicts, or not, will be (a) understood, and (b) resolved). Which is why I provide a *Resolution* table in my Sybase Lock Manager doc. Or else read the Sybase manuals.

3. I have tried to explain that it is not locking, not number of locks, not even lock types, that are relevant, but lock duration. I appreciate that it is LEVELS of technical advancement beyond MV-non-CC+2PL, and quite foreign to that mindset. But it must be asserted. Because it is a real world fact.
__ EoT means End of Transaction, a COMMIT

-------------------------------------------
-- Example 2 Sybase Side --
-------------------------------------------

To maintain a viable exchange, here I am not asserting the speed/slowness issue, which would wipe out the problem; I am maintaining the academic example, for understanding.

> In Sybase: at time (2), T1 locks the page containing the record, so that
> at time (3) T2's request for a write lock is put into a queue and T2 is
> put to wait. Sybase records T2->T1 (T2 depends on T1 to release a lock).
> At time (4), T1's request for a write-lock is also denied, pending T2's
> request (w/w conflict).

Let’s say Row[Tom] is on Page[Px].
__ at Time[2]
____ Connection[T1] obtains a Shared-Page-Lock on Page[Px], until EoT
__ at Time[3]
____ Connection[T2] obtains an Update[Intent]-Page-Lock on Page[Px] (which does not escalate to Exclusive), and the lock is chained onto Page[Px], behind Connection[T1]’s lock
____ Connection[T2] is in TaskStatus *sleep*, in LockStatus *lock-wait*

> At time (4), T1's request for a write-lock is also denied, pending T2's
> request (w/w conflict).

Definitely not. Thinking in terms of “w/w conflict” is primitive, idiotic [not you but MV-non-CC+2PL you are indoctrinated in], and has no bearing on a real Ordinary Lock Manager. Erase that filth from your mind, because it is fifty years behind the technology, and try to understand a real Lock Manager. You will not be able to understand a real Lock Manager while holding onto the MV-non-CC+2Pl mindset.

We do not have “w/w conflicts”, or any similar or equivalent thing. We do not sing the Stonefreaker Mantra. Only knuckle-dragging imbeciles have such primitive things, to resolve the conflicts that they sing they do not have, but that, as evidenced, they actually do have. And yes, the freeware world is full of them.

We wait on a resource, access to which is blocked.

For further detail, refer to the P&T Locking manual, or my Sybase Lock Manager doc, page 2, Resolution table:
____ https://www.softwaregems.com.au/Documents/Article/Sybase%20Lock%20Manager/Sybase%20Lock%20Manager.pdf

__ at Time[4]
____ Connection[T1] lock on Page[Px] is escalated [change of lock-type] to Update[Intent]-Page-Lock (succeeds), and thence to Exclusive-Page-Lock (succeeds)
____ Connection[T1] UPDATE, which requires Exclusive-Page-Lock on Page[Px], proceeds
__ at Time[5]
____ Connection[T2] COMMIT cannot occur, because the little fellow is asleep in *lock-wait*
__ at Time[6]
____ Connection[T1] COMMITS, and its locks are released
____ Connection[T2] lock is now at the head of the LockChain on Page[Px]
__ at Time[7]
____ the scheduler will find Connection[T2] is no longer in lock-wait, context-switch it in, and execute it in a thread
____ Connection[T2] Update[Intent]-Page-Lock on Page[Px] is escalated to Exclusive-Page-Lock (succeeds)
____ Connection[T2] UPDATE which requires Exclusive-Page-Lock on Page[Px] proceeds
____ Connection[T2] COMMIT proceeds, which releases its locks
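The Time[2]..Time[7] sequence above can be sketched minimally in Python. This is an illustration only, not Sybase internals: it is reduced to two lock types (no Update[Intent] locks, and no escalation rules beyond a holder upgrading its own lock), just to show requests chaining on the resource and resolving in order, with no deadlock:

```python
from collections import deque

# Minimal sketch of a per-resource lock chain: requests chain onto the
# resource; a request is granted only if compatible with every lock
# already granted to OTHER holders. A holder may escalate its own lock.
# (Illustration only: a real Lock Manager has more types and rules.)
COMPATIBLE = {
    ("shared", "shared"): True,
    ("shared", "exclusive"): False,
    ("exclusive", "shared"): False,
    ("exclusive", "exclusive"): False,
}

class LockChain:
    def __init__(self):
        self.granted = []        # [(connection, lock_type)] on this resource
        self.waiting = deque()   # chain of blocked requests, in order

    def request(self, conn, lock_type):
        others = [(c, t) for c, t in self.granted if c != conn]
        if all(COMPATIBLE[(t, lock_type)] for c, t in others):
            # escalation replaces any weaker lock this connection holds
            self.granted = others + [(conn, lock_type)]
            return "granted"
        self.waiting.append((conn, lock_type))
        return "lock-wait"

    def release(self, conn):     # EoT: release locks, wake chained waiters
        self.granted = [(c, t) for c, t in self.granted if c != conn]
        while self.waiting:
            c, t = self.waiting[0]
            if all(COMPATIBLE[(held, t)] for _, held in self.granted):
                self.waiting.popleft()
                self.granted.append((c, t))
            else:
                break

# Example 2 on Page[Px]: T1 reads, T2's write chains behind it, T1
# escalates its own lock and commits, then T2 proceeds -- no deadlock.
chain = LockChain()
print(chain.request("T1", "shared"))      # Time[2]: granted
print(chain.request("T2", "exclusive"))   # Time[3]: lock-wait
print(chain.request("T1", "exclusive"))   # Time[4]: granted (escalation)
chain.release("T1")                       # Time[6]: COMMIT releases locks
print(chain.granted)                      # Time[7]: [('T2', 'exclusive')]
```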

> Sybase records T1->T2 (T1 depends on T2 to
> release a lock). The concurrency control system detects a cycle in the
> dependency graph,

We do not have a “concurrency control system”, which is an MV-non-CC+2PL artefact. Straw Man again. We have a regular Lock Manager.

We do not have “dependency graph” or the like. That is only required for imbeciles that can only think in terms of single-threaded processing. Absolute droolers.

There is no “cycle”, with or without the “dependency graph” to determine a “cycle” from. We do have LockChains, that are already ordered per resource; determination of blocks (not “conflicts”) is pedestrian; determination of deadlocks is pedestrian. Everything in the universe exists in a natural hierarchy, the LockChains are a perfect hierarchy. Determination is not an operation, it is already defined [exists] in the data structure.

In this case, the locks have been resolved. Both Connections[T1][T2] have completed without event.

> and rolls back a transaction, which happens to be T1.
> T2 can now commit. Note that without the select statement at time (2)
> there would be no deadlock: the update at (4) would only be delayed
> until T2 commits (or rolls back).

No. As explained in detail above. There is no deadlock. Both Connections[T1][T2] succeed.

--------------------------------------------------
-- Example Pathetic Operation --
--------------------------------------------------

1. Note that you have given a good example for an MV-non-CC+2PL scenario, not an SQL Platform with Ordinary Locking since 1984.
2. Note that it does deadlock.
3. Note that that is why we can say, categorically, that MV-non-CC+2PL program herds:
__ a. lock themselves silly, and
__ b. produce far more deadlocks (that they say cannot happen) than SQL Platforms with Ordinary Locking since 1984.
4. If you understand the abject hysteria of multiple offline stale versions of rows, you will understand that statement [3] more deeply.

----
> In PostgreSQL: each transaction operates on its own private snapshot of
> the database (that incurs an overhead, which I am not discussing for
> simplicity). In the sequence above, that would result in a lost update
> (namely, T2's update),

Yes.

> unless some mechanism were in place to prevent
> that (and there is).

As detailed in my Transaction Sanity doc.

> AFAICT, PostgreSQL implements a "first-updater
> wins" rule: if a transaction attempts an update on an item that has been
> updated by a concurrent transaction, if the latter has already
> committed, the former immediately aborts; if the latter is still active,
> the former is put on hold. This rule may be implemented with locks or in
> other ways. I think that PostgreSQL uses a mechanism different from
> locks for that, but that is not relevant in this context.

Check Predicate Locks.

FYI
When Sybase detects a [true] deadlock (false deadlocks such as in PissGress cannot happen):
a. it retries 5 times (can be configured),
b. rolls back the connection that has the lower cpu usage
__ intending that the connection that has done more work and holds more locks proceeds to completion, and thus releases more locks for all
__ whereas rolling back the connection with more cpu usage would result in more locks being released immediately, but more work overall subsequently
c. alerts the client with Error 1205, so that it can re-submit the Transaction (execute sproc)
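Point [c] implies the standard client pattern: catch the deadlock-victim error and resubmit the Transaction (execute the sproc again). A sketch in Python, with a hypothetical `DeadlockError` and `transfer` standing in for whatever the driver raises for Error 1205 and the actual sproc call:

```python
import time

# Client-side retry on a deadlock victim (Error 1205): the Transaction
# is simply resubmitted. DeadlockError is a hypothetical stand-in for
# the driver's exception carrying Error 1205.
class DeadlockError(Exception):
    errno = 1205

def submit_with_retry(run_transaction, max_retries=5):
    for attempt in range(max_retries):
        try:
            return run_transaction()          # execute the sproc (one Transaction)
        except DeadlockError:
            time.sleep(0.01 * (attempt + 1))  # brief backoff before resubmitting
    raise RuntimeError("gave up after %d deadlock retries" % max_retries)

calls = {"n": 0}
def transfer():                               # hypothetical Transaction sproc call
    calls["n"] += 1
    if calls["n"] < 3:
        raise DeadlockError()                 # chosen as victim twice
    return "committed"

print(submit_with_retry(transfer))            # -> committed (3rd attempt)
```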

----
> Anyway, in Example 2 it is not correct to talk about deadlocks in
> PostgreSQL, because the read statements do not acquire any locks, so
> it's not possible for the two transactions to be waiting upon each
> other. The update at time (3) proceeds normally (such update is local to
> T2). At time (4), though, T1 must be put on hold, because, according to
> the rule above, another transaction has concurrently updated the same
> record and is still active. When T2 commits, PostgreSQL realises that T1
> cannot sensibly continue (because that would result in a lost update),
> and kills it. Now, T2 can commit.

1. Repeating:
>>
I think we agree: in PoopGres, where it is reported as (a) rollbacks, or (b) “deadlocks”, or (c) “serialisation failures”, it is one logical category: the effects that should not happen, but that do happen.
<<
Let’s call that category FalseDeadlocks, because even the [b] reported deadlocks are false, not caused by the app code, but internal.

2.
> it's not possible for the two transactions to be waiting upon each
> other

Yes, if both are reading only.
No, in the real world, they are not Transactions (SELECT does not require a Transaction).
Yes, in the fantasy of “MVCC”, it cannot happen.
No, in the materialised fantasy of MV-non-CC+2PL, the weird and contradicted realm of the asylum, it does happen.

3. “Kill”.
I hope you mean rolled back. If PoopGres kills a task due to internal incapacity and incontinence, or due to a true deadlock, that is stupid and wrong. It should roll the task back and allow it continued execution without interference.

----
> Some remarks (recall that I am assuming SERIALIZABLE):
>
> 1. The outcome, in this specific example, is the same (Tom's balance
> becomes 60), but the underlying mechanisms to achieve that outcome are
> profoundly different. In general, the outcome will be different as
> well (read further).

I disagree, as per explanations above.

> 2. Can there be a concurrent interleaving of operations that leads to
> a deadlock in Sybase, but the same concurrent interleaving does not
> yield a deadlock or a serialization error in PostgreSQL? No, because
> a concurrent interleaving of operations leading to a deadlock is an
> incorrect interleaving. So, if Sybase "encounters a deadlock
> situation", then PostgreSQL must also produce an error in that same
> situation, otherwise it would output an incorrect result. The
> difference is that PostgreSQL may output a "serialization error",
> rather than a deadlock error.

Definitely not. As explained above, so I will not repeat.

Additionally ...

The notion of “interleaved operations” is plain stupid, self-contradictory. The Transaction is Atomic. When PissedGres breaks it up, such that interleaving is possible, it destroys the Atomicity that it is required to preserve. And then fails; fails; fails in re-constituting the Atom that it should not have broken up. Therefore it has FalseDeadlocks (internal inability to resolve itself), in addition to true deadlocks (caused by app code). You have provided a great example in academic terms, the links cited provide evidence.

Sybase has no “interleaved operations”, it preserves the Atom. Sybase has no internal deadlocks, all deadlocks are true (coding in the app).

The two are totally different in both operation and on resource management, separate to the fact that PukeGres is materialised fantasy and separate to the fact that it performs like an elephant in the Alps. The results are not the same, and they cannot be expected to be the same. You are providing an academic scenario, and yes, we can discuss on that basis, but no, we cannot have resulting expectations on that basis. This is the same insanity as in the /Stored Proc for OLTP Transactions/, wherein you cannot separate theoretical operation from implementation operation.

Think about this. No one in their right mind would expect the result in Sybase and the result in Oracle, to be the same. Sure, theoretically, they *SHOULD BE* the same. So what. It isn’t real. In the real world, no one in their right mind would expect the same result from a race horse as they would from a blind elephant.

> > Separate to the "serialisation failures", which are the rough
> > equivalent to deadlocks in a SQL Platform (with a Lock Manager)
>
> Ok, accepted.

So now you are flipping, yet again, and you accept this:
>>
I think we agree: in PoopGres, where it is reported as (a) rollbacks, or (b) “deadlocks”, or (c) “serialisation failures”, it is one logical category: the effects that should not happen, but that do happen.
<<

----
> 3. Does the difference matter to the DBA? Probably not: what matters in
> practice is that a transaction was rolled back and must be retried.

False.

First, it happens often, far more than the heavy marketing admits; more precisely, marketing declares that it “can never happen”.

Second, when the users chase the developers, the developers chase the DBAs; therefore the DBAs have to deal with it. Due to ignorance re the causative problem, they fiddle and fart around with the effects, changing the data structures that contain the tables; changing the few parameters that can be changed.

Third, the DBAs will impress upon the developers that rollbacks and deadlocks and serialisation failures are caused by code, that there are many low-level non-SQL ExtraPoopGress methods to reduce such occurrences [but create new deadlocks], and thus the developers are forever changing code. This is in addition to the normal forever-changing-code, because PoopDePlopGres is forever changing with each new release, demanding code changes.

The problem is horrendous, and hilarious, when perceived by those who use a SQL Platform, which is (a) SQL compliant, and (b) backward compatible, wherein we do not even think of changing code, and we do not have low-level primitives to mess with the internal locking (Lock Manager or 2PL).

----
> 4. Is there a concurrent interleaving of operations that makes PostgreSQL
> return a "serializable error", but the same interleaving succeeds in
> Sybase? Yes, because MVCC reduces the number of acceptable
> interleaving of operations. For instance, remove the select statement
> from T1 (but keep the same timing for the other operations): then in
> Sybase both T1 and T2 will commit, but PostgreSQL will still kill T1
> as soon as T2 commits (as per the above rule).

Your result is correct. But not for those reasons. I have explained the real reasons in detail.

----
> 5. Does this difference matter to the DBA? Probably yes: under the same
> workload, PostgreSQL might rollback

And false-deadlock.
And serialisation fail.
All of which is marketed as “not possible”, fraudulently.

> more transactions. According to
> the posts you cite, that appears to be the case (note that I am not
> discussing other important details, such as updating indexes, or the
> overhead of versions—I am keeping it simple).

Of course.

> The more write-heavy the
> concurrent workload, the more PostgreSQL is penalized

Not penalised (which implies an external authority), but pig-poops itself, vomits and then eats its own vomit, because the entire problem is self-created, and they sing the Mantra loudly, while operating contra to it, in a feverish attempt to resolve the unresolvable, and it finally expires, but produces a false message implying that the non-SQL code has a problem.

> compared to
> a locking-based system. This seems what Uber has discovered the hard
> way.

They experienced the fraud that was perpetrated on them: that the marketing is false; that the Mantra is false; that the reality is PoopDePooGress poops itself; that OLTP is simply not possible, even after low-level non-SQL code changes.

----
> The more write-heavy the concurrent workload, the more PostgreSQL is penalized
> compared to a locking-based system.

That is the most important thing that you have said today, perhaps all year. Congratulations.

Now you are *starting* to realise what I have stated from ten years ago, and as detailed in the last two months in several threads on c_d_t, and as summarised in my Transaction Sanity doc. Which you have been arguing against (up to now), stating the opposite, that MV-non-CC systems are better suited to OLTP blah blah.

Evidenced fact is, contrary to the academic postulations and hundreds of academic papers (filth), PoopGres is not even remotely capable of OLTP.

Congratulations are in order, because you have now reached this realisation via an academic path. I await your paper that declares it. You will have to declare a set of definitions for Standard terms right after the abstract.

----
> 6. I am perfectly aware that you would not write T1 the way I have done,
> and that you would avoid deadlocks. That's not the point of this post.
> T1 is written that way for explanatory purposes only.

Of course.

----
> So, PostgreSQL might have an edge, if any, in prevalently read-only
> workflows, because reads do not acquire locks.

Academically, hypothetically, yes.
In reality, No.
Because a herd of 100 or 1,000 un-architected processes, written by a bunch of ignorant rabble that believes in fantasy, that fight over one set of resources (no versions for read-only), performs 2 or 3 orders of magnitude slower than a single process that is beautifully architected by PhD-level engineers with hundreds of patents, and has forty years of maturity.

> But you would perform
> reads in Sybase at READ COMMITTED, where locks are held for a very short
> time (it's still an overhead, though).

Yes.
At millisecond speeds.
5 millisecs plus the overhead of 0.1 millisecs is still 5 millisecs.
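A sketch of the two read styles in ASE, for reference; the table and column names are hypothetical:

```sql
-- READ COMMITTED (ASE isolation level 1, the default):
-- the shared lock is released as soon as the page is read
SELECT balance
  FROM Account
 WHERE name = 'Tom'
AT ISOLATION READ COMMITTED

-- SERIALIZABLE-style read: HOLDLOCK keeps the shared lock until EoT
SELECT balance
  FROM Account HOLDLOCK
 WHERE name = 'Tom'
```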

> So, it's not clear to me who the
> winner would be.

Well, you are edging slowly towards reality. Next task is your benchmark, wherein you will obtain hard, cold evidence. But if you stop singing hymns that are false, and you inspect the operations of actual Sybase (instead of projecting your Straw Man MV-non-CC+2PL ideology onto it), and you study the Architecture docs I have given (as well as the manuals if you so desire), it is possible to form logical conclusions, such that you will not be paralysed with shock when you run the benchmarks.

Tony Andrews, a famous Oracle guru, tried the same nonsense with me, and we ran benchmarks. He never got over the results. Ten years on, he is still suffering from depression. Severe changes to the internal belief system, is not to be taken lightly.

> How do you justify preferring Sybase (locking) over
> PostgreSQL (MVCC) in a context of almost exclusively read-only
> transactions? Note that if transactions are read-only, versions are not
> created.

First, we do not have Transactions for read-only operations, thus you do have an overhead that we don’t have. I would not suggest that that is equal to the “added” locking we have for read-only operations, because it is minuscule in comparison.

Second, whatever you do produce as a result set is false wrt SQL compliance, whereas ours is true. A brain-dead series of offline stale uncommitted (you have no real implementation of COMMIT) versions of rows is not comparable to a series of true committed online rows at READ COMMITTED. Sure, a new version may not be *created*, but the version taken and used is false, just as when a version *IS* created.

Third, and most important, the problem is in your question. It is stated with a limited scope, that is yours, not mine. I do not justify Sybase over AsylumGres “in a context of almost exclusively read-only [operations]”.

I justify Sybase over AsylumGres for
- OLTP only;
- AND for mixed OLTP+OLAP,
- AND for OLAP only.
Because I prefer reality to materialised fantasy, in all contexts that can be related to the provision of an SQL platform, an SQL and ACID compliant architected server. Refer to the Architecture vs Non-Architecture performance links above.

> Does this assessment satisfy you?

Yes, it is the best I have received from an academic. No fuss; no nonsense; no idiotic argumentation; no dishonest Straw Men (that which is due to indoctrination is excused); no splitting hairs. You are engaging your lineage back to 1492. You are starting to heave (the harsh but non-issuing action that precedes vomiting) at Modernism. Progressively leaving the asylum, in the direction of science.

Cheers
Derek

Nicola

Aug 24, 2021, 12:36:37 PM
Derek,
I run commands at human speed, because I can't understand full-speed
execution if I cannot understand snail-speed execution. I am planning
a full-speed benchmark, but bear with me for now.

On 2021-08-24, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:

>> Example 2 (assume SERIALIZABLE):

> 2. As an academic example, yes, it is a [true] deadlock.

Ok. More on that below.

> 3. In real life, the two UPDATES bracketed by BEGIN-COMMIT TRAN in
> Connection[T1] are Atomic, in the same contiguous code block, and thus
> will execute at Time[0]. 4. There is no “interleaving” of Atomic
> operations.

> The notion of “interleaved operations” is plain stupid,
> self-contradictory. The Transaction is Atomic.

> Sybase has no “interleaved operations”, it preserves the Atom. Sybase
> has no internal deadlocks, all deadlocks are true (coding in the app).

> You are not getting it.

I am starting to. In general, how is an Atomic contiguous code block
defined? Asked the other way round, what does break a contiguous code
block, allowing interleaving? Surely, BEGIN and COMMIT. Anything else
in between?

> as opposed to poor code], without changing either the database or the
> app code, as a fixed price service. If you are interested, I can
> provide proof.

Yes, please. I do read your links, but I do not always understand their
dense content at once.

> -------------------------------------------
> -- Example 2 Sybase Side --
> -------------------------------------------

> Let’s say Row[Tom] is on Page[Px].
> __ at Time[2]
> ____ Connection[T1] obtains a Shared-Page-Lock on Page[Px], until EoT

Yes.

> __ at Time[3]
> ____ Connection[T2] obtains an Update[Intent]-Page-Lock on Page[Px]
> (which does not escalate to Exclusive), and the lock is chained onto
> Page[Px], behind Connection[T1]’s lock

Ok.

> ____ Connection[T2] is in TaskStatus *sleep*, in LockStatus *lock-wait*

So far, so good.

> __ at Time[4]
> ____ Connection[T1] lock on Page[Px] is escalated [change of
> lock-type] to Update[Intent]-Page-Lock (succeeds),

Ok.

> and thence to Exclusive-Page-Lock (succeeds)

With all due respect, are you sure? T2 already has an Update lock on
that page. T1 cannot acquire an Exclusive lock *on the same page*,
right? At least, this is my understanding from the compatibility table
at §1.5 in ASE's Locking and Concurrency Control guide.

>> [At time [4], Sybase] rolls back a transaction, which happens to be
>> T1. T2 can now commit.

> No. As explained in detail above. There is no deadlock. Both
> Connections[T1][T2] succeed.

Really? See above.

>> Anyway, in Example 2 it is not correct to talk about deadlocks in
>> PostgreSQL, because the read statements do not acquire any locks, so
>> it's not possible for the two transactions to be waiting upon each
>> other. The update at time (3) proceeds normally (such update is local to
>> T2). At time (4), though, T1 must be put on hold, because, according to
>> the rule above, another transaction has concurrently updated the same
>> record and is still active. When T2 commits, PostgreSQL realises that T1
>> cannot sensibly continue (because that would result in a lost update),
>> and kills it. Now, T2 can commit.
>
> 1. Repeating:
>>>
> I think we agree: in PoopGres, where it is reported as (a) rollbacks,
> or (b) “deadlocks”, or (c) “serialisation failures”,

Only (b) and (c). (a) is a consequence of (b) or (c), not a separate
error.

> it is one logical category: the effects that should not happen, but
> that do happen.

Well, yes.

> Let’s call that category FalseDeadlocks, because even the [b] reported
> deadlocks are false, not caused by the app code, but internal.

It may be that PostgreSQL deadlocks in situations in which Sybase does
not. Not sure that Example 2 is such a case: waiting for your reply.

> 3. “Kill”.
> I hope you mean rolled back.

Yes.

>> 2. Can there be a concurrent interleaving of operations that leads to
>> a deadlock in Sybase, but the same concurrent interleaving does not
>> yield a deadlock or a serialization error in PostgreSQL? No, because
>> a concurrent interleaving of operations leading to a deadlock is an
>> incorrect interleaving. So, if Sybase "encounters a deadlock
>> situation", then PostgreSQL must also produce an error in that same
>> situation, otherwise it would output an incorrect result. The
>> difference is that PostgreSQL may output a "serialization error",
>> rather than a deadlock error.
>
> Definitely not. As explained above, so I will not repeat.

I don't think that what you have said (whatever Sybase does) contradicts
my point. My point starts from the assumption that clients concurrently
submit a set of transactions *and* Sybase returns 1205 to one of them
(say, badly coded applications). That means that Sybase had to roll back
a transaction to prevent incorrect behaviour.

Now, if the clients submit the same transactions to a PostgreSQL server,
*and* if the server schedules those transactions *the same way* as
Sybase did, then PostgreSQL *must* return an error to one of them.

So, the point is: PostgreSQL cannot make all the transactions commit
where Sybase has rolled back one, because Sybase did that to prevent an
incorrect execution. PostgreSQL would have a bug if it did not do the
same.

The vice versa, however, is not true (that was my other remark). There
are cases in which PostgreSQL rolls back some transaction, but Sybase,
under the same conditions, is able to commit all of them. This is one
such situation:

Example 3

Wall time | T1 | T2
----------|-------------------|------------------
(0) | begin |
(1) | | begin
(2) | | update Tom's data
(3) | update Tom's data |
(4) | | commit
(5) | commit |

While Sybase makes T1 wait until T2 commits and then commits T1,
PostgreSQL rolls back T1 as soon as T2 commits.
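Example 3 can be replayed verbatim in psql. A sketch, assuming a hypothetical table Account(name, balance), at SERIALIZABLE (REPEATABLE READ behaves the same way here):

```sql
-- T1                                     -- T2
BEGIN ISOLATION LEVEL SERIALIZABLE;
                                          BEGIN ISOLATION LEVEL SERIALIZABLE;
                                          UPDATE Account SET balance = 50
                                           WHERE name = 'Tom';
-- T1 blocks behind T2 (first updater)
UPDATE Account SET balance = 60
 WHERE name = 'Tom';
                                          COMMIT;
-- T1 is now rolled back:
-- ERROR: could not serialize access
--        due to concurrent update
```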

I agree with you that this is bad.

Nicola

Derek Ignatius Asirvadem

Aug 26, 2021, 9:25:49 AM
Nicola

> On Wednesday, 25 August 2021 at 02:36:37 UTC+10, Nicola wrote:
> > On 2021-08-24, Derek Ignatius Asirvadem wrote:
> >
> > 3. In real life, the two UPDATES bracketed by BEGIN-COMMIT TRAN in
> > Connection[T1] are Atomic, in the same contiguous code block, and thus
> > will execute at Time[0].
> >
> > 4. There is no “interleaving” of Atomic operations.
> >
> > The notion of “interleaved operations” is plain stupid,
> > self-contradictory. The Transaction is Atomic.
> > Sybase has no “interleaved operations”, it preserves the Atom. Sybase
> > has no internal deadlocks, all deadlocks are true (coding in the app).
> > You are not getting it.
>
> I am starting to. In general, how is an Atomic contiguous code block
> defined? Asked the other way round, what does break a contiguous code
> block, allowing interleaving? Surely, BEGIN and COMMIT. Anything else
> in between?

I cannot accept that a teaching professor does not understand the established CS term /contiguous/. This is one of those basic things that one has to apprehend and deal with in the first few months of an undergrad CS course, when writing the first file-system or C-ISAM program.

Second, this is the same problem as you had when you could not tell the difference between a Transaction (everything between a BEGIN-COMMIT pair) and a Transaction stored proc that contains it.

Third, we yet again crash into the overarching problem, that the freaky academics have private definitions of established industry terms, or worse, re-definitions, such that you can commit scientific fraud.

1. The Oxford English Dictionary, because humans do not use a word in a particular context that is outside the original English meaning (except for the asylum, of course):
contiguous |kənˈtɪgjʊəs|
adjective
• sharing a common border; touching: the Southern Ocean is contiguous with the Atlantic.
• next or together in sequence: five hundred contiguous dictionary entries.

2. In CS, /contiguous/ is used to describe PHYSICALLY adjacent; touching; sequential, memory address spaces (accessed in blocks nowadays, not bytes), or disk blocks (read/write is block mode, not character mode).

3. When applied to code segments, it means exactly the same thing: that the code is PHYSICALLY continuous; touching; sequential.

Coders understand that when using a real language such as SQL (more, later), due to IF-THEN-ELSE structs and GOTOs, and to keep (eg) the error handling in one contiguous code block, the code may not be 100% contiguous, which is understandable, but the whole code between BEGIN and COMMIT/ROLLBACK must be [imperfectly] contiguous.

THIS GIVES THE TRANSACTION //CODE// ATOMICITY, THE [A] IN ACID

Example 1
Transaction Sanity doc, pages 3; 4; 7. Appreciating that it is pseudo-code, not code. Each blue rectangle is a contiguous code block.

Example 2
The latest version of the OLTP Transaction Template. Here it is real SQL code.
__ https://www.softwaregems.com.au/Documents/Article/Database/Transaction/Visit_Add_tr%20DA.sql
a. the entire content is contiguous
b. the content between BEGIN and COMMIT is contiguous

But note, that is a simple Template. In the real world implementation, as any developer would know, there would NOT be several
__ RAISERROR/ROLLBACK/RETURN
all over the shop; there would be a single block
__ ERROR_HANDLE:
which handles all errors and does a single
__ RAISERROR xyz
__ RETURN q
which allows the multiple RAISERROR/ROLLBACK/RETURNs in the code to be replaced with
__ GOTO ERROR_HANDLE

Hence the code in the stored proc, and particularly in the BEGIN-COMMIT/ROLLBACK sequence, is still said to be contiguous.
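The single error-handler pattern described above might look like this in T-SQL. This is a sketch of a fragment from inside a Transaction sproc, not taken from the Template: the table, variables, and error number are illustrative only (ASE user error numbers must be 20000 or above):

```sql
BEGIN TRAN
    UPDATE Account SET balance = balance - @amount WHERE name = @from
    IF @@error != 0 GOTO ERROR_HANDLE

    UPDATE Account SET balance = balance + @amount WHERE name = @to
    IF @@error != 0 GOTO ERROR_HANDLE
COMMIT TRAN
RETURN 0

ERROR_HANDLE:
-- single contiguous block: roll back, raise one error, return
ROLLBACK TRAN
RAISERROR 20001 "Transaction failed"
RETURN 1
```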

4. While SQL is a real language (a) as defined by Codd, (b) as defined by ANSI/IEEE/IEC/ISO, and (c) implemented as such [with additions to make it a full; complete; powerful language], in Sybase; DB2; MS; Informix, the pathetic crap that is provided by freeware and Oracle, is not a real language; not SQL, by any stretch of the imagination (ok, the asylum thinks it is, in denial of actual knowledge of SQL).
- Stored procs were introduced only in the last release, which means only a pathetic fraction of the language is implemented
- Constraints are implemented as internal triggers
- it is not SQL compliant at all (which binds the code to the freeware)
- process distribution passed off as “parallelism”
- masses of non-SQL and pretend-SQL, which keep changing with every major release (you guys love re-writes)
- the list is endless

----
> Asked the other way round, what does break a contiguous code
> block, allowing interleaving?

Nothing.

As detailed in this and other threads, in order to allow “interleaving”, one has to break up the Atom. Which renders the code; the Transaction NON-ATOMIC, which fails the [A]tomicity in ACID, and thus fails ACID.

Yet again, because it breaks up the Atom, into fragments, in order to allow “interleaving”, and it does “interleave”, in order to make the snail move like a tortoise, PoopGress simply does not support ACID. It is too stupid for a human being to even contemplate. And then you have the stupefying “schedule”, for single-threaded processing. And then the totally insane “serialisation conflicts” and the wringing of hands and the gnashing of teeth that goes with it, straight to hell. You guys teach this as “computer science”, as a “database” course. Totally and utterly insane. Which is why, yet again, as evidenced, you teach schizophrenia as “education”.

Humans do not break up the Atom. Humans do not re-define ACID or Atomicity, in order to get fragments of the broken-up Atom (pig poop) to “qualify” as “atomic”, and then lose control of their bowels when they try to re-constitute the fragments. Humans in 2021 understand Multi-Threading, we have had it since 1976, in the preceding form (Re-Entrant code) since 1965. But the intellectual level of you guys is still stuck in knuckle-dragging single-thread mode. Dare I say it, ok, because in order to understand multi-thread, you have to first understand single-thread. Hopefully you will attain the goal of understanding Multi-Threading this century.

----
> > -- LOCK DURATION --
> > I offer getting rid of deadlocks [of that nature, due to lock duration,
> > as opposed to poor code], without changing either the database or the
> > app code, as a fixed price service. If you are interested, I can
> > provide proof.
>
> Yes, please. I do read your links.

SG was level 3 support for this customer, a large Australian Insurance company. They had many systems, both 3rd party apps and home-grown, mostly using one or more Sybase databases on a single server machine (Production). The core system (that all other systems used to some degree) was a 3rd party document handling app and database. The supplier had gone out of business, but it was really good in some ways, certainly ahead of its time when purchased, thus they retained it, and kept building systems around it. We supply support for unsupported (old) releases of Sybase. The core database was on a Sybase release that had been end-of-lifed ten years prior. We did not recommend moving to a later or current release, and consequently having to deal with a mountain of new problems. The app & db worked on an EOL release, we don’t fix something that is not broken, and we maintained that. The main problem in the 3rd party app and database was lock contention, including deadlocks, which slowly increased over time.

The on-site DBAs (several over the decades) were quite alright at handling day-to-day problems, but hopeless at planning and executing any projects that would [obviously] alleviate the ongoing problems. We had offered to fix all problems without them having to change their app code, years before, but it was only when the number of deadlocks became untenable (interfered with work badly; staff queueing up at document readers; etc) that they said, ok, ok, just do it.

They had no idea that lock duration, not number of locks or lock types, was the empirical cause. And I was not about to teach a stressed DBA what he did not understand. The project was a straight server rebuild, all data structures, done properly, and allowing for a few years of growth. In case it is not obvious, that act causes (a) the data structures, and (b) the data therein, to be contiguous. The result is, hey presto, minimised lock duration, and therefore, elimination of lock contention caused by lock duration, total elimination of deadlocks.

//
Of course, the deadlocks are written in the app code, and that could not be touched (3rd party app; supplier out of business; no source code). They remain in their little underground caves, into which I have banished them, to show their miserable faces, some day, if and when lock duration increases, due to the data structures getting fragmented, and thus non-contiguous.

Hence I built the data structs to handle two years of growth without losing speed. That gave them enough time to replace the app & db completely, with a modern document handling system. Otherwise, just re-run the scripts.
//

When we examined the server for the purpose of quotation, we identified other various errors, that should sensibly be done together in a server rebuild. Eg. in order to maintain speed in the existing data structs, they had archived a bunch of data into a separate database, via VIEWS that had to be changed with each archive increment. That is a common fix-it that ignorant DBAs perform, it is ridiculous, there is no problem at all for a qualified DBA to maintain speed on a massive table. I reversed all those errors, and gave them single tables that were blindingly fast.

The project was one calendar month, which included tight scheduling of all resources from both SG and the customer; all scripting (we have a full set, just minor mods for local issues); all testing on test servers; proper allocation on the SAN; and the execution of the rebuild on the Production server itself, in a single Easter weekend (four tranches; ten hour days; scripts left running overnight; check and minor fix-its the next morning).

These are extracts from the executive report (the full report is 26 pages; the appendices [Before vs After, one line per index] are 56 pages). I had to be political or “soft” in order to minimise damage to the on-site DBA’s credibility. For the following, read DEADLOCKS:
_ application stability
___ (executive perception: causes at least a few lost transactions, but when the contention piles up (lock storm), rather than running around fixing each problem, the server has to be restarted, something that Sybase customers never have to do [all the servers I have looked after run for at least a month, downtime for scheduled purposes only] )
_ data contention
___ (that is the way the staff perceive it)

__ https://www.softwaregems.com.au/Documents/Reference/Reference%20T%20A.pdf
__ https://www.softwaregems.com.au/Documents/Reference/Reference%20T%20B.pdf
__ https://www.softwaregems.com.au/Documents/Reference/Reference%20T%20C.pdf

a. The above gives enough detail for a full executive level report, with pretty charts (extracts only shown).

b. If you want to inspect the index level issues, let me know, I will give you the appendices. Given your love for argumentation, no doubt I will live to regret it.

c. For server rebuilds, I guarantee at least 10 times improvement in speed, but I actually deliver much more. Speed (Before vs After) is measured two ways: the standard server monitoring in Sybase; and 20 nominated queries, their worst queries.

__ For the nominated queries, all ran 10 to 100 times faster. There were a few that ran 1,000 times faster, but mention of that caused a few political problems for the DBA, so I did not state it in the executive report.

__ For monitoring server performance, which really is throughput, which was both an ongoing requirement during the project, and a qualitative metric to demonstrate the guarantee and approve payment, the raw data was the standard Sybase server monitoring, which is massive. Our SG scripts process all that and produce a straight-forward report, such as this (for a different customer). I have given you this before, with notes on how to read it, with no response from you. Feel free to ask specific questions:
__ https://www.softwaregems.com.au/Documents/Article/Sysmon%20Processor/Sysmon%20Processor%20Eg%20Date%20Delta.pdf

Cheers
Derek

Derek Ignatius Asirvadem

Aug 26, 2021, 12:22:26 PM
Nicola

> On Wednesday, 25 August 2021 at 02:36:37 UTC+10, Nicola wrote:
>
> Derek,
>
> I run commands at human speed, because I can't understand full-speed
> execution if I cannot understand snail-speed execution.

I don’t accept that from a teaching professor in a real world university.

Thanks for the admission. That is precisely one of the well-known problems with academia, it comes up regularly in our discussions, and it is a serious impediment to understanding ANYTHING in the real world. In my book, I have an entire chapter to deal with it. The chapter title is *Atomicity*, the problem is:

------------------------------
-- Denial of Atom --
------------------------------

I can’t give you chapter and verse, just some direction. Of course, I have repeated this many times, so I will try different words. The problem is rife, in the indoctrination system that passes for “education”, across all subject areas. Philosophically the problem is defined as *The Whole and the Parts*. I can’t give you the philosophical lecture, but you must understand that this was defined and resolved by Aristotle in 350BC. When you break it or deny it, you are operating at an intellectual level that is more primitive than the ancient Greeks. The indoctrination system that you teach, teaches young minds to deny reality, and to fabricate a fantasy, hence I assert that you teach schizophrenia (the alternate clinical term for schizophrenia is fragmentation).

The thing that exists in the real world is an Atom, the thing that people whose brains have not been scrambled by indoctrination, apprehend, is the Atom. First, you deny the Atom (the Whole). Therefore, second, you only perceive fragments (the Parts). Third, you are therefore self-condemned to deal with only the fragments (Parts). Totally clueless as to the Atom that makes sense of the fragments.

I see a cat, you see four legs; whiskers; and a swishing tail. I see an Invoice, you see an InvoiceId; 10 item lines; and an address. Sheila sees a car, you see the engine and worry about whether it is four-stroke or two-stroke. I see a Relational Key, you see only the components, and worse, you need 17 “NFs” to grapple with them. You look at a coccyx and perceive a vestigial tail because that validates your heavolution fantasy; I see a carefully designed structure that supports all the muscles attached to the hips and legs, permitting the only animal that can sit.

Sheila can drive the car without understanding anything about internal combustion engines, you can’t drive, you can’t even leave your desk, because you are figuring out how a four-stroke engine does not explode when the strokes obviously contradict each other. What is worse, you have the cheek to declare that no one can drive without a full understanding of an internal combustion engine.

In our recent exchange in this thread, we are dealing with two instances of Atomicity, that work perfectly in the real world, particularly in SQL Platforms, that (a) you are in denial of, and thus (b) you obsess about the fragments, (c) finding “problems” that exist only in the concocted abstraction of the asylum, and thus (d) you cannot comprehend how the real world simply does not have those “problems”.

You are seeing the moon in the river, and you are asking me to fish it out.

If you are going to succeed in this welcome crossing of the chasm between the isolated collective ideology that has been academia in this field, as evidenced, for fifty years, this century, you will have to start breaking your denial of the real world, and learn about how un-indoctrinated human beings think, how we have been thinking from 350BC to 1911 (the implementation of Modernism in science, and thus its destruction).

Because you are heavily indoctrinated in insanity; denial of reality; anti-science, you have to consciously choose sanity; reality; science. Starting at the foundation, the Four Laws of Thought. You have the God-given Power to think whatever you like, via Free Will, but you do not have the right to do so. Licence comes from conforming to the absolute, objective truth, which is the only Right. Right Thinking, is the foundation of Science.

In case it is not obvious, I am all for the exercise of the intellect, which is abstraction, if it is at least one level of abstraction away from reality. Whereas you guys do not have that constraint, and thus fantasy.

The two instances are:
_ ACID Transactions = The Transaction is Atomic
___ As detailed in the other thread, Atomic in spirit, in all deployments: server code (execution of submitted SQL, ensuring ACID; client- or middle-layer-side SQL; Transaction SQL code)
___ DO NOT BREAK IT UP
___ There is no *interleaving* of Parts
___ There is no re-constitution of Parts afterward (it would not be the original Atom)
___ If you didn’t break it up in the first place, you wouldn’t have to re-constitute it, in the second place, and suffer the problems of re-constitution in the third place.
___ Yes, I understand that this insane thinking is indoctrinated; inculcated into your mind, into “MVCC”, PoopGres, starting with the StoneFreaker and his hysterically self-contradictory Mantra, which just maintains the insanity of singing one thing while performing the opposite.

_ Contemplation of Transactions (eg. deadlocks)
___ Ditto, on all points
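
To make the first instance concrete, a minimal Transact-SQL sketch (the table and names are illustrative only, not from this thread) of a Transaction written and submitted as one contiguous, Atomic unit:

```sql
-- Hedged sketch: the Whole, never the Parts.  Nothing between BEGIN
-- and COMMIT waits on a user or a client round-trip; the server
-- executes the batch as one Atomic unit.
BEGIN TRANSACTION
    UPDATE Account
        SET balance = balance - 10
        WHERE name = 'Tom'
    UPDATE Account
        SET balance = balance + 10
        WHERE name = 'Sue'
COMMIT TRANSACTION
```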

I appreciate that that is your starting point, and therefore I have to entertain it to some degree, in order to get you to see what lies on the other side, across the chasm. But that is an academic exercise only, that does not validate your thinking, the exercise is abstract, it does not occur in the real world. When I teach the course, I do not allow idiotic abstract argumentation, I allow real world concepts only, but here, I have to allow the idiotic concepts to some degree.

No, you do not have to work at snail speed, because that is fiddling and farting with the Parts while denying the Whole. You are sitting there in a position in which you cannot drive the car that Sheila can, thinking yourself superior. No, we don’t need to hear why you are superior, because it is utterly false, any “reason” for being unable to drive is utterly false.

> I am planning
> a full-speed benchmark, but bear with me for now.

Yeah, sure. But please stop this academic speculation about things you do not know, and get to a real world implementation as soon as you can. That, and only that is what I agreed to.
Re real world operation under stress, of course I am sure. Normally, I run the course only with access to a server, so that participants can try it immediately. Trying to imagine these things, to understand, without any hard evidence, is known to be impossible. It is *race conditions*, and you have no clue how to apprehend it (you need a benchmark that stresses the issues). Aside from a benchmark, operating at snail speed, I recommend you do the same, run two instances of isql for [T1][T2], and a third to watch, via sp_who & sp_lock.

Re the academic exercise, which is an abstraction, no, I cannot be sure, and I couldn’t care less. You can take that sequence, or reverse the sequence, and dream about the tooth fairy as much as you like. Just do not declare that that imagined sequence, of broken up Parts of an Atom that should not be broken into Parts, is a “problem” in the real world, do not make any declarations, because they will be schizophrenic; idiotic.

I entertained the academic exercise (abstract; non-real) to some degree, for the purpose of assisting your understanding only, but this is going beyond it, because you are implying real world consequences to your non-real; abstract notion. You cannot build the world from abstract notions, the concept is insane, anti-science. The purpose of science is to observe the real world and to DETERMINE the truth, not to PRODUCE the truth, from non-real abstract notions, which will be nothing but hysterical fiction because it is isolated from reality. You need to experiment with reality and determine the truth.

Likewise, precision in this academic example is not possible, it is for understanding only. So in that sense, no, I am not sure because it can be argued academically, either way, until the cows come home.

The second over-arching issue is this business of redefined terms, which guarantees laboured communication with real world implementors, as evidenced, and maintains your inability to produce anything intended for it. In case it needs to be repeated, I don’t have private definitions, I conform to the absolute definitions.

Finally, the main point is this. Your academic exercise can only be contemplated because you have broken up the Atom, the Atomic Transaction. That is the limit of my explanations and entertaining your academic exercise. I will not discuss it further.

----

Further, it would be silly to argue against it, especially to argue /from/ the manuals. If you push it, the explanation will break on precision, but the sense, the overall explanation of ESCALATION that I gave, will not change ... you will just feel better because some things that were uncertain in your mind become certain in your mind, which has nothing to do with reality.

MV-non-CC systems have a stupid primitive 2PL, you need to learn about the operations of a real Ordinary Lock Manager, such as ESCALATE and PROMOTE (which you are doing, very slowly). But it should be from a proper course or from real world experience (benchmark !!!), not from the manuals. It is not possible to learn a new product via the product manuals, you will form a skewed understanding. I am sure you know this. It is not possible to understand race conditions from an abstract notion, you need a benchmark. I have done hundreds, therefore I make declarations from a rock-solid body of experience, which is why I produce docs that explain things that the manuals cannot.

I will not argue for or against the content in the manuals, but I will give you this guidance, if only because you reference it:

§1.5 Table
is Lock Sufficiency, horribly incomplete for our discussion, because it gives only the genera (three) of locks (eg. “shared”), not all 66 LockTypes; the species (eg. Shared-Intent; Shared-Page; Update-Intent). Most people would interpret a “shared” lock as the one that is held fleetingly for read operations. A Shared-Page-until-EoT is not at all the same.

§1.4 Table
is Lock Compatibility, again five genera, not species.

Further, while Lock Sufficiency & Lock Compatibility are explained, badly, Lock Escalation is not explained at all. Therefore, I produced something that the Sybase community needed, decades ago, the Sybase Lock Manager PDF. The entire diagram on page 2 is Lock Escalation.

No, it is not a full text explanation, yes, it is a basis for discussion (that has not changed in 20-something years). Sorry, all my docs are dense, as declared, they are memory tags, the final page of *all* lecture notes containing *all* lectured points, to trigger the memory of the course in the reader’s mind. It cannot be readily understood by someone who is new to Sybase or a Lock Manager. I repeat, the 2PL academics know and love is filth, you have to get that out of your mind, in order to even begin to understand a real Lock Manager, that the real world has had since 1984.
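
As a pointer only (from memory, so verify the names and parameter order against your ASE version’s manuals), page-to-table lock promotion is configurable, server-wide and per object:

```sql
-- Hedged sketch: ASE lock promotion thresholds (values illustrative).
-- Server-wide defaults:
EXEC sp_configure "page lock promotion HWM", 400
EXEC sp_configure "page lock promotion LWM", 200
-- Per table:
EXEC sp_setpglockpromote "table", "Account", 200, 400, 100
```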

> At least, this is my understanding from the compatibility table
> at §1.5 in ASE's Locking and Concurrency Control guide.

If you really mean §1.4 Lock Compatibility (not §1.5), that applies to [T2] at Time[3].

----
> >> [At time [4], Sybase] rolls back a transaction, which happens to be
> >> T1. T2 can now commit.
>
> > No. As explained in detail above. There is no deadlock. Both
> > Connections[T1][T2] succeed.
>
> Really? See above.

The [old] “above” in the previous post does not count, as explained in the new “above” in this post.

For the rest. If you argue this, it boils down to the millisec or microsec difference between Time[3] and Time[4]. The sequence of UPDATE statements can be switched, obtaining different results, obviously, but in any case, it is good for abstract understanding only. It does not portray the real world.

I won’t discuss this further, it is an academic example for understanding, using hysterically stupid Parts of an Atom, while denying the Atomicity of the Atom. I especially won’t argue with a novice who reads manuals and obtains “certainty” from such an act. Feel free to run 3 isql sessions and obtain hard, cold evidence, at snail speed, or better yet, a benchmark to understand race conditions, at normal human speed. Try many different sequences.

----
> >> Anyway, in Example 2 it is not correct to talk about deadlocks in
> >> PostgreSQL, because the read statements do not acquire any locks, so
> >> it's not possible for the two transactions to be waiting upon each
> >> other. The update at time (3) proceeds normally (such update is local to
> >> T2). At time (4), though, T1 must be put on hold, because, according to
> >> the rule above, another transaction has concurrently updated the same
> >> record and is stil active. When T2 commits, PostgreSQL realises that T1
> >> cannot sensibly continue (because that would result in a lost update),
> >> and kills it. Now, T2 can commit.
> >
> > 1. Repeating:
> >>>
> > I think we agree: in PoopGres, where it is reported as (a) rollbacks,
> > or (b) “deadlocks”, or (c) “serialisation failures”,
>
> Only (b) and (c). (a) is a consequence of (b) or (c), not a separate
> error.

Ok

----
> > it is one logical category: the effects that should not happen, but
> > that do happen.
> Well, yes.
> > Let’s call that category FalseDeadlocks, because even the [b] reported
> > deadlocks are false, not caused by the app code, but internal.
>
> It may be that PostgreSQL deadlocks in situations in which Sybase does
> not. Not sure that Example 2 is such a case: waiting for your reply.

Yes it is. In both Examples, you provide evidence that PissGriss deadlocks or “deadlocks” and then rolls back, where Sybase does not. In the academic scope and discussion, it is certain. But I still would not make grand declarations about how that would be realised in the real world, until I have a benchmark. The cited cases certainly provide evidence that PoopGress plops itself when it should not, rather than a specific instance of some abstract example or this abstract Example.

----
> >> 2. Can there be a concurrent interleaving of operations that leads to
> >> a deadlock in Sybase, but the same concurrent interleaving does not
> >> yield a deadlock or a serialization error in PostgreSQL? No, because
> >> a concurrent interleaving of operations leading to a deadlock is an
> >> incorrect interleaving. So, if Sybase "encounters a deadlock
> >> situation", then PostgreSQL must also produce an error in that same
> >> situation, otherwise it would output an incorrect result. The
> >> difference is that PostgreSQL may output a "serialization error",
> >> rather than a deadlock error.
> >
> > Definitely not. As explained above, so I will not repeat.
>
> I don't think that what you have said (whatever Sybase does) contradicts
> my point. My point starts from the assumption that clients concurrently
> submit a set of transactions *and* Sybase returns 1205 to one of them
> (say, badly coded applications). That means that Sybase had to roll back
> a transaction to prevent incorrect behaviour.
>
> Now, if the clients submit the same transactions to a PostgreSQL server,
> *and* if the server schedules those transactions *the same way* as
> Sybase did, then PostgreSQL *must* return an error to one of them.
>
> So, the point is: PostgreSQL cannot make all the transactions commit
> where Sybase has rolled back one, because Sybase did that to prevent an
> incorrect execution. PostgreSQL would have a bug if it did not do the
> same.

1.
See, you are doing it again. You have this idiotic academic notion that you can have a set of Transactions that will be executing according to a mystical mythical mysterious “Schedule” that does not exist in Reality. And worse, second, that you can apply it to Sybase and get one set of results, and then apply it to PukeGress and get the same result. Third, that if the results are different, woohoo there is an error. Take a breath. Have a second coffee. No such thing exists. All three levels are false. Because the first is false. Now go and do something useful with your life, create something in the world, instead of fantasising about something that is NOT in the world.

THERE IS NO SCHEDULE, ACCORDING TO WHICH YOU CAN ENTERTAIN SUCH EXPECTATIONS.

2. YOU ARE STILL NOT GETTING THE -- ATOM -- YOU CANNOT BREAK UP THE ATOM.

The whole abstract exercise is based on Atomic Transactions which you have broken up into Parts. I entertained that and explained some, in order to assist your incremental understanding. In the normal case, of Atoms NOT being split up, your examples simply do not exist. No more entertainment. Any further discussion must be real, any Transaction must be Atomic. Refer to my Transaction Sanity doc.

3. YOU ARE STILL NOT GETTING -- LOCK DURATION --

This is why people in the real world run benchmarks. Run 100 isql sessions that have:
a. no intentional deadlocks.
b. Then say 10 of 100 have intentional deadlocks against the 90 that don’t.
c. Then 20 of 100 intentional deadlocks.
__ You will slowly realise that because each Transaction executes in millisecs, they virtually never block.
d. Then run 200 isql sessions with (a)(b)(c). You might get a few deadlocks. And so on.

Now do the same on PlopPlopGross. Seconds and tenths instead of millisecs. Oooooo, we have many deadlocks. Not because of “different behaviour” but because the herd is pathetically slow.
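
The effect of Transaction duration can be sketched with a toy probability model (a back-of-envelope illustration with assumed numbers, not a benchmark):

```ruby
# Toy model, not a benchmark: if a transaction holds its locks for
# lock_ms out of a cycle_ms submit/think cycle, the chance that an
# independent session's transaction lands inside the locked window
# is roughly lock_ms / cycle_ms.
def collision_chance(lock_ms, cycle_ms)
  lock_ms.to_f / cycle_ms
end

fast_server = collision_chance(2, 1000)   # 2 ms Transactions, 1 tx/sec
slow_herd   = collision_chance(300, 1000) # 300 ms "transactions"

# fast_server is 0.002, slow_herd is 0.3: the same workload, but a
# 150x larger window in which a block or deadlock can even occur.
```

The assumed 2 ms vs 300 ms durations are illustrative only; the point is that contention scales with how long locks are held, not with the statements themselves.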

It does not prove that PlopPlopGross produces “errors” where Sybase does not (that is “true” at the denial-of-reality, abstract level, only), it proves that you can’t compare a horse with a blind elephant; or an Architected genuine server with a herd of stupid programs, you can’t run such a race and expect the same results. The expectation is stupid. It implies that a horse and a blind elephant are comparable.

Even though I know that PlopPlopGross is a mountain of pickled and curated pig poop, I will not be drawn into making a declaration such as you have concluded, because the basis of such conclusion is a stupid abstraction, not a real world determination.

----
> That means that Sybase had to roll back
> a transaction to prevent incorrect behaviour.

It is not “incorrect behaviour”, it is not an “error”, it is an event. A fleeting event. That may happen in one Sybase benchmark and not another Sybase benchmark.

The notion of a fixed “Schedule”, first, and an expectation of fixed results, second, is hysterically absurd. Ok, at snail speed, you can only try two or three competing isql sessions. When you get to operating at human speed, and perform some benchmarks, you might notice Sybase is multi-threaded, there is no “schedule”, which would imply single-threaded operation.

Of course, a brain-dead herd of programs, written by schizophrenics, who try desperately to materialise the collective fantasy of multiple offline stale versions, who have no concept of multi-threading, have to “serialise”, and for that they need a single-threaded “schedule”. Both the “serialisation” and the “schedule”, and the breaking up of Atoms that should not be broken up, are hysterically stupid concepts that exist only in the asylum of academia.

Do not impose that filth on us.

----
> *and* if the [PG] server schedules those transactions *the same way*

How on earth are you going to get that to happen ??? It is 100% pure insanity. As an imagination, for academic discussion only, fine. When you imagine that imagination to be real, you cross the line into insanity.

We know (science; knowledge) that if a bunch of amino acids were bombarded with a trillion amino acids for 42 million years (setting aside the obvious fact that that first bunch of amino acids would be dead, in order to prevent ruin of the story), there is a
__ 1 in 10^170
chance that it will form into a protein. Imagining that it COULD happen is like believing in the tooth fairy, stupid enough, but thinking that it ACTUALLY happened, that that amino acid DID evolve into a protein, and thence into a life form, and thence into a monkey, and thence into a rational spirit being, is hysterically insane. But it is the propaganda, on every media channel, 24 hours by 7 days, every week, including Christmas, that you are indoctrinated into. So you feel quite comfortable; it is a familiar “thought” process, properly understood for what it is, indoctrination, imagining things that do not exist, and then thinking that it actually exists, that it actually can happen.

----
> The vice versa, however, is not true (that was my other remark). There
> are cases in which PostgreSQL rolls back some transaction, but Sybase,
> under the same conditions, is able to commit all of them. This is one
> such situation:

I don’t know why you try, it is already proved, academically, by you, in the first two examples.

> Example 3
>
> Wall time | T1 | T2
> ----------|-------------------|------------------
> (0) | begin |
> (1) | | begin
> (2) | | update Tom's data
> (3) | update Tom's data |
> (4) | | commit
> (5) | commit |
>
> While Sybase makes T1 wait until T2 commits and then commits T1,
> PostgreSQL rolls back T1 as soon as T2 commits.

As an academic example, sure.

As a real world possibility, it is the usual insanity; nonsense, because you are splitting the Atom (the Whole) (everything between each BEGIN-COMMIT pair), and then fiddling and farting with the Parts, like the other famous shell game you guys play to “find” the Key because you have denied the extant Key. In the real world, Atoms remain Atoms, not fragments. Even on the PeePeeGres side, as long as the code is contiguous, as opposed to being artificially spread out on a piece of paper, [T2] will execute after [T1], there will not be a rollback.

In academia, they deny the Atom; they split the atom into fragments, and they live and breathe the fragments.

> I agree with you that this is bad.

So please, in order to establish your presence in the real world, that academia has some knowledge to offer, that is beneficial, do something about the mountain of false marketing re “MVCC”, the hilarious self-contradicting Mantra; the false statements in the manuals. Tell the world that StoneBroken is a disgusting drug addict, his entire “MVCC” notion, and everything built by his cultists, is false. As evidenced in all “MVCC” herds of programs, which are desperately trying to materialise insane fantasy.

Cheers
Derek

Nicola

Aug 29, 2021, 9:11:23 AM
On 2021-08-26, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
> As a real world possibility, it is the usual insanity; nonsense,
> because you are splitting the Atom (the Whole) (everything between
> each BEGIN-COMMIT pair), and then fiddling and farting with the Parts,
> like the other famous shell game you guys play to “find” the Key
> because you have denied the extant Key. In the real world, Atoms
> remain Atoms, not fragments. Even on the PeePeeGres side, as long as
> the code is contiguous, as opposed to being artificially spread out on
> a piece of paper, [T2] will execute after [T1], there will not be
> a rollback.

So, I have run some tests (btw, thanks for the tips from the Sybase
questions thread!). One of them is based on one of my previous examples,
which I summarize here for clarity (full code at the end of this
message):

T1: select Tom's balance; update Tom's balance.
T2: update Tom's balance.

In Sybase:

- when the statements of T1 and T2 are submitted to the server
individually, but within a begin.. commit block, then I can obtain
a deadlock. I think that this is to be expected. It's like the "snail
speed" manual test.

- Interestingly, when T1 and T2 are implemented as stored procedures, no
deadlocks occur, even when the number of concurrent threads is very
high (I have tried with up to 1000 threads), no matter whether I open
two connections overall or one connection per thread (although the
free version has a limit on the number of connections, so I could use
<100 threads in this case). So, it appears that no context switch ever
happens during the execution of T1 (resp., T2). Is that a correct
interpretation?

- I have added an artificial delay in T1 between select and update.
Still, no deadlocks, although I get several 5s timeouts. Is that
because a transaction is rolled back if it's waiting for too long?

In PostgreSQL:

- you cannot set the isolation level within a stored procedure; that
must be done before calling the stored procedure.

- Even with a couple of threads, a serialization failure can be obtained:

ERROR: could not serialize access due to concurrent update

PostgreSQL's behaviour is consistent with what was previously discussed,
so I don't have anything else to add, except that

> Even on the PeePeeGres side, as long as
> the code is contiguous, as opposed to being artificially spread out on
> a piece of paper, [T2] will execute after [T1], there will not be
> a rollback.

I find that not to be the case. If you know how I could run T1 and T2
concurrently without errors, please let me know.

Nicola

####### RUBY SCRIPT FOR ASE ##########################################
# Requirements:
# gem install sequel
# gem install tiny_tds
require 'sequel'
require 'tiny_tds'

opts = {
adapter: 'tinytds',
login_timeout: 5,
timeout: 100000,
tds_version: '50', # 42 or 50
host: 'localhost',
port: 5000,
database: 'scratch',
username: 'sa',
password: ''
}

DB = Sequel.connect opts

begin
  puts "Dropping data..."
  DB.run("drop table Account")
  DB.run("drop procedure t1")
  DB.run("drop procedure t2")
rescue
  # Ignore errors if the objects do not exist yet
end

puts "Populating db..."

DB.run("create table Account (
  name char(5) primary key,
  balance integer)
")

DB.run("insert into Account(name, balance) values ('Tom', 40)")

DB.run("create procedure t1 as
  set transaction isolation level serializable
  select balance
  from Account
  where name = 'Tom'

  -- waitfor delay '00:00:00.5'

  update Account
  set balance = balance - 10
  where name = 'Tom'")

DB.run("create procedure t2 as
  set transaction isolation level serializable
  update Account
  set balance = balance + 20
  where name = 'Tom'")

t1 = [] # List of threads running t1
t2 = [] # List of threads running t2
N = 100 # Number of concurrent threads for each procedure
numerr = 0

stime = Time.now
puts "Started At #{stime}"

db1 = Sequel.connect opts
db2 = Sequel.connect opts
# db1 = []
# db2 = []
N.times do |i|
  t1[i] = Thread.new {
    begin
      # db1[i] = Sequel.connect opts
      # db1[i].run("t1")
      db1.run("t1")
    rescue Exception => e
      numerr += 1
      puts "FAIL: #{e.message}"
    end
  }

  t2[i] = Thread.new {
    begin
      # db2[i] = Sequel.connect opts
      # db2[i].run("t2")
      db2.run("t2")
    rescue Exception => e
      numerr += 1
      puts "FAIL: #{e.message}"
    end
  }
end

# Wait for all the threads to complete
N.times do |i|
  t1[i].join
  t2[i].join
end

etime = Time.now
puts "End at #{etime}"
puts "Elapsed time: #{(1000 * (etime - stime)).round()}ms"
puts "Number of errors: #{numerr}"
puts "Account table:"
DB.fetch('select balance from Account') { |r| puts r }
######################################################################
# $ ruby test-ase.rb
# Dropping data...
# Populating db...
# Started At 2021-08-29 15:05:43 +0200
# End at 2021-08-29 15:05:43 +0200
# Elapsed time: 178ms
# Number of errors: 0
# Account table:
# {:balance=>1040}
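
One caveat on the scripts themselves, as an aside: `numerr += 1` from several threads is a read-modify-write, which MRI's interpreter lock usually masks but which can lose updates on JRuby or TruffleRuby. A minimal sketch of a Mutex-guarded counter (names are illustrative):

```ruby
# Sketch: a portable thread-safe counter (Mutex and Thread are Ruby core).
counter = 0
lock = Mutex.new

threads = 10.times.map do
  Thread.new do
    1000.times { lock.synchronize { counter += 1 } }
  end
end
threads.each(&:join)

puts counter # 10 threads x 1000 increments each = 10000
```

The same pattern applies to the `numerr` counters in both scripts.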


####### RUBY SCRIPT FOR POSTGRESQL ###################################
# Requirements:
# gem install sequel
# gem install pg
require 'sequel'

DB = Sequel.postgres('scratch', user: 'nicola', password: '', host: 'localhost')

puts "Dropping data..."
DB.run("drop table if exists Account")

puts "Populating db..."

DB.run("create table Account (
  name char(5) primary key,
  balance integer)
")

DB.run("insert into Account(name, balance) values ('Tom', 40)")

DB.run("create or replace procedure t1() language sql as $$
  select balance
  from Account
  where name = 'Tom'
  for update;

  update Account
  set balance = balance - 10
  where name = 'Tom';
$$")

DB.run("create or replace procedure t2() language sql as $$
  update Account
  set balance = balance + 20
  where name = 'Tom';
$$")

t1 = [] # List of threads running t1
t2 = [] # List of threads running t2
N = 1 # Number of concurrent threads for each procedure
numerr = 0

stime = Time.now
puts "Started At #{stime}"

db1 = Sequel.postgres('scratch', user: 'nicola', password: '', host: 'localhost')
db2 = Sequel.postgres('scratch', user: 'nicola', password: '', host: 'localhost')

N.times do |i|
  t1[i] = Thread.new {
    begin
      # db1.transaction(isolation: :serializable) do
      db1.run("set transaction isolation level serializable; call t1()")
      # end
    rescue Exception => e
      numerr += 1
      puts "FAIL: #{e.message}"
    end
  }

  t2[i] = Thread.new {
    begin
      # db2.transaction(isolation: :serializable) do
      db2.run("set transaction isolation level serializable; call t2()")
      # end
    rescue Exception => e
      numerr += 1
      puts "FAIL: #{e.message}"
    end
  }
end

# Wait for all the threads to complete
N.times do |i|
  t1[i].join
  t2[i].join
end

etime = Time.now
puts "End at #{etime}"
puts "Elapsed time: #{(1000 * (etime - stime)).round()}ms"
puts "Number of errors: #{numerr}"
puts "Account table:"
DB.fetch('select balance from Account') { |r| puts r }
######################################################################
# $ ruby test-postgresql.rb
# Dropping data...
# Populating db...
# Started At 2021-08-29 15:04:55 +0200
# FAIL: PG::TRSerializationFailure: ERROR: could not serialize access due to concurrent update
# CONTEXT: SQL function "t2" statement 1
# End at 2021-08-29 15:04:55 +0200
# Elapsed time: 13ms
# Number of errors: 1
# Account table:
# {:balance=>30}

Nicola

Aug 30, 2021, 3:39:27 AM
On 2021-08-29, Nicola <nic...@nohost.org> wrote:
> - Interestingly, when T1 and T2 are implemented as stored procedures,
> no deadlocks occur

Wait. I haven't started a transaction. After adding begin transaction..
commit to each stored procedure, I do get deadlocks. That makes more
sense to me.

Nicola

Derek Ignatius Asirvadem

Aug 30, 2021, 4:51:57 AM
Nicola

> On Sunday, 29 August 2021 at 23:11:23 UTC+10, Nicola wrote:
> On 2021-08-26, Derek Ignatius Asirvadem wrote:
> >
> > As a real world possibility, it is the usual insanity; nonsense,
> > because you are splitting the Atom (the Whole) (everything between
> > each BEGIN-COMMIT pair), and then fiddling and farting with the Parts,
> > like the other famous shell game you guys play to “find” the Key
> > because you have denied the extant Key. In the real world, Atoms
> > remain Atoms, not fragments. Even on the PeePeeGres side, as long as
> > the code is contiguous, as opposed to being artificially spread out on
> > a piece of paper, [T2] will execute after [T1], there will not be
> > a rollback.
>
> (btw, thanks for the tips from the Sybase
> questions thread!).

You are welcome.

> So, I have run some tests

Good work.

Given that the test is early stage (not mature), and the detail in your post, I think it is best to respond with notes, rather than responding to each item that you raise. Of course, you have an intent, to prove/disprove something, but it may be a bit too early to form conclusions ... what you have is just enough “proof” to validate your pre-existing belief, rather than a proper stand-alone proof. Mostly, I am qualifying the test, so that it can progress to maturity, and it gives the proof you seek.

1. This is not a benchmark, but a stress test, which is fine.

2. Ruby is an additional layer, and it uses ODBC, which is a slow connection. For a benchmark, that will not be good enough, it is better to use ISQL directly (and eliminate ODBC): everything in your code can be done just as easily with ISQL.

3. On the ASE side, we have to use ASE terminology. They are not threads, they are connections [to the server]. Threads are only in the server, completely internal. Real threads on the real CPU. Whatever this is set to:
__ sp_configure “max online engines”

I appreciate that Ruby has “threads”, I don’t know if that means it executes:
a. 1,000 concurrent instances of the code (meaning client-side “threads”, 1,000 instances of ISQL)
__ or
b. 1,000 iterations of the code on 1 instance of ISQL

4. I don’t know how you can run 1,000 client-side “threads” with the free version which has limited connections.

What is the limit ?

5 ----
> One of them is based of one of my previous examples,
> which I summarize here for clarity (full code at the end of this
> message):
> T1: select Tom's balance; update Tom's balance.
> T2: update Tom's balance.
>
> In Sybase:
> - when the statements of T1 and T2 are submitted to the server
> individually, but within a begin.. commit block, then I can obtain
> a deadlock. I think that this is to be expected.

To be clear,
that means manually, in isql or equivalent, for the purpose of learning and understanding, as advised, and NOT simulating the real world (where there is no appreciable execution time between the BEGIN-COMMIT pair [because there is not a tiny nano-robot to halt execution in the middle of a Transaction (which is not-real) until we finish counting our toes] ).

But excellent learning, nonetheless.

6 ----
The code doesn’t have BEGIN-COMMIT. I expect there are several versions of the code, and the version you attached simply does not have it, but the code you tried in this particular case does have it.

The thing to be careful about is
__ SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
holds locks until end-of-transaction ... but when there is no BEGIN-COMMIT, there is no Transaction to define that point, therefore locks are held only until end-of-statement.

You can check this via (Building Blocks manual):
__ SELECT @@TRANCOUNT

You also have to ensure that chained transactions (aka autocommit) is OFF (session level):
__ SET CHAINED OFF
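
Concretely, a sketch of the t1 procedure from your script with the BEGIN-COMMIT pair added, so that the SERIALIZABLE locks are held to end-of-transaction (verify syntax against your ASE version):

```sql
-- Hedged sketch: t1 as an Atomic Transaction.  Without BEGIN-COMMIT,
-- the SERIALIZABLE locks are released at end-of-statement.
CREATE PROCEDURE t1
AS
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    BEGIN TRANSACTION
        SELECT balance
            FROM Account
            WHERE name = 'Tom'
        UPDATE Account
            SET balance = balance - 10
            WHERE name = 'Tom'
    COMMIT TRANSACTION
```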

> It's like the "snail
> speed" manual test.

Exactly.

My declarations proved.

7 ----
> - Interestingly, when T1 and T2 are implemented as stored procedures, no
> deadlocks occur, even when the number of concurrent threads is very
> high (I have tried with up to 1000 threads),

Exactly.

My declarations re ASE and real-world operation are proved.

7.a Is that with/without BEGIN-COMMIT ?
7.b Is that with/without the WAITFOR 0.5 secs ?
___ Scratch that, answered later, it is without.
7.c Is ASE running in Process or Thread mode ?

To be clear
that means NOT manually, NOT for the purpose of learning and understanding, but simulating the real world (where there is no appreciable execution time between the BEGIN-COMMIT pair [because there is not a tiny nano-robot to halt execution until we finish counting our toes] ). That is without the WAITFOR.

Sure, testing with the WAITFOR is worthwhile, IFF your intent is to simulate the user interaction, in which case the WAITFOR must not be in the Transaction (between BEGIN & COMMIT), but between Transactions.
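
That placement, sketched (Account/Name/Balance are assumed names as before; the one-second delay is illustrative):

```sql
-- Simulated user think-time: OUTSIDE the Transaction, so no locks are held
BEGIN TRAN
    UPDATE Account SET Balance = Balance - 50 WHERE Name = 'Tom'
COMMIT                          -- locks released here

WAITFOR DELAY "00:00:01"        -- think-time BETWEEN Transactions: harmless

BEGIN TRAN                      -- next unit of work starts with no locks held
    UPDATE Account SET Balance = Balance + 50 WHERE Name = 'Tom'
COMMIT
```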

8. From a third connection, to watch while giving connections [T1][T2] start/stop commands:
__ sp_who
__ sp_lock

9 ----
> no matter whether I open
> two connections overall or one connection per thread (although the
> free version has a limit on the number of connections, so I could use
> <100 threads in this case).

Sorry, I cannot understand that.

10 ----
> So, it appears that no context switch ever
> happens during the execution of T1 (resp., T2). Is that a correct
> interpretation?

Context Switches happen at four specific levels, which do you mean ?

a. Machine/Linux - use vmstat

b. ASE is designed as a genuine server, that means it expects to be the only app on the machine. For each o/s, it is [at the default level] somewhat bound to the o/s, and further, there are sp_configure parms that bind it to the o/s even more. And it runs Threads, machine Threads. The first thing to understand is, when it is idle, it does not release the Thread (that would cause 2 x machine context switch, which is expensive). It waits for a [configurable] time, expecting an event such as completed I/O or Lock Release. When it does release a Thread, it is called a VOLUNTARY YIELD.

c. ASE/Task. For each task (connection; session, spid), it maintains a full internal Stack (executable code); etc. When it schedules a task out, and another task in, yes, it context switches. Look at this monitor report, second §, Kernel: Context Switch has 16 species (important info for performance & tuning) (15 Task species plus server Voluntary Yield):
__ https://www.softwaregems.com.au/Documents/Article/Sysmon%20Processor/Sysmon%20Processor%20Eg%20Date%20Delta.pdf

Same as machine context switches, task context switches are expensive, to be tuned down (we want tasks to use the entire timeslice). Without looking too hard, just looking at the shape (width) of the numerics, one can see that the network is slow; that the Log and Locks have been tuned.

d. The fourth level is not a genuine context switch, but what developers often call a “context switch”. They have a perspective from their code only, looking at the server, instead of reality, which is the reverse. But we have to entertain their silliness, instead of arguing each and every time. By this they mean anything that causes their code to context switch out. Of course, it is a subset of [c], it simply means they exclude server internals that they do not understand.

I suspect you are in [d]. Please note the above, which you must learn and understand over time.

In attempting to answer your question, I have to assert [c]. First, how many ASE Engine Threads do you have ? I expect way more than 2. Therefore both [T1][T2] will not be context switching at all, in the [d] sense. In reality, both [T1][T2] will be context switching all the time, because it is an ASE Thread that runs each task, if at all, and those species would be the set [c] minus set [d].

Bottom line
If I understand the intent of your question correctly, connection [T1] is context switching as much as [T2], and the context switches do not matter.

Relevance
In terms of relevance, you will be context switching only for server internal reasons, eg. Cache Miss (not in this case because the one page is in cache) or Lock Release or Log Write (which is mitigated by how well you have tuned the Log), and ASE simply being /idle/.
__ Given the tiny load, /idle/ would be the largest metric.
__ For your stress test, on a default-config server, Lock Waits would be the only metric that is of concern.
__ For the future (benchmarks), you must configure all ASE resources better, particularly the Log, and then the other Context Switch species will become relevant, as they are tuned and can be reduced.

11 ----
For any benchmark, for any stress test, wherein you need to know what the server did/did not do, as per expectations, and to ensure that nothing unexpected [that would skew the test] did/did not happen, you need a server monitor report.

a. There are great 3rd party graphical tools, but they are expensive.

b. The heavy-duty SAP/Sybase Monitor Server. Recommended for serious heavy-duty benchmarks only (eg. TPC). It runs as an OpenServer, mapping onto ASE Shared Memory. So it is extremely powerful, with zero overhead. Licence fee, not recommended. Identified for understanding only.

c. There is a modern ASE set of tables, but they have to be set up, and SQL/scripts have to be written. And they have overhead, which defeats the purpose of a benchmark. So the monitoring activity has to be sp_configured carefully, for the particular purpose. Too much work. Because they are problematic, SG has a full set of scripts for all this, which eliminates the problems. But I never use them.

d. The tried and tested, decades-old method, that we have had from Day One, beloved by all the old Sybase hands: sp_sysmon. Simple, character based, zero overhead, can be used for any kind of monitoring: here a benchmark; there some lock problem diagnosis; etc. For Production servers, I obtain one per hour (so that the counters do not overflow, every 30 mins if they do).

Before the trial, to clear the counters, run:
__ sp_sysmon begin_sample
After the trial run:
__ sp_sysmon end_sample
and collect the report. That will give the activity in monitor stats, for the duration. It looks like this:
__ https://www.softwaregems.com.au/Documents/Article/Sysmon%20Processor/SEQUOIA_120806_sysmon_09

//
The YNHH monitor reports that I have linked in the past, and here again below, are from their Production server, ie. 24 sp_sysmon [above] reports per day. Which are then processed by our **Sysmon Processor** scripts, to produce any kind of comparison or trend. It is massively useful for problem diagnosis, and it produces .csv for erection of pretty charts in Numbers/Excel.

This one is a Before/After report of a long-advised SAN reconfiguration, that finally happened, comparing two daily figures (the last column is Delta). Noteworthy again, the problem was **Lock Duration**, not number of locks, not types of locks, not the app code (which is a 3rd-party Health System, that cannot be changed or affected). Look at the § Lock Manager.
__ https://www.softwaregems.com.au/Documents/Article/Sysmon%20Processor/Sysmon%20Processor%20Eg%20Date%20Delta.pdf
//

12 ----
> - I have added an artificial delay in T1 between select and update.

As explained above, if you are simulating a delay due to user interaction, the WAITFOR must be after the COMMIT, before the next BEGIN TRAN.

Therefore, in this stress test, you are testing a different thing, a simulated LockWait.

It is not a true test (ie. good enough to publish), but it is fair enough to prove/disprove the specific thing you question.

> Still, no deadlocks,

No surprise. As I expected. My declarations re ASE and real-world operation are proved.

13 ----
> although I get several 5s timeouts. Is that
> because a transaction is rolled back if it's waiting for too long?

a. Rollback
No. A Rollback causes an Error 1205 to be RAISED in the client, and you will be well and truly notified. Ruby will throw an exception of some sort.

b. Deadlock
ASE retries automatically 5 times [configurable] (not seconds) before declaring “deadlock”, so the client-side code does not have to. These are not deadlocks or potential deadlocks, these are LockWaits.

c. LockWait
Yes, of course you have LockWaits, because you coded it. ASE [default] Lock Wait Period is infinite [configurable].

//
Infinite is a basic OLTP design article: eg. you can diagnose locking problems; find and kill the errant spid; correct bad code; etc. Proved as such for forty years. Best to leave it at infinite, do not change it. If you do, it will mess up your understanding of these issues, and complicate everything.
//
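
The relevant knobs, for reference (parameter names are the standard ASE ones; values shown are illustrative, not recommendations):

```sql
-- Server-wide: how long a task waits for a lock (default is infinite)
sp_configure "lock wait period"
go
-- Server-wide: automatic retries before declaring "deadlock" (default 5)
sp_configure "deadlock retries"
go
-- Session-level overrides (do not use in normal OLTP, per the above):
SET LOCK WAIT 5        -- wait at most 5 seconds, then raise an error
SET LOCK NOWAIT        -- fail immediately if the lock cannot be granted
go
```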

d. TimeOut
There is no timeout in ASE or the client. The timeout must be in Ruby, common in that kind of layer. From the evidence, the timeout period is 5 secs. //Which is not the same as your WAITFOR 0.5 secs.//

e. I don’t know if you will understand this, but I will try. Here again, you might notice the relevance of Lock Duration. Compare with the previous test (no artificial WAITFOR). Even such a simple stress test is a simple but classic proof that lock duration causes this kind of error, and that when lock duration is zero, there is no limit to the number of concurrent Transactions.

14 ----
> In PostgreSQL:
>
> - you cannot set the isolation level within a stored procedure;

a. That is anti-SQL.
b. That is anti-ACID, you cannot write Transaction “stored procs”.
c. That proves what I said, the “sql” in PG is not implemented as a language.

> that
> must be done before calling the stored procedure.

From another “language” context.
Same sort of problem, as in older versions, wherein “functions” were “transactional”.

> - Even with a couple of threads, a serialization failure can be obtained:
>
> ERROR: could not serialize access due to concurrent update

No surprise, I predicted that, and did so long ago, from the only study of its “MVCC” and “ACID” implementation.

The surprise is that it cacks itself at just 2 threads.

> PostgreSQL's behaviour is consistent with what was previously discussed,
> so I don't have anything else to add,

Same here. Good stress test, even at default and with Ruby, it exposed what you wanted to expose.
__ Sybase blows the doors off PlopPlopGres.
__ MickeyMouseGres cannot perform its much-marketed “MVCC”, the Mantra is false, even with MV-non-CC+2PL and a mild load, it cacks itself.

15 ----
> except that
>
> > Even on the PeePeeGres side, as long as
> > the code is contiguous, as opposed to being artificially spread out on
> > a piece of paper, [T2] will execute after [T1], there will not be
> > a rollback.
>
> I find that not to be the case. If you know how I could run T1 and T2
> concurrently without errors, please let me know.

a. I don’t sleep with sows, so I can’t help you there, sorry. That is why I don’t make low-level declarations about PG, but I give you links (eg. SO/PG/deadlock) and ask you to confirm.

b. Your conclusion is good. My prediction is incorrect. I expected PG to be better than that.

c. But more than your conclusion. What this has proved to me is, the PG “stored proc” speed is equivalent to ASE manual “snail” speed.

d. If you appreciate that the ASE stored proc test (i) proved what I stated about ASE, (ii) proved that LOCK DURATION is the issue, and (iii) produced no collisions even with 100 (1,000 ?) “threads”,
__ vs
the PG “stored proc” (assumed to be compiled and Query Optimised to some degree) operating in MV-non-CC+2PL, which (iv) performs incorrectly, and (v) runs like a snail on the same machine ... then the comparison speaks for itself.

16 ----
If you run this stress test again, run some more of your different academic tests, which will provide more learning.

17 ----
When you run any set of tests, for each test that you describe, please include the report block at the end:
> ######################################################################
> # $ ruby test-ase.rb
> # Dropping data...
> # Populating db...
> # Started At 2021-08-29 15:05:43 +0200
> # End at 2021-08-29 15:05:43 +0200
> # Elapsed time: 178ms
> # Number of errors: 0
> # Account table:
> # {:balance=>1040}

Please add whether it is:
__ BEGIN-COMMIT
__ Artificial WAITFOR and period
__ row count or number of iterations (I appreciate in this instance, Balance $1040 is proof)

18 ----
In formal benchmarks, and for any test that these may be relevant:
__ sp_sysmon report
__ vmstat report
__ iostats (if disk i/o is being benchmarked)
__ sp_configure (number of threads; etc)
__ machine resources
__ O/S settings that are relevant

Cheers
Derek

Derek Ignatius Asirvadem

Aug 30, 2021, 5:26:40 AM8/30/21
> On Monday, 30 August 2021 at 17:39:27 UTC+10, Nicola wrote:
> > On 2021-08-29, Nicola wrote:
> > - Interestingly, when T1 and T2 are implemented as stored procedures,
> > no deadlocks occur
> Wait. I haven't started a transaction.

Yes, that was not clear. Refer to my questions in my response.

> After adding begin transaction..
> commit to each stored procedure, I do get deadlocks. That makes more
> sense to me.

1. Per my comments, re the lock duration, yes, that makes more sense.
2. But from a Transaction perspective, no, it doesn't make sense to me.
__ stored proc [T1] is not holding anything that [T2] holds, so a deadlock between them is not possible.
3. Therefore (debugging this remotely, with the little detail provided), it would be that you do have several concurrent [T1] connections that are deadlocking each other, nothing to do with [T2].

Best to read my response and provide more detail (summary) for each test.

Cheers
Derek

Nicola

Aug 30, 2021, 3:42:46 PM8/30/21
Correct. Deadlocks still happen if I remove T2.

> Best to read my response and provide more detail (summary) for each test.

Sure, and I'll design a better experiment (these are not
benchmarks—yet): my first attempt has been pretty naive.

Nicola
