Nicola
Please consider, in the truth of learning, what an OLTP context is, in an Ordinary Locking server (falsely known as “1PL”; “2PL”; “C2PL” ... “SS2PL”; etc). Not in the MVCC mindset, and definitely not what you think ACID or an ACID Transaction is. But what it really is, in the science, in the platforms since 1960, SQL platforms since 1984. That excludes Stonebraker; Ingres; its zombie son PusGres; and Oracle.
Second, please consider the OLTP context, all ACID Transactions sans the Batch Transaction, only. Once that is clear (no questions), then add the Batch Transaction (high-end OLTP only) to the consideration.
> On Saturday, 19 June 2021 at 00:22:26 UTC+10, Nicola wrote:
All code must be standard, eg. built from a template. The Transaction stored proc (as distinct from the ACID Transaction in it, for which it is named) contains at least three code blocks, four in high-end OLTP. The EXECUTE Block is Atomic in the ACID sense. The stored proc is Atomic in the sense that it is a single code segment that the caller calls and is executed (partly or wholly).
(In the simple sense, for newbies, ACID Transaction = stored proc. But of course, *all* code is standardised, we have templates, the stored proc is not ONLY the ACID Transaction, it has to have a structure; a Form; error handling; etc.)
______________________________________________
0. THERE IS ONLY ONE VERSION OF ANY FACT
__ Schizophrenics are prevented from writing code of any kind.
__________________
1. VALIDATE Block
__ In CICS/TCP/COBOL terms, this is a PREPARE block.
You know the Five P’s, absolutely essential for succeeding at anything ? Eg. an Olympic swimmer; a deer hunter; a programmer of ACID Transactions.
__ Proper Preparation Prevents Poor Performance.
>>>>
We know we are locking rows, implicitly, by virtue of the SQL commands we issue, but it is the server that does the locking and the resolution.
• We know that Reads are locked for the duration of the Read operation, and Writes are locked until the COMMIT/ROLLBACK.
__• We can also request a Read to be locked until COMMIT/ROLLBACK, but that requires a Transaction to be opened, and a server that [is genuine OLTP and therefore] allows it.
<<<<
Code:
• Read all rows that must exist, in order for the intended rows (INSERT/UPDATE/DELETE) to succeed.
__• RETURN on any failure.
• Check NOT_EXISTS all rows that must not exist, in order for the intended rows (INSERT/UPDATE/DELETE) to succeed.
__• RETURN on any failure.
• Transaction NOT yet started
• All locking is transient and very short duration
• TransactionLog is not touched
• Fall into ...
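A minimal sketch of the VALIDATE block in Sybase/T-SQL terms, using the Hotel example; `Hotel`, `Booking`, `@HotelChain`, `@RoomNo` are illustrative names, not from any real schema:

```sql
-- VALIDATE block: no Transaction open, no TransactionLog activity,
-- only transient Read locks.  All names are illustrative.
IF NOT EXISTS (
    SELECT 1
      FROM Hotel
     WHERE HotelChain = @HotelChain
    )
    RETURN 1    -- a row that must exist, does not: do not start

IF EXISTS (
    SELECT 1
      FROM Booking
     WHERE HotelChain = @HotelChain
       AND RoomNo     = @RoomNo
    )
    RETURN 2    -- a row that must not exist, does: do not start

-- fall into the EXECUTE block
```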
__________________
2. EXECUTE Block
* BEGIN TRAN
• TransactionLog activity commenced for this xact
• Read all rows that must exist, in order for the intended rows (INSERT/UPDATE/DELETE) to succeed.
__• ROLLBACK & RETURN on any failure.
__• the only relevant locking starts here
__• if blocked by another xact, you are blocked here (NOT after the first verb below)
__• the longest wait is for the longest xact in the system that touches the page
So here we code each SELECT as:
__ IF EXISTS (
____ SELECT ...
______ FROM Hotel
______ WHERE ...
______ { FOR UPDATE | HOLDLOCK } -- platform-specific syntax, “beyond” SQL
____ )
__• <-[2.1]
• Check NOT_EXISTS all rows that must not exist, in order for the intended rows (INSERT/UPDATE/DELETE) to succeed.
__• ROLLBACK & RETURN on any failure.
• (INSERT/UPDATE/DELETE)
__• ROLLBACK & RETURN on any failure.
• COMMIT TRAN
• TransactionLog is released.
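Under the same illustrative names (`Hotel`, `Booking`, `@HotelChain`, `@RoomNo`), a minimal sketch of the EXECUTE block; the HOLDLOCK placement is Sybase syntax, a sketch rather than a definitive template:

```sql
-- EXECUTE block: the ACID Transaction proper.  The checks are
-- re-performed, this time holding Read locks to COMMIT/ROLLBACK.
BEGIN TRAN
    IF NOT EXISTS (
        SELECT 1
          FROM Hotel HOLDLOCK
         WHERE HotelChain = @HotelChain
        )
    BEGIN
        ROLLBACK TRAN
        RETURN 1
    END

    IF EXISTS (
        SELECT 1
          FROM Booking HOLDLOCK
         WHERE HotelChain = @HotelChain
           AND RoomNo     = @RoomNo
        )
    BEGIN
        ROLLBACK TRAN
        RETURN 2
    END

    INSERT Booking ( HotelChain, RoomNo )
        VALUES ( @HotelChain, @RoomNo )
    IF @@error != 0
    BEGIN
        ROLLBACK TRAN
        RETURN 3
    END
COMMIT TRAN
```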
___________________
3. CLEAN-UP Block
• All error handling
• ROLLBACK TRAN (if structured, the only ROLLBACK)
• <-[3.1]
• RETURN (if structured, the only RETURN)
______
Client
There is ordinary interplay between the Client and the Server. The client may now be a GUI or a webpage or middleware in an app server.
The VALIDATE block [1] is repeated in the client. This is not a duplicate: the client is complying with the Commandment; it would be stupid to attempt something that will fail. We do not need to engage server lock resources and the TransactionLog to find out that something will fail; we can check for it without engaging those. Besides, such an act gets the pages into the cache, if they are not already there, so that when the EXECUTE block starts, the required pages are in the cache, warm and buttered, waiting to be served up.
_________________
The ACID Transaction is bounded by BEGIN TRAN...COMMIT/ROLLBACK TRAN. It doesn’t even start unless it can complete, which is validated before the start.
> how do you ensure that
> nothing happens between the instant VALIDATE completes and the instant
> BEGIN TRAN is executed?
That duration between instants is:
a. not relevant [assuming you understand the above], and
b. microseconds, because it is between two consecutive steps inside a contiguous stored proc, which is running as object + plan code (compiled and a full resource plan established), and
c. harmless because no resources are held.
__________________
> If an atomic processing unit is
> delimited by BEGIN TRAN...COMMIT,
The /processing unit/ wrt ACID Transaction which indeed must be Atomic, is bounded by BEGIN-COMMIT TRAN, in one contiguous code block.
The /processing unit/ that contains it is not Atomic in the ACID sense, but it is Atomic by design, a contiguous code block; and optimised (compiled + resource plan). Eg. I do not allow that to be non-contiguous. The two need not be the same /processing unit/.
This is also why the BEGIN TRAN must never be in the client.
__________________
Batch Transaction
Add:
2.1 Add the data-level lock
___ BEGIN TRAN
___ UPDATE HotelChainIsDeleted[ @HotelChain ] = 1
___ COMMIT TRAN
3.1 Remove the data-level lock (in our example this is not required, because it is the last row deleted).
___ BEGIN TRAN
___ UPDATE HotelChainIsDeleted[ @HotelChain ] = 0
___ COMMIT TRAN
You may consider the BEGIN-COMMIT TRAN superfluous. No: they are demanded by the standard. Further, they are anchors for code investigation, and on the server vis-à-vis monitoring metrics.
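Spelled out as plain SQL, under the assumption that the shorthand above denotes an `IsDeleted` flag column on a `HotelChain` table (assumed names, for illustration only):

```sql
-- 2.1 Set the data-level lock (assumed schema: HotelChain.IsDeleted)
BEGIN TRAN
    UPDATE HotelChain
       SET IsDeleted  = 1
     WHERE HotelChain = @HotelChain
COMMIT TRAN

-- Ordinary Transactions then honour the flag in their VALIDATE block:
IF EXISTS (
    SELECT 1
      FROM HotelChain
     WHERE HotelChain = @HotelChain
       AND IsDeleted  = 1
    )
    RETURN 4    -- resource is soft-locked by the Batch Transaction
```

Step 3.1 is the same UPDATE with `SET IsDeleted = 0`.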
_________________
> And I have a couple more questions, if you don't mind:
I would be pleased to answer.
> 1. What you are describing is built on top of the standard locking
> mechanism provided by the DBMS.
If you mean the normal ACID Transaction in the full OLTP Context, no. It is aware of, and implicitly uses, the locking in the server (whatever that is).
If you mean the Batch Transaction, yes. I said so at the start. That is why I call it a data-level lock, it has to be done in data, we have no direct access to the locks, and we never do idiotic things such as:
__ LOCK TABLE
> If I understand correctly, it requires
> the transactions to be coded to perform explicit checks before accessing
> the data.
Yes.
All ACID Transactions.
It is the same “explicit” checks as the “explicit” operations it will attempt, no less, no more.
We have been doing that since 1960, on all OLTP systems.
Welcome to the industry in the real world, that you guys have never served.
Thank you for visiting.
Please come again.
> At the outset, that sounds like something that can become very
> complicated and error-prone.
How ? Why ?
Sure, we use templates, and the developer has to be educated in more than the schizophrenic MVCC mindset, but I can’t see how it is “complicated” (see detail above) or “error-prone”. In any case, the errors will be caught in development (DEV), long before TEST and UAT.
It is 2021, yes ? No one codes SQL by hand any more. We have been using IDEs since the early 1990’s. Now there are free IDEs. Of course PowerBuilder is still the best and most integrated, and the only one that has DataWindows, which is why it is in a separate class of its own. There are many imbeciles who think and work as if SQL were a “high-level language”. It isn’t. It is a low-level data sub-language, and it is cumbersome (it must needs be, unless you have single characters for your file and field names). You can’t go lower than SQL.
>>>>
Except in high-end servers, where you can write query plans or modify the QPs that the compiler produced. The need for this is essentially to work-around problems caused by bad file design. Since I have only Relational tables, I have never had to use this facility, but being a performance expert, I have corrected at least 100.
<<<<
>>>>
Now for SG customers, there is much more. I give the OLTP Standard; a set of Templates; and a check list which must be filled in by a peer, not the coder, for each xact_sp, before submission to the DBA (to place it in the TEST Rdb). The DBA is the policeman, and has to check the last check box, a second layer of checking plus resolution of names; etc, before signing off.
<<<<
> E.g., it seems that a transaction to insert
> a new ReadingMeasure should check many things (HotelChain, CountryCode,
> ..., Application) before proceeding.
Yes.
In order to comply with the Commandment.
Every single time.
BEFORE starting the xact.
Would you NOT want to check the depth of the pond before you dive into it ?
Would you NOT check if you should use the railway station or the airport before travelling to Timbuktu ?
Would you NOT check that <PlaceName> exists before selling someone a ticket to <PlaceName> ?
What is the big deal ?
Welcome to the exciting REAL world of the industry, it has nothing in common with the “industry” as defined by theoreticians.
>>>>
Now for SG customers, there is more. We generate the Transaction sprocs from the SQL Catalogue tables ... with a number of enhancements that I can’t divulge. (Just so that you know such is possible ... it is the next step for IDEs.) So the developers only have to check-and-change, rather than write sprocs.
<<<<
> Or, wait, such a transaction would be coded in a "simple" way (insert
> into Reading, insert into ReadingMeasure)
(I assume that is bracketed by BEGIN/COMMIT TRAN. And that is not the Batch Transaction discussed.)
Yes, that is properly called a Naïve Transaction. Of course it does not meet basic OLTP requirements, and it does not observe ACID issues, so it is sub-standard. But as a naïve or simple transaction, that is fine.
The ACID/OLTP Standard (Not the SG OLTP Standard, which does more) for that Transaction, is given above.
> and then two things may
> happen: it is either executed before the batch transaction has "reached"
> Reading, in which case it would succeed (and then it would be updated by
> the batch transaction);
Correct.
That will happen for either the Naïve Transaction or the OLTP Standard-compliant Transaction.
The Naïve Transaction may additionally fail due to Reading[ PK ] NOT_EXISTing, which causes (a) locks to be held, and (b) an anchor on the TransactionLog, which is then released; all of which is easy to avoid.
> or it is executed after the batch transaction
> has updated Reading, hence inserting the new reading would raise
> a foreign key violation (and it should be retried with the new key).
Correct.
That will happen for either the Naïve Transaction or the OLTP Standard-compliant Transaction.
> Or something else?
No.
__________
> 2. You are describing an OLTP context, but you have claimed several
> times that the same database can serve both OLTP and OLAP workloads.
Yes, absolutely. Since 1993.
(There are some environments in which the customer *chooses* to host a replicate database, to offload reporters from the OLTP, which they do after I deliver the project. Note, this is for load-spread reasons, not for contention-reduction reasons, whereas for all other suppliers, it is for contention-reduction reasons. I still guarantee *uneventful* OLTP & OLAP on the same database.)
> Do
> you code analytic queries along the same principles to avoid
> locking too many records?
> code analytic queries
We don’t code analytic queries.
See if you can understand this (not being condescending, but we both know you are crossing a chasm).
__ FOPC[ Predicate ]-->RM[ Predicate ]-->SQL[ Predicate ]
____ Predicate{ Unary | Binary }
____ Unary = full existential reality
____ Binary = relations between Facts (not mathematical relations)
__ SQL[ Predicate ]--> ReportToolSimple
__ *NO* “Ontology”, because the reality of existence of every object is defined in the catalogue
____ sp_help <Object> reports it, a ReportTool picks it up
__ *NO* “diskwipshun logicks”, because the entire database is described in terms of Predicates
____ which are clearly visible in the Logic Map (IDEF1X Data Model).
>>>>
I have a simpler form of the Logic Map in IDEF1R, which users prefer over the IDEF1X/Attribute level. That form is the proper replacement for the self-crippling “ERD”, which is programmed into young minds at all “universities” as “relational”. Except for one professor at Udine, who has switched to IDEF1X.
<<<<
Other than a great DM in IDEF1X and a Data Dictionary (easy to produce from ERwin), I give them nothing. I help the actual users to choose a Report Tool that is appropriate. Any simple report tool such as CrystalReports will suffice. I give them access via Excel/Numbers. BusinessObjects and the like (six-figure, heavy-duty report tools that implement a “universe”, which is definitely needed for non-databases such as RFS) are totally unnecessary. They hate me, same as the academics, and for the same reasons: they provide a method of overcoming the filth of RFS, which I never need.
Sure, there are some clever things I do, such as eliminate the need for “pivot” tables, but I don’t provide code (I can!), I just show them how to produce a pivot report in SQL and CrystalReports.
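For the record, one conventional way to produce a pivot report in a single SELECT, with no “pivot” feature at all (a sketch; `HotelRevenue` and its columns are illustrative names):

```sql
-- Classic pivot via conditional aggregation: one row per HotelChain,
-- one column per quarter.  All names are illustrative.
SELECT  HotelChain,
        SUM( CASE WHEN Quarter = 1 THEN Revenue ELSE 0 END ) AS Q1,
        SUM( CASE WHEN Quarter = 2 THEN Revenue ELSE 0 END ) AS Q2,
        SUM( CASE WHEN Quarter = 3 THEN Revenue ELSE 0 END ) AS Q3,
        SUM( CASE WHEN Quarter = 4 THEN Revenue ELSE 0 END ) AS Q4
  FROM  HotelRevenue
 GROUP  BY HotelChain
```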
Likewise, RFS knuckle-draggers DO need to *code* queries (think: every link in a chain has to be navigated, and navigated correctly, due to NOT having RM/Access Path Independence), and fiddle around for a week or three to get it to work. In contrast, I guarantee that any report requirement can be fulfilled with a single SELECT command.
I don’t even use Temporary tables (except for reformatting imported files). Never have.
Likewise, I give all forms of hierarchies in the data (refer the Bill of Materials doc). Developers who follow the academics, or who think Celko is a “guru”, pour their hierarchies in concrete, such that any insert has to rebuild the entire hierarchy (refer Adjacency List or Nested Set). (I shoot such people.) So I have to teach them what a hierarchy really is, and how it is beautifully implemented in the /RM/ and this Rdb. It needs recursion, so the orangutans who view the entire universe through the myopic OO lens; CRUD; poisissytence; etc, need education to introduce them to the Wonderful World of SQL Power. But not code.
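In a server without recursive SELECT syntax, such recursion is conventionally done with a recursive stored proc. A hedged sketch, with `Assembly`/`AssemblyCode`/`ComponentCode` as assumed Bill-of-Materials names (Sybase cursor syntax; the server's proc-nesting limit bounds the depth):

```sql
-- Hypothetical recursive proc: explode a Bill of Materials hierarchy,
-- one level per invocation.  All names are illustrative.
CREATE PROC Part_Explode_sp
    @AssemblyCode  CHAR(12),
    @Level         INT = 1
AS
    DECLARE @Component  CHAR(12),
            @NextLevel  INT
    SELECT  @NextLevel = @Level + 1

    DECLARE component_c CURSOR FOR
        SELECT ComponentCode
          FROM Assembly
         WHERE AssemblyCode = @AssemblyCode

    OPEN component_c
    FETCH component_c INTO @Component
    WHILE @@sqlstatus = 0       -- 0 = fetch succeeded
    BEGIN
        SELECT @Level AS Level, @Component AS Component
        EXEC Part_Explode_sp @Component, @NextLevel   -- recurse
        FETCH component_c INTO @Component
    END
    CLOSE component_c
    DEALLOCATE CURSOR component_c
```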
> along the same principles to avoid
> locking too many records?
The whole point of the Batch Transaction is that it limits the number of ISOLATED rows, which means limited locks (as distinct from rows or records, because we lock pages, not rows/records). The server has Row Level Locking; I have never had the need for it. I have converted (the typical Version 2 project) many RFS-type systems, which had to have Row Level Locking to even run at all, to ordinary Relational and Page locking. Thus the locks are far fewer than the rows updated.
So no, the Batch Transaction does NOT hold many locks. And due to the tight structure, whatever locks that are held, for 100 max rows, are held for the shortest duration.
> OLAP Reports
Why do you worry about locks in OLAP reports ?
Who cares.
Read locks are extremely short, and transient (held for the duration of the read operation, not the Transaction). We can’t control Read locks, even implicitly.
__ (sort-of exception ISOLATION LEVEL 0 “Dirty Reads”)
I certainly do not wish to take even a fragment of the Stonebraker Disneyesque mantra for slobbering idiots, but since you are familiar with it ...
__ Readers never block writers
__ Readers are blocked by uncommitted changes TO THE PAGE only
the longest duration of which is that of the longest OLTP ACID Transaction that touches that page, which is the shortest possible, by design.
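For reference, the sort-of exception mentioned above, in Sybase syntax: a report that reads at ISOLATION LEVEL 0 holds no Read locks at all and is never blocked by uncommitted changes (illustrative names):

```sql
-- Dirty-read report: ISOLATION LEVEL 0, no Read locks held.
-- Sybase AT ISOLATION clause; table/column names are illustrative.
SELECT  HotelChain,
        COUNT(*) AS Bookings
  FROM  Booking
 GROUP  BY HotelChain
    AT  ISOLATION READ UNCOMMITTED
```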
> > The corollary to that Commandment is of course:
> > __ Thous Shalt Not Begin Anything That Will Fail, That Thou Canst Not
> > Complete.
>
> That is worth the "optimistic" label!
Certainly, if one is attached to Truth, one has Reason for Faith; Hope; and Charity. Genuine optimism (as distinct from the new age poppycock of “positive thinking”) is founded on that Hope.
But still, that is not Optimistic Locking. And in this post, there is nothing about Optimistic Locking.
> > But we don’t start that “phase” until the previous “phase” of VALIDATE
> > completes successfully. And we have no time spans in-between
> > operations.
>
> See above. Not clear how you'd achieve that.
Done.
________
If you are interested in how Ordinary Locking works, here is an overview of Locks and the Lock Manager in an OLTP Server. Again, please, before reading, erase all the false notions (“1PL”; “2PL”; “S2PL”; “C2PL”; “SS2PL”; and the like) from your mind. Note gravely that you guys have no idea about locking or a lock manager; you have only the problem-fix layer on top of the MVCC monstrosity (because some things in that fantasy can only be resolved with a Lock Manager).
This one is core-unchanged since 1984, enhancements added over time to cater for new data structures, only. Private papers and patents in the early days, but now we can give it in documents such as this to the SAP/Sybase community. Just don’t tell the PissGris droolers about it. Pages 1 & 2 only:
____
https://www.softwaregems.com.au/Documents/Article/Sybase%20Lock%20Manager/Sybase%20Lock%20Manager.pdf
__ To understand this properly, you need to understand ACID properly, in both the server context and the Transaction context. Developers who code for such servers have to know ACID, thus ACID is not explained.
____ Again, MVCC systems do not, cannot, provide ACID, and developers on such systems have no clue about ACID; how simple and powerful it is; etc. So look carefully for terms such as ISOLATION level, and genuine compliance to that.
__ All locks are in-memory
____ All pages that are touched are in-memory (the data caches)
__ Latches are fast locks, for short-duration operations (eg. not disk)
__ Spinlocks are for server-internal structures only, wherein the Engine “spins” while waiting for a fast-lock (the true database server does not yield the CPU)
__ Cursor locks (an additional type) are not shown.
__ DRL and DOL are new [physical] table types to provide even faster file services (RecordID). Which again I have no use for, but have decades of experience fixing. Comments about such can be ignored.
__ Don’t skip the link to the Server Monitoring Report, at least § Lock Manager.
Come to think of it, since you are concerned about Read locks and their effects, etc (as opposed to Write locks, which are what is of concern in the above discussion), this happens to be a perfect, real-world example, which can be used for further discussion: it has Before::After stats and comparisons, at the metric level, after fixing a SAN configuration error which had resulted in slower I/Os all around. I will let you examine the report and ask questions, eg. /How come the Read Locks dropped by 91% (or, how come the slow I/O resulted in a 1,100% increase in Read Locks) ?/
I have described the columns in some post in this or the other thread.
Cheers
Derek