
The Theoretical Void in the Relational Database Space - Transactions


Derek Asirvadem

Feb 25, 2015, 5:39:23 AM

> On Tuesday, 24 February 2015 19:24:38 UTC+11, Erwin wrote:
> > Op maandag 23 februari 2015 18:27:53 UTC+1 schreef Nicola:
>
> > In practice, that means
> > that single inserts/updates/deletes are too low-level wrt the semantics
> > of the database.
>
> Unfortunately, it is the only level that certain people are capable of understanding.

Here, once again, in case the evidence in the threads of the last few weeks was not enough, is further evidence of two things, at the same time, in the one comment:

1. the theoretical void in the Relational Database space

2. the disgusting dishonesty of the rat-faced ones who allege that they are theoreticians, who allege that they understand this Relational database space

Now, as I have stated many times, I really do not mind if these creatures eat pig poop, or sacrifice their children to Moloch. But I do have a problem when they feed humans pig poop, or when they try and sacrifice our children to their pagan lusts.

That might take a bit of explanation. We need to go back to the original context, in order not to lose *meaning*. I will start with answering the original question, which no theoretician was able to answer. Because, once again, the space is void.

The two points above will be answered in the course of my response.

To start with, keep in mind that a classic trick that frauds use (the void is filled with them, such as our maggoty friend), when attacking rules (laws, science, human logic) is to plant little doubts, little exceptions, to bring it into question. This is the pharisaic method, used for millennia, to subvert the truth, to justify the exact opposite of the rule. They use it to legitimise their child sacrifices to Moloch, which is specifically prohibited, and prohibited many, many, times.

Second, keep in mind that the worst ("best" for the subverters) lies have a component of truth in them. If it were a complete lie, we would dismiss it entirely. But humans get tricked because we do not dismiss the package on the basis that we accept the component of truth, and thus we are well on our (their) way to accepting the whole package, which is a lie. Therefore, watch out for small truths, which are really dirty great big disgusting lies.

The way you can tell the difference is this. Let me call upon a principle that is famously used in American Law, and which the peerless Dr E F Codd famously had to use against these same philistines. Eg:
____ I swear to tell the truth, the whole truth, and nothing but the truth, so help me God.
The fraud will not tell you the whole truth. In its trickery, the maggot will avoid telling the whole story. It plants a seed of doubt, such as the one-liner above, out of context, and dragging with it a great big undeclared lie. The honest person will supply the whole picture, not fragments of it (which can be misused). The fraud is the master of fragments, and manipulates them like a shell game.

The truth will set you free.

So what is the truth ?

> On Tuesday, 24 February 2015 04:27:53 UTC+11, Nicola wrote:
> > In article <20150215200758.4...@speakeasy.net>,
> "James K. Lowden" <jklo...@speakeasy.net> wrote:
>
> > The rule is that a security is represented by a pair of rows, one in
> > Securities and one in the subtype table. There *must* be two rows,
> > and only two rows, and the subtype table must be the one indicated by
> > Securities.Type.
>
> I like to think of SQL's insert, update and delete as the (quite
> powerful) assembly-level instructions of a data language.

Data sub-language, as declared. It is not a full language. It is expected to be called from some high-level language on the client side.

Yes, it is low-level, as is required to operate on base relations. I think you understand, and correctly, that all such instructions are *Atomic*. It is stupid to fault a declared low-level data sub-language, which is verbose by its very definition, for being unable to parse and perform high-level instructions, or for being non-verbose.

That is not to say that high-level instructions cannot be performed (we don't write Assembler any more, we write in C, and that is compiled into Assembler) ...

> If you can
> design a model in which a single insert, a single update or a single
> delete never causes any integrity violation, well, go with them.

That is a theoretical possibility, but non-existent in practice. Same as the empty set.

> But in many (most?) cases,
> a well-designed database requires modifying more
> than one fact at a time to preserve consistency.

In every practical case.

> In practice, that means
> that single inserts/updates/deletes are too low-level wrt the semantics
> of the database. Hence, their execution must be forbidden (by revoking
> the corresponding privileges in SQL from the users accessing the
> database), and more complex "primitives" must be defined.

Absolutely correct.

I wish the theoreticians who allege knowledge of this space, and the pig poop-eating cancerous agents who write books, would understand that. You have a better understanding of the requirements of this space than they do. Nota bene: that is precisely why the devil-child is attempting to subvert your thinking. So hold onto that context, because it is correct, and set aside the pig poop lies for a while.

Two answers, or one answer, but on two levels.

> These should
> be given along with the logical schema and implemented as user-defined
> functions (assuming that such functions are run atomically), whose
> execution is granted to the above-mentioned users instead of the raw SQL
> instructions.

Brilliant. Perfectly correct.

You have the seed. Those "functions", those "primitives", have to be *Atomic*, executed in toto, xor not at all. Those "functions" are the high-level instructions required to modify the database (which are implemented in a low-level language, same as when we write code in C, but it executes only as low-level instructions).

You also have the seed of another important requirement, though not expressed. Database *Consistency*. The Consistency of the database *as a whole* is not to be taken as a Constraint here, or another Constraint there, each in isolation, it is to be taken as all the Constraints together. That is what predicates the technical requirement for the *Atomicity* of the "function".

So to recap your statement, to paraphrase it in technical terms, before proceeding.

For a well-designed database, the Constraints are not only the collection of base relation definitions, and not only the collection of Constraints on those base relations; they also include the collection of those "Functions", which are Atomic.

That means, those "functions" are a form of Constraints.

And if they are implemented within the database itself (ie. self-contained, as opposed to code outside the database), they are Database Constraints.

And if they are declared (as you correctly state they should be), they are Declarative Constraints.

And the corollary needs to be stated, which you have done, that users are also prevented from making direct low-level updates to the database, all updates must be performed via those "functions", and only those functions.

The result is a Consistent Database.
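To make that recap concrete, here is a minimal sketch of one such Atomic "function", in Transact-SQL style. All the object names here are illustrative, not from any actual system; the point is only the pattern: the two INSERTs succeed together, or not at all.

```sql
-- Hypothetical Atomic "function" (names illustrative).
CREATE PROCEDURE Order_Add_sp
    @CustomerCode  CHAR(12),
    @OrderNo       INT,
    @ProductCode   CHAR(12),
    @Qty           INT
AS
BEGIN TRANSACTION
    INSERT CustomerOrder ( CustomerCode, OrderNo )
        VALUES ( @CustomerCode, @OrderNo )
    IF @@error != 0 BEGIN
        ROLLBACK TRANSACTION    -- [A]: in toto xor not at all
        RETURN 1
    END
    INSERT CustomerOrderItem ( CustomerCode, OrderNo, ProductCode, Qty )
        VALUES ( @CustomerCode, @OrderNo, @ProductCode, @Qty )
    IF @@error != 0 BEGIN
        ROLLBACK TRANSACTION
        RETURN 1
    END
COMMIT TRANSACTION              -- [C]: one Consistent state to the next
RETURN 0
```

Every Constraint in the database is still checked on each INSERT; the procedure adds the Atomic envelope around them.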

Hallelujah!

Avoiding any of that, results in a collection of pig poop instead of a Consistent Database. Note that the theoreticians who allege knowledge of this space revel in that pig poop, and demand it as a requirement. Note that that is what rat-face is doing, seducing you back into the land of pig poop.

----

Nicola, welcome to the real universe of Relational Databases. Let me take you on a quick tour.

No offence intended, but it must be said, because it is a fact: here is a perfect example of my comment elsewhere, that theoreticians are forever re-inventing the wheel, due to total and abject ignorance of the Relational Database space in the real universe, ignorance of the fact that we already have that wheel, and have it in a very mature form.

-- Start of Tour --

1960
ISAM, and pre-HM "ISAM plus plus" systems. IBM implemented Transactions.

1960-66
The Hierarchical Model. HDBMS. IBM/IMS. CICS/TCP. Fully Transactional.

1970
The Relational Model. (Theory only, not implementation)

IBM/IMS. Queries via System R. Updates via CICS/TCP Transactions. IBM invented the ACID Transaction, and perfected it in the Hierarchical Database context. TCP stands for Transaction Control Protocol.

>>>> ============================================================================
Henceforth, Transactions must have ACID Properties

___ Transactions are [A]tomic. In toto xor Null.

___ They must start, and they must leave, the database in a state of [C]onsistency.
_____ That Consistency is of course implemented via *all* the Constraints
_____ In order to start, and leave, the db in a Consistent state, every command must be Consistent with the Constraints

___ Transactions are executed in [I]solation. Ie. the internal changes therein are Isolated from *other database users*, until it is complete.
_____(This does not mean, as some folks think, the transaction changes are performed in isolation, with no relation to the Constraints)

___ Transactions are [D]urable, they persist in the event that the system crashes; breaks; is unavailable.

[D] is provided by the platform alone; [A][C][I] are provided by program code, in concert with facilities provided by the platform.

This remains true to this day.
<<<< ============================================================================
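A hedged illustration of [I], in Transact-SQL style (table and column names are illustrative): changes made inside an open Transaction are not visible to other database users until the COMMIT.

```sql
-- Session 1
BEGIN TRANSACTION
    UPDATE Security
        SET   ClosingPrice = 101.50
        WHERE SecurityCode = 'XYZ'
    -- Session 2, at this point:
    --   SELECT ClosingPrice FROM Security WHERE SecurityCode = 'XYZ'
    -- either blocks, or returns the previous committed value, depending
    -- on the isolation level in force; it never sees the uncommitted 101.50.
COMMIT TRANSACTION
-- [D]: once the COMMIT returns, the change survives a system crash.
```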

1970-1980
All other DBMS, other HDBMS, NDBMS, Network, TOTAL, Cullinane, Tandem Non-Stop, Britton-Lee, all provided full ACID Transactions, and the client-side facilities required to support them. Even the non-DBMS systems, which means ISAM plus plus, provided the system-wide locking required, and Transactions. Those who didn't or couldn't went out of business. All proprietary, of course, in those days.

IBM implemented SQL, the successor to System R. Queries and updates via the one language.

Somewhere in that period, IBM gave us the courtesy, demanded by Codd's RM, which was public domain, and moved SQL into the public domain.

Transaction [I]solation was expressed in specific detail, in order to minimise collision with non-transactional queries, and thus to maximise concurrency. SQL provided full ACID Transactions.

Note that we put men on the Moon using on-board 8-bit processors, scaled-down NDBMS, and ACID Transactions.

1980
DEC implemented RDB, the first DBMS with genuine Relational capability. It was great, but incomplete Relationally, and OLTP was slow, because in those days the underlying ISAM was slow. The queries were fully Relational.

1984
Sybase, the first true Relational DBMS. Full OLTP with full ACID Transactions. No surprise, because its lineage was Britton-Lee. The slow ISAM had been tamed, partly by the increasing speed of disks, and partly by Sybase's brilliant implementation of ISAM in a database context. Unequalled to this day.

1987
Other SQL and Non-SQL providers followed. Some with full ACID Transactions, others with only partial support. Oracle to this day is neither SQL compliant, nor ACID compliant.

-- End of Tour --

======================
== ACID Transaction ==
================================================================================
What you call "functions" and "primitives", are in fact ACID Transactions.
We have had them since 1965, and we have had them in the data sub-language for Relational databases since 1984.
================================================================================

Now it must be mentioned, in order to be complete (I won't detail it here unless requested): there are rules and standards that govern ACID Transactions, and *additional* rules that govern high concurrency, which must be implemented in all of:
a. the database,
b. the Transaction code, and
c. the facilities provided by the platform for [a] and [b].
Without all of which, neither OLTP, nor ACID Transactions, nor high concurrency, can be achieved.

Ie. no string of SQL is magically transformed into high-concurrency; ACID; OLTP, simply because it uses a platform that provides high-concurrency; ACID; OLTP. Nothing magical about it: it has to be written.

=======================
== Open Architecture ==
================================================================================
The whole scenario you describe as "well-designed database", including the prohibitions, is what we call Open Architecture, we have had it since 1984, perfected by 1990, and unchanged since then. And it is Standard.

Codd wrote a lot about the subject, and we (vendors, plus implementers, who understood the Relational model, as an implementation), implemented it.

We implement the Standard as a matter of course. Any implementer who does NOT implement the standard is a novice, a boffin.

Have a look at this, it is a high-level overview:
http://www.softwaregems.com.au/Documents/Article/Application%20Architecture/Open%20Architecture.pdf
================================================================================
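The prohibitions that Open Architecture demands come down to ordinary SQL privileges. A minimal sketch (the role, table, and procedure names are illustrative assumptions):

```sql
-- Direct updates to the base tables are prevented ...
REVOKE INSERT, UPDATE, DELETE ON Security     FROM app_user_role
REVOKE INSERT, UPDATE, DELETE ON SecurityBond FROM app_user_role

-- ... queries remain open ...
GRANT  SELECT ON Security     TO app_user_role
GRANT  SELECT ON SecurityBond TO app_user_role

-- ... and all updates go through the declared Transactions, and only those.
GRANT  EXECUTE ON SecurityBond_Add_sp TO app_user_role
```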

Once again, on yet TWO more subjects (ACID Transactions and Open Architecture), the theory covering the Relational Database space is void. We have what we have, purely because the vendors implemented it. There are no papers or theory published on those subjects, only professional documents.

Stated otherwise, the theoreticians who allege to be serving this space, are completely ignorant of this space, a knowledge of which is pre-requisite to the supply of the alleged service. Without which they either invent wheels that we do not need, or re-invent wheels that we already have.

Congratulations on concluding, all on your own, that those two wheels are an absolute necessity for Relational Databases in every practical instance. Those two wheels have been invented and perfected already. Unbeknown to the theoreticians, due to their total ignorance of this space.

They really should be teaching the truth to earnest students such as you. They teach pig poop. Which is why you have to form these decisions for yourself.

I won't detail it here, but what they teach about what SQL can and cannot do, is a huge mountain of pig poop. Filthy lies. Hence my open challenge with confidence: show me something in an honest Relational database that you cannot do in SQL, and I will do it for you, and do it easily.

----

It must be said, on the Transaction issue, the theory imbeciles have recently started forays into pig poop alternatives, slyly calling such by names other than "transaction". But these are primitive, infantile, pre-1970 forms, that have not been thought through, let alone tested using more than one relation. Nothing is defined yet. We are not going to give up our Transactions for an ever-changing, squirming mass of pig poop.

There are many unscientific "theoretical papers" published on the infantile possibilities. Some of them have been implemented in the freeware/shareware/vaporware that constitutes Non-SQL programs (they are not platforms by any means). Which tells us a lot about the thousands of theoreticians who write such code, which is forever being replaced. An orgy of wheel-re-inventing, and the wheels thus far are single slices of logs.

----

Back to the JKL comment.

> > The rule is that a security is represented by a pair of rows, one in
> > Securities and one in the subtype table. There *must* be two rows,
> > and only two rows, and the subtype table must be the one indicated by
> > Securities.Type.

As detailed in my response to James, and as you have implied, that is easily done: just implement a Transaction, containing the two INSERTs, which, as a Transaction, is Atomic.

Note, there should be one Transaction per Subtype, and nothing that inserts the Basetype alone.
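A sketch of one such Transaction, for a hypothetical Bond subtype. The column names, the 'B' discriminator value, and the procedure name are illustrative assumptions, following the Securities/subtype rule quoted above:

```sql
-- One Transaction per Subtype: the Basetype row and the matching
-- Subtype row are inserted together, never the Basetype alone.
CREATE PROCEDURE SecurityBond_Add_sp
    @SecurityCode CHAR(12),
    @CouponRate   DECIMAL(7,4)
AS
BEGIN TRANSACTION
    INSERT Security ( SecurityCode, SecurityType )
        VALUES ( @SecurityCode, 'B' )        -- 'B' indicates Bond
    IF @@error != 0 BEGIN
        ROLLBACK TRANSACTION
        RETURN 1
    END
    INSERT SecurityBond ( SecurityCode, CouponRate )
        VALUES ( @SecurityCode, @CouponRate )
    IF @@error != 0 BEGIN
        ROLLBACK TRANSACTION
        RETURN 1
    END
COMMIT TRANSACTION
RETURN 0
```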

(James has been seduced by the fraud of the maggots, caught up in the anti-relational "mutual dependency"; denial of hierarchy; etc. But that is a separate issue.)

----

Back to big-ears' comment.

> > In practice, that means
> > that single inserts/updates/deletes are too low-level wrt the semantics
> > of the database.
>
> Unfortunately, it is the only level that certain people are capable of understanding.

Now I refer to:
> 2. the disgusting dishonesty of the rat-faced ones who allege that they are theoreticians, and who allege that they understand this Relational database space

And recall:
> Second, keep in mind that the worst ("best" for the subverters) lies have a component of truth in them. If it were a complete lie, we would dismiss it entirely. But humans get tricked because we accept the component of truth, and thus we are well on our (their) way to accept the whole package, which is a lie. Therefore, watch out for small truths, which are really great big disgusting lies.

So what is the lie ?

Part 1
Maggot has excised the context, and taken your single statement out of context. Then it treats it, in isolation, as a stand-alone fact. If the context:
> But in many (most?) cases,
> a well-designed database requires modifying more
> than one fact at a time to preserve consistency.
were not lost, the lie falls apart. Because you are right, all updates to the database must be Transactional. Full stop. End of story.

Part 2
The sow sucker then presents the big lie, undeclared, hidden inside the little truth, so that you bite, and thus swallow the whole package.

If the lie works, now he has you on the back foot, questioning the veracity of your own statement. Doubt. Installed by fraud.

Of course, to the extent that big snout is speaking of himself and the other maggot-ridden theoreticians who allege to know the Relational database space, the statement may well be true: many of them only know one way, and one very limited way, to do anything.

Part 3
To the extent that he means humans, of course the statement is false. Most humans can handle context. They answer questions in context; there is no need to lay out all possibilities and options before addressing the particular question being asked. Only genuine teachers do that, and only in a genuine teaching context. So the expectation is a dishonest one.

Part 4
The cursed ones exploit such "missing" elements, and then mount an argument on the basis that what is "missing" must be unknown. Much like their treatment of the Null problem. Total pharisaic argument. Intended to subvert the Law (all updates via Atomic Transaction only). A fraud based on a dishonest element.

So what is the truth ?

The truth is this.

1. First and foremost, the database must be secured, by permitting only ACID Transactions (Atomic being part thereof) to be used to update, and all direct updates must be prevented. As detailed above. No backing off this point.

___ Otherwise you do not have a database, you have the Record Filing Systems that are so beloved of the maggots, which is what squirmin irwin is tricking you into. So that you then have the same disease that they have, passed on by rats, the plague.

___ Note that the whole maggot-ridden crew, from C J Date down, their whole isolated single-relation concept of the universe, is based on "Dropping ACID".

2. Second, I have no problem at all, that in theory, and only in theory, a theoretician should be able to insert/update/delete more than one tuple. Six million or some large number approaching infinity. It is fictitious anyway, because the algebra is not running on a box somewhere, it exists only on paper, it is hypothetical. I have no wish at all to deny them their fictions or the pieces of paper. No one does. Theoreticians need their freedom. (The honest ones produce something, at least after a decade or two, but let's not get side-tracked.)

3. But that is not the practical world, the real universe. Which has limits, which is governed by the laws of physics. And from your comments, you are concerned with the practical world. Good.

That is demanded of theoreticians in every other space, but unfortunately rare among theoreticians in this space.

4. Now gratefully, due to the efforts of the vendors, scientists, engineers, we have far more capabilities in RDBMS than the maggots can imagine. In the normal maintenance of a database, which exists for a somewhat longer duration than the piece of paper that some algebra is scribbled on, we have regular needs to update some large number of rows, sometimes entire tables.

Note that the commercial SQL platforms are true Set Processing engines (the non-SQLs are neither platforms nor engines, but they do attempt an hilarious form of "set processing".) The point is, the commercial SQL platforms provide full Relational capability [2], whilst being limited to the finite resources of the system [3]. Ie. updating six billion rows (the relevant number these days, with no magic attached), is no sweat, as long as the system has the resources allocated and configured for such. And as long as you have the technical knowledge to perform the task.

5. An important requirement is this. Note that this is foreign to, completely beyond the capabilities of, the maggots, as evidenced by the hairy pig snout comment. Those of us who provide a service in the industry, and get paid for it, those of us who are undamaged humans, can handle not just one but TWO contexts at the same time. So we perform our large update as required for the Relational Set Processing context,
--AND-- maintain the Transactional context,
--AND-- do that within the limits of the configured system,
--AND-- do so without hindering the other users of the system (ie. not locking up the server for three days while our single transaction executes).

Yes!

Tis possible!

Only for humans.

I will give three quite different scenarios. I repeat, all of them take into account and utilise [1][2][3][4], and are three renditions of [5].

I - Bulk Load

The term is well established, so I am not going to use a different term. Note that it does not mean a single table; it may well involve a "transaction" of several tables.

This is typical for implementing version upgrades to the db and the app at the same time. All Big Iron software version changes are distributed to their customers on this basis.

Say that I have a number of structural changes to the database. I might schedule a very small downtime window, and apply all those changes at once (within the command file, they are serial, hierarchical), and schedule the application boys to cut over the new release of the application at the same time. In this scenario, it is far, far better to unload/change/reload the set of tables, than to change them within the database using SQL (which would operate on the entire table). It is much faster than via the DBMS because the various facilities therein, namely transaction logging, are not invoked.

New DDL including changed Constraints are part of the change.

The change is done while streaming the file, typically using awk, so it is very, very fast. Of course the scripts are written and tested beforehand, and of course all integrity is maintained whilst the changes are made. In any case, when the load is performed, the DBMS checks that the new rows inserted do not violate the constraints.

R{A,B,C,D} and S{P,Q,R} out
R{A,B,C,X}, S{P,Q} and T{D,R} in

II - Alter Table

This scenario is typical for small version changes, or bug fixes. But the scope is still a large data change. We can't "turn off" logging on a production database, but we can minimise it, and execute the changes such that users remain online (no downtime as per [I] ), and remain active as long as their transactions do not collide with the *specific* rows being changed.

It would be absolutely stupid to update six million rows in a single command, while transaction logging is in effect, because the resources are finite, and it will take ages. Thus such things are prevented. When maintenance tasks such as this scenario are required, logging is minimised, the task is executed, backups are taken, and logging is geared up again. All of which is online (except for Orable of course).

Standard SQL is used, where the target set affected is, say, six billion rows of a table. Six million is an irrelevant number these days; I can insert that in a real database on a PC (smallest possible server, a demo box) in less than three seconds.

I might do things such as DROP INDEX, in order to increase the speed of the operation, and then CREATE INDEX when it is complete. I might LOCK TABLE for the duration, to Isolate users from data that is NOT Consistent. Etc. But such things are options, not demanded one way or another.

R{A,B,C,D} and S{P,Q,R} changed to R{A,B,C,X}, S{P,Q} and T{D,R}, in place.
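A hedged sketch of that in-place change, following the R/S/T shorthand above. It assumes, purely for illustration, that R and S are related on A = P; exact ALTER TABLE and SELECT INTO behaviour (and the options they require) vary by platform:

```sql
-- Scenario II: R{A,B,C,D} and S{P,Q,R} become R{A,B,C,X}, S{P,Q}, T{D,R}.
DROP INDEX R.R_D_idx                 -- optional: speed up the mass change

SELECT r.D, s.R
    INTO  T                          -- creates the new table T{D,R}
    FROM  R r JOIN S s ON r.A = s.P  -- join predicate assumed from the model

ALTER TABLE R ADD  X INT NULL        -- the new column
ALTER TABLE R DROP COLUMN D          -- moved to T
ALTER TABLE S DROP COLUMN R          -- moved to T

CREATE INDEX R_X_idx ON R ( X )      -- re-create indices when complete
```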


III - Batch Transaction

Now in both the above scenarios, the transactional integrity is maintained by the DBA, who understands the database, as documented in the model, plus, plus. The commands used are Standard SQL (ie. implementation of the RM, the original Codd RA), plus any extensions that the vendor might supply to make life easier.

This option maintains Transaction Integrity at all times. Ie. The tables are not updated directly, Set Processing is not used. The use of the database is completely unhindered, I don't even have to post a notice to the users.

A typical case would be the DayEnd process, where I have two million Portfolios, each containing an average of two thousand Securities, each of some number (an attribute, not a row): that is called Exposure to each Security. At Day End, when the market closes and ClosingPrice has been established externally, first all two million Security.ClosingPrices are updated with the Market ClosingPrice. Then all the Portfolio.Security.Exposures need to be updated. It is a bit more complex than that; the description here is simplified. That complexity, plus the audit requirements, demands that a Transaction is used.

Recall, we already have all the transactions that are necessary for the system.

Now the brain-dead method to effect the task is to write a procedural program (ie. NOT Set Processing), using a cursor, to "walk" the Portfolios, then "walk" the Securities within each Portfolio. That is precisely what the imbeciles in the joke ORM system did, and that is the way the brainless maggot followed, in order to enjoy a bit of sea-sickness.

I don't allow cursors.

I have a Batch Queue, that is used for various purposes, such as an online client program that does not want to wait inline for a Transaction to return. Anyone can submit Transaction calls to the Batch Queue, same as executing them inline:
___ INSERT "EXEC TransactionName_sp, Parm_1 ... Parm_n"

The DayEnd Batch Job (yes, RJE remains with us) consists of a single INSERT-SELECT that identifies each Portfolio.Security as the relevant Parms, and inserts a row into the Batch Queue for each. This is the For-Each on steroids, without implementing it in the algebra, the way stupid pig-poop-eating slaves do.
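That single INSERT-SELECT might look like this. The queue layout, table, and procedure names are assumptions for illustration; the point is that one Set Processing statement generates all the Transaction calls:

```sql
-- One row per Portfolio.Security: each queued row is one Transaction
-- call, executed by the batch handlers as the queue drains.
-- (Assumes the queue stores the proc name and parameters as columns.)
INSERT BatchQueue ( ProcName, Parm_1, Parm_2 )
    SELECT 'UpdateExposure_sp', ps.PortfolioNo, ps.SecurityCode
    FROM   PortfolioSecurity ps
```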

(I have developed further methods that decrease the overall elapsed time from 120 minutes to 15 minutes to sub-minute, but I can't give the shop away. Point is, batch processing is reduced to a non-issue.)

When the last Transaction is completed, ie. when the queues are empty, voila, the DayEnd job is complete.

Four billion transactions completed in minutes. Transactionally. Without cursors.

Variation. Likewise, for normal OLTP, in order to support high concurrency, the maximum number of rows that an OLTP Transaction is allowed to update is 100. If you need to update six million rows, as demanded by some silly business transaction, that is fine, but you have to break that up into batches of max 100 rows each. The code required is an additional four (count 'em) lines.
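That break-up can be sketched in a few lines of Transact-SQL. SET ROWCOUNT is the classic Sybase mechanism; the table and flag column are illustrative assumptions:

```sql
-- Update millions of rows, 100 at a time, each batch its own Transaction,
-- so locks are held only briefly and OLTP users are not hindered.
SET ROWCOUNT 100
WHILE EXISTS ( SELECT 1 FROM Security WHERE PriceUpdated = 'N' )
BEGIN
    BEGIN TRANSACTION
        UPDATE Security
            SET   ClosingPrice = MarketPrice,
                  PriceUpdated = 'Y'
            WHERE PriceUpdated = 'N'
    COMMIT TRANSACTION
END
SET ROWCOUNT 0      -- restore normal behaviour
```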

----

So there you have it. Three quite different scenarios. I repeat, all of them take [1][2][3][4] into account, and are three renditions of [5]. Only for humans, the implementers in the 99% of the field. The 1% are ignorant of all that. The devil's child is not only ignorant of all that, it thinks it can't be done.

----

So the notion:

> Unfortunately, it [single inserts/updates/deletes] is the only level that certain people are capable of understanding.

besides being dishonest, fraudulent, etc, as explained above, is hilariously stupid. They can't even write the required modification. Because they have not extended the algebra since Codd. They only understand single-relation modifications.

----

> If the *only* modification permitted on R(A,B) and S(A,C) is through a
> function [ACID Transaction] update(a,b,c) that inserts (a,b) in R and (a,c) in S, you don't
> need any explicit foreign key clause in your SQL tables for the purpose
> of preserving referential integrity.

No.

There are several entirely different and independent issues here; do not get them mixed up.

1. You need explicit Declarations in order to do two things:
a. to establish the relationship (in the catalogue, etc) so that the database and the definitions are self-contained.
b. to have the RDBMS platform enforce Declarative Referential Integrity.

So that any attempt to bypass the Transactions and insert an incorrect row will fail. Now you can consider the database to be /partly/ secured.
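In the R(A,B)/S(A,C) shape quoted above, that Declaration is one clause. A sketch (the key and type choices are assumptions):

```sql
CREATE TABLE S (
    A  INT       NOT NULL,
    C  CHAR(30)  NOT NULL,
    CONSTRAINT S_PK   PRIMARY KEY ( A, C ),
    CONSTRAINT S_R_FK FOREIGN KEY ( A )
        REFERENCES R ( A )   -- declared in the catalogue, enforced by the server
)
```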

2. Likewise, the prohibition of direct updates may be "enough". But that too is separate, independent.

That secures a second /part/ of the database. [1] and [2] go hand-in-hand, but they are separate, independent, like a good marriage.

3. Likewise, the granting of functions to users may be "enough". But that too is separate, independent.

That secures a third /part/ of the database. User Access Control.

No Auditor will pass a database in which the relationships are not defined in, and enforced by, the server; in which direct updates are not prevented; in which transactional updates are not granted explicitly. That is inviolate. Refer again to my Open Architecture document.

----

Your argument is the one that the imbeciles who build OO/ORM monoliths, as well as the theoreticians who build Record Filing Systems, use, namely "database integrity is all contained in the application program", to justify NOT implementing [1][2][3]. Which is a total farce, and never works. Separate to being invalid, illegal, sub-standard, and non-relational.


----

4. Separately again, it is stupid to attempt to update the database with values that will fail. A fair amount of resources is locked up, holding other users up, while the transaction progresses to the point of failure and is then rolled back, before those resources can be released. All of which can be avoided. Therefore, by virtue of OLTP Standards, every program is required to:
- check every row that it intends to update/delete for existence, validity, etc, and
- check every row that it intends to insert for non-existence, validity, etc,
BEFORE attempting the Transaction.
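A hedged sketch of those pre-checks, assuming the hypothetical Bond subtype names used earlier, within a batch or wrapper where the parameters are declared:

```sql
-- Validate BEFORE attempting the Transaction, so resources are not
-- locked up progressing toward a known failure.
IF NOT EXISTS ( SELECT 1 FROM Security
                WHERE  SecurityCode = @SecurityCode )
BEGIN
    PRINT "Security does not exist"   -- report; do not start the Transaction
    RETURN
END
IF EXISTS ( SELECT 1 FROM SecurityBond
            WHERE  SecurityCode = @SecurityCode )
BEGIN
    PRINT "Bond row already exists"
    RETURN
END
EXEC SecurityBond_Upd_sp @SecurityCode, @CouponRate
```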

Unnecessary resource use, and particularly unnecessary locking (*implemented* low concurrency), is to be avoided.

That [4] should not be skipped on the basis that the Transactions are perfect and will prevent an illegal update, and are therefore "enough", such that other checks are not required. No, they are all required, for different reasons.

(There is more required for high concurrency and ACID, but I won't provide that unless asked.)

> Gee, thanks for all that.

You are welcome.

Don't forget that I congratulated you for exceeding the pig poop your teachers have fed you, all on your own, against the tsunami of lies of your colleagues, such as squirmin irwin. Keep it up.

Cheers
Derek

Ruben Safir

Feb 26, 2015, 7:05:46 AM
Derek Asirvadem <derek.a...@gmail.com> wrote:



This is bullshit. Did you miss your Risperdal today?


/dev/null

Derek Asirvadem

unread,
Feb 27, 2015, 7:58:05 PM2/27/15
to
> On Thursday, 26 February 2015 23:05:46 UTC+11, Ruben Safir wrote:
>
> This is bullshit. Did you miss your Risperdal today?

It's ok, darling. You need:

a. a knowledge of history (to understand the small part you have quoted)

b. more than a short attention span (to understand the part you have NOT quoted)

c. technical education (to understand the technical content)

Insults prove that my post exceeds your capacity on all three fronts.

Maybe try to "normalise". Just one relation. Or just one instance of one relation.

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Jun 21, 2021, 1:31:22 AM6/21/21
to
The following thread is relevant to this one. It provides a discussion in an ACID Transaction context, which MVCC does not have, and cannot do (MVCC is Anti-ACID; MVCC is Anti-Transaction).

> Batch Transaction

Note that the [III - Batch Transaction] described in the above thread is for simple OLTP systems that already have proper ACID Transactions. Whereas the [Batch Transaction] defined in the link below is for coding the equivalent of CASCADE, which is an infantile, fantasy concept that is not permitted on commercial SQL platforms. This code provides the proper method to move the entire tree from OldKey to NewKey.

__ https://groups.google.com/g/comp.databases.theory/c/LSoYp9wrv0M
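The "move the tree from OldKey to NewKey" idea can be sketched as a single explicit transaction: insert the new parent key, re-key the children, delete the old parent, so that no child is ever left referencing a non-existent parent. This is a hedged illustration only, not the code from the linked thread; the schema and names are hypothetical, and SQLite again stands in for a commercial SQL platform.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")
con.execute("CREATE TABLE department (dept_code TEXT PRIMARY KEY)")
con.execute("""
    CREATE TABLE employee (
        emp_id    INTEGER PRIMARY KEY,
        dept_code TEXT NOT NULL REFERENCES department(dept_code)
    )""")
con.execute("INSERT INTO department VALUES ('OLD')")
con.executemany("INSERT INTO employee VALUES (?, 'OLD')", [(1,), (2,), (3,)])
con.commit()

def rekey_department(con, old_key, new_key):
    # One transaction, in place of ON UPDATE CASCADE:
    # insert NewKey, move the children across, delete OldKey.
    with con:  # commits on success, rolls back as a unit on any failure
        con.execute("INSERT INTO department (dept_code) VALUES (?)", (new_key,))
        con.execute("UPDATE employee SET dept_code = ? WHERE dept_code = ?",
                    (new_key, old_key))
        con.execute("DELETE FROM department WHERE dept_code = ?", (old_key,))

rekey_department(con, "OLD", "NEW")
print(con.execute(
    "SELECT COUNT(*) FROM employee WHERE dept_code = 'NEW'").fetchone()[0])  # 3
```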

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Jun 27, 2021, 10:42:21 PM6/27/21
to
Confirming, yet again, that the theoreticians who allege to be serving this field are utterly clueless about OLTP/Transaction/ACID, which we have had for FORTY YEARS in commercial RDBMS SQL.

The following thread is relevant to this one. It provides a discussion in the full OLTP/Transaction/ACID context, which MVCC does not have, and cannot do (MV-non-CC is Anti-ACID; MV-non-CC is Anti-Transaction; MV-non-CC is Anti-OLTP ... the freaks use redefined definitions to fraudulently appear to provide fragments of ACID only, minus the full OLTP/Transaction/ACID context).

Daniel has started defining a template for OLTP/Transaction/ACID stored procs.

__ https://groups.google.com/g/comp.databases.theory/c/BNL-TwgMfPY

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Aug 21, 2021, 11:55:14 PM8/21/21
to
Nicola

In the /Stored Proc for OLTP Transactions/ thread, you posed questions re "Serialisation" and "Schedules", which I found very odd:
- why on earth should a developer or a DBA be concerned about such things (internal operation of the server) ?
- a Schedule implies single-threaded operation (we have been fully multi-threaded since 1975, not to mention Sybase is massively so at all levels)

Could you please enlighten me, in a few words, using only the standard meanings of terms as asserted (not any academic re-definitions).

I found this; it appears it is being taught at Berkeley as “computer science” about “databases”. Why, for what purpose ???

https://dsf.berkeley.edu/dbcourse/lecs/22cc.pdf

Cheers
Derek

Nicola

unread,
Aug 23, 2021, 8:17:35 AM8/23/21
to
On 2021-08-22, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
> Nicola
>
> In the /Stored Proc for OLTP Transactions/ thread, you posed questions
> re "Serialisation" and "Schedules", which I found very odd:

Apparently, we are using those terms with different meanings.

> - why on earth should a developer or a DBA be concerned about such
> things (internal operation of the server) ?

Since I am not sure what you mean exactly, I can only say that anyone
working with a DBMS at a professional level should be intimately
familiar with the implementation details of such DBMS.

> - a Schedule implies single-threaded operation (we have been fully
> multi-threaded since 1975, not to mention Sybase is massively so at
> all levels)

No idea what you mean, but it's certainly not what I mean by schedule.
But the use of my term is academic, hence irrelevant.

> I found this, it appears it is being taught at Berkeley, as “computer
> science” about “databases”. Why, for what purpose ???
>
> https://dsf.berkeley.edu/dbcourse/lecs/22cc.pdf

What does that add to the discussion? You have extensively argued about
the sad state of affairs of academia and of the rest of the world,
except for a handful of privileged ones. You don't need to reiterate
those arguments.

Nicola
