
Incomplete specialization, again


Nicola

Jun 9, 2021, 12:43:14 PM
Derek,
this is a follow-up to our previous discussion. Since that thread is
already pretty long, and I have a relatively narrow question, I'm
starting a new thread.

The question concerns your model in this document, p. 8:

https://www.softwaregems.com.au/Documents/Article/Database/Relational%20Model/IDEF1X/Nicola%20IDEF1X%202.pdf

Among the rest, the model declares:

Person is 0-to-1 Employee

The textual predicates do not state it explicitly, but the model also
declares the converse relationship:

Employee is 1 Person

Besides:

Employee is an exclusive basetype, one of {FullTime, Consultant}

Your predicates do not state it explicitly, but the verbal phrase for
exclusive subtyping is "is". So, the model also declares:

EmployeeFullTime is 1 Employee
EmployeeConsultant is 1 Employee

"Each Basetype and Subtype pair should be perceived as a single logical
unit" (same doc., p. 5). So, it's fair to say, e.g., that an
EmployeeFullTime has a JobTitle and an EmployeeDate, and I can sensibly
query for the job titles of all full time employees. JobTitle and
EmployeeDate are descriptors of EmployeeFullTime.

But Person and Employee should not be perceived as a single logical
unit: as per your discussion in §3, that would be incoherent. So, does
an Employee have a NameLast or a BirthDate? If so, what in the model
entitles me to state that?

For comparison, suppose that there is also a Hunting License entity, and
a relationship between Person and Hunting License:

Person has 0-to-1 Hunting License

Certainly, I would not say that a Hunting License has a NameLast or
a BirthDate or that NameLast and BirthDate are descriptors of Hunting
License.

Nicola

Derek Ignatius Asirvadem

Jun 10, 2021, 5:25:02 AM
> On Thursday, 10 June 2021 at 02:43:14 UTC+10, Nicola wrote:

> Among the rest, the model declares:
>
> Person is 0-to-1 Employee
>
> The textual predicates do not state it explicitly, but the model also
> declares the converse relationship:
>
> Employee is 1 Person

>>>> To be Clear
Yes, of course.

First we have First Order Predicate Calculus.
We know that we can define anything in the universe, in FOPC terms.

The /Relational Model/ is founded on FOPC.
Therefore we know that we can define anything in the universe, in the RM in Relational terms, which is FOPC terms.

SQL is the current rendition of the Data Sublanguage defined in the /RM/, it fulfills the stated goals. (Non-compliant program suites that fraudulently use the label “sql” do not.)

IDEF1X merely renders FOPC[ RM[ Predicates ] ] diagrammatically.
Absolutely every Predicate [that is used] is rendered diagrammatically.
__Ok, you need my IDEF1X Extension for Additional Constraints; etc
Absolutely every Predicate [that is used] can be determined directly from the diagram.

Just as an FK declaration is declared only in the child, but very obviously and laughably “two-way” or more hilariously “bidirectional”, it can be read from the parent (reference) to the child (referent). If I tell you that Sally is Fred’s daughter, you know from that single Fact that Fred is Sally’s father.

Thus any Binary Predicate can be read “two ways”, and only the *Verb*, not the full text, is required on the relation line, because each of the binaries is diagrammatically connected. Usually only the parent->child *Verb* is shown, and the converse is simple enough to determine [by reversal of the elements] by the reader.
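As a minimal DDL sketch of that (table names from the model; key column, datatypes and constraint names assumed for illustration, Employee descriptors omitted):

CREATE TABLE Person (
    PersonNo  int          NOT NULL,
    NameLast  varchar(30)  NOT NULL,
    CONSTRAINT Person_PK
        PRIMARY KEY ( PersonNo )
    )

CREATE TABLE Employee (
    PersonNo  int  NOT NULL,   -- Employee descriptors omitted
    CONSTRAINT Employee_PK
        PRIMARY KEY ( PersonNo ),
    CONSTRAINT Employee_Is_Person_FK
        FOREIGN KEY ( PersonNo )
        REFERENCES Person ( PersonNo )
    )

The FK is declared only in Employee, yet it carries both readings: /Person is 0-to-1 Employee/ and /Employee is 1 Person/.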

My Doc
I have not given all Predicates in text form, only the ones that I thought may be relevant, that may be argued.
*All the Predicates are in the diagram*.
<<<<

> Besides:
>
> Employee is an exclusive basetype, one of {FullTime, Consultant}
>
> Your predicates do not state is explicitly, but the verbal phrase for
> exclusive subtyping is "is". So, the model also declares:
>
> EmployeeFullTime is 1 Employee
> EmployeeConsultant is 1 Employee
>
> "Each Basetype and Subtype pair should be perceived as a single logical
> unit" (same doc., p. 5).

Yes, of course. Keep in mind, that is for the purpose of understanding what, in heavens name, is a Subtype.

>>>>
Now before we get into the next set of Qs. Stop and take a breath. Consider this. As an academic, you are heavily (and I mean heavily) programmed into thinking in terms of Fragments (as opposed to Atoms); single level (as opposed to multiple); simple idiotic truths (as opposed to a chain of truth); dis-integrated elements (as opposed to integration); ugliness (as opposed to beauty). That is the normal, you people have had fifty years of it, every single textbook.

By the Grace of God, you, a single academic, have left the morass of pig poop, and a whole new world of the real /Relational Model/, and how it is really implemented, has opened up. But please understand, you are approaching it in small steps (transmission in increments), you do not have the privilege of a paid lecture or course from me (full consolidated transmission of the subject matter).

The issue you are experiencing here is, you have approached the beauty, which is the integration, of truth, here FOPC[ RM[ Predicate ] ], but due to your programmed manner of thinking, you are limited to seeing only the Fragments, and not the whole. So take a breath, rub your eyes, renounce the schizophrenic filth taught by (Date; Darwen; Fagin; Abitebul; Hull; Viannu; Navarthe; etc), and all “professors” who use their textbooks.

Language is everything.

Please, erase all your pre-conceived notions of what FOPC and the /RM/ is, and just read and think, as a child does when reading a new book.

You are used to having filth like Lady Gaga shoved down your throat as “beauty”. Vomit all that out, and get ready to meet Sophia Loren; Grace Kelly; Gina Lollobrigida.
<<<

> So, it's fair to say, e.g., that an
> EmployeeFullTime has a JobTitle and an EmployeeDate, and I can sensibly
> query for the job titles of all full time employees. JobTitle and
> EmployeeDate are descriptors of EmployeeFullTime.

Yes.
Not “fair to say”, but the Predicates declare it (given that you understood the above).

> But Person and Employee should not be perceived as a single logical
> unit: as per your discussion in §3, that would be incoherent.

Why ? How is it incoherent ?

(The Basetype::Subtype relation must not be conflated with the Fact::OptionalAttribute, each has to be understood separately. And yet, of course, there are common elements. Which is why I labour to differentiate them, even in my simplest docs.)

Certainly not a *single* logical *unit*, as per when trying to understand Subtypes.
But in the ordinary logical sense of two sets connected by a Foreign Key.

A - Entity Level
Person is independent. A Person is optionally an Employee.

B - Attribute Level
Person is an Atom. The row contains its mandatory attributes. The optional row contains its optional attributes. Which in this case, renders it a new Fact: Employee. Person::Employee is a Molecule.

> So, does
> an Employee have a NameLast or a BirthDate?

Absolutely.
Where else, in heavens name, would an Employee get his NameLast or a BirthDate from ?

> If so, what in the model
> does entitle me to state that?

The Predicates.

Again, a Binary Predicate can be read “two ways”. I will read it from the child to the parent.

1.
First, for clarity, this:
____Employee is identified by, and dependent on, 1 Person
combines two Binary Predicates that have the same parent and child, plus its Existential Predicate. The explicit Predicates are:
__1_Employee is identified by 1 Person
__E_Employee is dependent (whereas Person is independent)
__2_Employee is dependent on 1 Person

And yes, the Predicates can all be determined directly from the model.

2.
This is an error:
__Person is described by ( SexCode, Title )
It should be:
__Person is described by ( NameLast, NameFirst, Initial, BirthDate, BirthCountryCode, BirthPlace, Sex, Title )
But the error is harmless here.

3.
Now please stop and contemplate Codd’s 3NF, there is only Full Functional Dependency (the fragmentation of it by the pig poop eaters is condemned).
__Every domain (attribute) is functionally dependent on the Key; the whole Key; and nothing but the Key.

Key=Fact
Fact=Key

We can *count* on the fact that, from the child, any attribute in a parent hierarchy is 1::1.

4.
The answer. Since:
__Employee is 1 Person
____Person is described by ( NameLast, NameFirst, Initial, BirthDate, BirthCountryCode, BirthPlace, Sex, Title )

Therefore:
__Employee is described by Person( NameLast, NameFirst, Initial, BirthDate, BirthCountryCode, BirthPlace, Sex, Title )

__EmployeeFullTime is 1 Employee
____Employee is 1 Person
Therefore:
__EmployeeFullTime is described by Person( NameLast, NameFirst, Initial, BirthDate, BirthCountryCode, BirthPlace, Sex, Title )

Do *NOT* perceive Employee or EmployeeFullTime as a Fragment, isolated from its context.
Do perceive any entity in its full context only:
__Employee as Person-Employee;
__EmployeeFullTime as Person-Employee-EmployeeFullTime

Again, not as a single logical unit, but as ordinary logic, a chain of existential dependency.
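As a query sketch of that chain (the Key column name PersonNo is assumed here, per the Hunter predicates further below; the model may name it differently):

SELECT  P.NameLast,
        P.BirthDate,
        FT.JobTitle
    FROM Person            P
    JOIN Employee          E  ON E.PersonNo  = P.PersonNo
    JOIN EmployeeFullTime  FT ON FT.PersonNo = E.PersonNo

Every NameLast and BirthDate returned is 1::1 with its EmployeeFullTime row.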

5.
Consider the example Order Advanced DM
__ https://www.softwaregems.com.au/Documents/Documentary%20Examples/Order%20DM%20Advanced.pdf
the Country..Street hierarchy.

Why is Country.Fullname *NOT* 1::1 for each Suburb row ?

When does a Town row *EVER* have [exist in] an other-than-one Country; or State; etc ?

> For comparison, suppose that there is also a Hunting License entity, and
> a relationship between Person and Hunting License:
>
> Person has 0-to-1 Hunting License

Ok, let’s say:
- the PK is the same as Person (must needs be)
- the attributes are ( CountryCode, StateCode, HuntingLicenceNo, ExpiryDate, IsRevoked )

> Certainly, I would not say that a Hunting License has a NameLast or
> a BirthDate or that NameLast and BirthDate are descriptors of Hunting
> License.

Why not ?

(I am assuming you are not being silly about the wording, but chasing meaning; declarative meaning.)

A Hunting licence is not a piece of paper gambolling around in the Tyrolian Alps (Fragment).
__If it were a Fragment, sure, you should not say that
__But a HuntingLicence is not a Fragment

A Hunting licence is a /Power to Act/, that is given to a Person (not animal, not thing), it does not exist independently, it exists only in the context of Person.

Predicates, as modelled:
__HuntingLicence is dependent
__HuntingLicence is dependent on 1 Person
__HuntingLicence is identified by 1 Person
__HuntingLicence is 1 Person

Strictly, the entity is not HuntingLicence but Person-HuntingLicence.

Which is the real world, HuntingLicence is not a piece of paper (the Physical), but an option of Person (the Logical, the Real). (Or /a/ progression of Person in the fullness of definition of Person).

Why would a Person who is a Hunter (or Fisher or other) *NOT* have the attributes that a generic Person has ???

Why would a County that has a CountySportsField *NOT* have the State-attributes that all Counties in the State have ?

Since:
__HuntingLicence is 1 Person
____Person is described by ( NameLast, NameFirst, Initial, BirthDate, BirthCountryCode, BirthPlace, Sex, Title )

Therefore:
__HuntingLicence is described by Person( NameLast, NameFirst, Initial, BirthDate, BirthCountryCode, BirthPlace, Sex, Title )

----

Behold, the beauty of the chain of Predicates.

Impossible under the anti-Relational freak show promoted by the pig poop eaters, characterised by RecordIDs presented as “keys”, marketed as “relational”.

----

Trivially, a SELECT command is merely a test, of a chain of Predicates. One can add attributes obtained anywhere in the chain, and then it becomes a projection.

With *Relational Keys*, one obtains Access Path Independence. That means the chain does *NOT* have to be navigated link-by-link, as demanded with RecordID files.

> We can *count* on the fact that, from the child, any attribute in a parent hierarchy is 1::1.

T-SQL

SELECT C.Name,  -- Country.Name
       T.StateCode,
       T.Town
    FROM Town T
    JOIN Country C ON
        T.CountryCode = C.CountryCode
    WHERE T.Town LIKE 'Napoli%'
       OR T.Town LIKE 'Naples%'
       OR T.Town LIKE 'Neapolit%'

Cheers
Derek

Derek Ignatius Asirvadem

Jun 10, 2021, 5:51:27 AM
Nicola

> On Thursday, 10 June 2021 at 19:25:02 UTC+10, Derek Ignatius Asirvadem wrote:
> > On Thursday, 10 June 2021 at 02:43:14 UTC+10, Nicola wrote:

1. Lost formatting, retry in a variable-width font, arrgh:

SELECT C.Name, -- Country.Name
________StateCode,
________Town
____FROM Town T
____JOIN Country C ON
________T.CountryCode = C.CountryCode
____WHERE Town LIKE 'Napoli%'
________OR Town LIKE 'Naples%'
________OR Town LIKE 'Neapolit%'

2. I don't see how this is related to Incomplete Specialisation. It is a straight explanation of FOPC Predicates, in the context of the /RM/ and IDEF1X.

3. In general, I would stop using OO/OOP/ORM terms when dealing with data and databases, and specifically use Relational terms when the database is Relational. Use of their terms implies their meaning, which is crippled, because OO is crippled (ambiguous; not a Standard; 420 notations; nothing clearly defined; etc). I deal with OO/OOP/ORM people all the time, and they appreciate that I do NOT use their terms, that I use Relational terms, which are rock solid. Just think about how stupid /Aggregation/ is, vs say /Composition/. Just like TTM, they have been arguing about what it is for THIRTY YEARS, and it is still not resolved. The freaks live in the Excluded Middle, that the sane reject.

Now that you have declared that the previous thread is resolved, I do not understand why /Incomplete/ is still hanging around. Recall, there are two meanings: the original IDEF1X; and the common one promoted by ERwin. The former being illegal.

4. Going over my previous post, it is plain to me that what is missing is the Predicate Lexicon. The fixed set of syntax for all the Predicates required for the implementation of a Relational database (subset of the entire FOPC possibilities). All the nuances are eliminated.

It is proprietary, published for paid customers only. Sorry.

Cheers
Derek

Derek Ignatius Asirvadem

Jun 10, 2021, 7:12:28 AM
> On Thursday, 10 June 2021 at 19:25:02 UTC+10, Derek Ignatius Asirvadem wrote:

Clarification.

> Thus any Binary Predicate can be read “two ways”, and only the *Verb*, not the full text, is required on the relation line, because each of the binaries is diagrammatically connected. Usually only the parent->child *Verb* is shown, and the converse is simple enough to determine [by reversal of the elements] by the reader.

Thus any Binary Predicate can be read “two ways”, and only the *Verb*, not the full text, is required on the relation line, because each of the Variables in the Binary Predicate is diagrammatically connected, and thus obvious.

Usually only the parent->child *Verb* is shown, and the converse is simple enough to determine [by reversal of the elements] by the reader.

Per IDEF1X syntax, the child-> parent *Verbiage* can be slash-added on a second text line on the relation line.

On Thursday, 10 June 2021 at 19:51:27 UTC+10, Derek Ignatius Asirvadem wrote:

> 3. In general, I would stop using OO/OOP/ORM terms when dealing with data and databases, and specifically use Relational terms when the database is Relational. Use of their terms implies their meaning, which is crippled, because OO is crippled (ambiguous; not a Standard; 420 notations; nothing clearly defined; etc). I deal with OO/OOP/ORM people all the time, and they appreciate that I do NOT use their terms, that I use Relational terms, which are rock solid. Just think about how stupid /Aggregation/ is, vs say /Composition/. Just like TTM, they have been arguing about what it is for THIRTY YEARS, and it is still not resolved. The freaks live in the Excluded Middle, that the sane reject.

I trust you understand and appreciate that the science of data and the science of process are distinct, not one science, not blurred. Thus it is a failure to take either one as primary (viewing the other as subordinate), or single (viewing the other as redundant because ours is oh so complete). The OO/OOP/ORM imbeciles do precisely that, more standing on the quicksand of Redundant, than the clever ones who have identified that the “mapping” is an undefined cloud and thus less, standing on the swamp of Primary.

The channels that transport blood are distinct from the channels that transport commands. Mixing the two, or failure to recognise that they are two, is self-cancelling. No progress can be had, despite endless arguments. Only drooling idiots, that is to say, the entire set of current academics in this field, minus you, would miss the fact that the Great Architect must have had a Good Reason for doing that. One need not mix the two, and then be surprised at the resulting paralysis or limb death.

Here at c.d.t, we are studying data science, with process science as secondary, a patron or sponsor. Not studying process science directly.

The single biggest mistake that the OO/OOP/ORM crowd make is this: they take the stand of Redundant (that their methods of perception are correct and complete, that their lens is perfect), and then they walk up to the universe and observe it.

>>>>
Read /Unskilled and Unaware/ by Krüger and Dunning 1999.
__ https://www.softwaregems.com.au/Documents/Reference/Unskilled%20%26%20Unaware%201999.pdf
<<<<

The Redundant stand is hysterically absurd. The Primary stand is less blind, but still crippled.

Outside the asylum, the concept of science is to observe the universe, and then to make hypotheses /from/ the observations.

Viewing the universe through the single lens of an OO Object is stupefying. Thirty years, and they still have not figured it out.

----

I won’t define it here, but at least to understand the first instance, the difference: the goals and thus the principles in data science, are distinctly different to that of process science. Either one must not view the other from its own lens, but from the [defined] other’s lens.

If you are at all unclear about this, please read this, with this conversation in mind. The OO/OOP/ORM world is in even worse shape now, than then:
__ https://www.softwaregems.com.au/Documents/Article/Application%20Architecture/UTOOS%20Response.pdf

> Now that you have declared that the previous thread. is resolved, I do not understand why /Incomplete/ is still hanging around. Recall, there are two meanings: the original IDEF1X; and the common promoted by ERwin. The former being illegal.

Thus the former is not usable.

And the latter is meaningless, because almost by definition, every set is /Incomplete/. Thus it too is not usable.

So the classification is irrelevant, we can dismiss it.

That is what happens when an academic writes the standard: it is mush. Bridges don’t fall down because it is the engineers, not the lab rats, who write the Standards.

Cheers
Derek

Derek Ignatius Asirvadem

Jun 10, 2021, 10:47:04 AM
> On Thursday, 10 June 2021 at 19:25:02 UTC+10, Derek Ignatius Asirvadem wrote:

> 2.
> This is an error:
> __Person is described by ( SexCode, Title )

Actually no. The Predicate Lexicon is Normalised, only non-key elements are Descriptors.

This is given for the simple understanding:
> __Person is described by ( NameLast, NameFirst, Initial, BirthDate, BirthCountryCode, BirthPlace, Sex, Title )

Updated.
__ https://www.softwaregems.com.au/Documents/Article/Database/Relational%20Model/IDEF1X/Nicola%20IDEF1X%202.pdf

> HunterLicence

The great thing about modelling rather than discussing. Particularly, the great thing about modelling visually, using IDEF1X, rather than hieroglyphics like /R={A, B, C}/, is that mistakes are exposed immediately. Likewise obvious improvements. When I added HunterLicence:
__Person is 0-or-1 HunterLicence
just does not go through the plumbing without event, it gurgles and farts. The table is renamed Hunter:
__Person is 0-or-1 Hunter
__Hunter is identified by, and dependent on, 1 Person
__Hunter is primarily identified by ( PersonNo )
__Hunter is alternately identified by ( CountryCode, StateCode, LicenceNo )
__Hunter is described by ( ExpiryDate, IsRevoked ) [1]

And ...
__Hunter is 1 Person
____Hunter is 1 Person ( <attribute>, ... )
Where <attribute> is Identifier or Descriptor.
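A DDL sketch of that Hunter table (datatypes and constraint names assumed; Person, with PersonNo as its Key, assumed to exist):

CREATE TABLE Hunter (
    PersonNo     int       NOT NULL,
    CountryCode  char(2)   NOT NULL,
    StateCode    char(3)   NOT NULL,
    LicenceNo    char(12)  NOT NULL,
    ExpiryDate   date      NOT NULL,
    IsRevoked    bit       NOT NULL,
    CONSTRAINT Hunter_PK
        PRIMARY KEY ( PersonNo ),
    CONSTRAINT Hunter_AK
        UNIQUE ( CountryCode, StateCode, LicenceNo ),
    CONSTRAINT Hunter_Is_Person_FK
        FOREIGN KEY ( PersonNo )
        REFERENCES Person ( PersonNo )
    )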

Cheers
Derek

Nicola

Jun 11, 2021, 4:35:42 AM
>> Certainly, I would not say that a Hunting License has a NameLast or
>> a BirthDate or that NameLast and BirthDate are descriptors of Hunting
>> License.

>Why not ?
>
>(I am assuming you are not being silly about the wording, but chasing meaning; declarative meaning.)
>
>A Hunting licence is not a piece of paper gambolling around in the Tyrolian Alps (Fragment).
>__If it were a Fragment, sure, you should not say that
>__But a HuntingLicence is not a Fragment
>
>A Hunting licence is a /Power to Act/, that is given to a Person (not animal, not thing), it does not exist
> independently, it exists only in the context of Person.
>
>Predicates, as modelled:
>__HuntingLicence is dependent
>__HuntingLicence is dependent on 1 Person
>__HuntingLicence is identified by 1 Person
>__HuntingLicence is 1 Person
>
>Strictly, the entity is not HuntingLicence but Person-HuntingLicence.
>
>Which is the real world, HuntingLicence is not a piece of paper (the Physical), but an option of Person (the Logical,
> the Real). (Or /a/ progression of Person in the fullness of definition of Person).

So, you are saying that a 0-to-1 relationship is essentially always
"is". The key in the parent and the key in the child is the same, so the
parent and the child must describe the same "molecule", as you call it.
Am I understanding right?

So, using another verb is just for convenience. Consider this, for
instance:

https://www.softwaregems.com.au/Documents/Article/Normalisation/Bridge.pdf

The relationship reads "Problem has a Composer_2". But, that really
means "Problem is a Problem-[With]-Composer_2", doesn't it? Hence,
Composer_2 is an optional attribute of a Problem, and Problem-Composer_2
is described by a Name and a Score, just as Problem is. Right?

Nicola

Nicola

Jun 11, 2021, 4:45:34 AM
On 2021-06-11, Nicola <nic...@nohost.org> wrote:
> So, you are saying that a 0-to-1 relationship is essentially always
> "is".

Sorry, that should read: any identifying 0-to-1 relationship is always
"is".

Nicola

Derek Ignatius Asirvadem

Jun 11, 2021, 7:47:43 AM
Nicola

> On Friday, 11 June 2021 at 18:35:42 UTC+10, Nicola wrote:

Whoa. You are responding to the first item that perturbed you. Please read again, there is a bigger item coming up, or stated otherwise, the item is bigger than you think.

Please read all my 4 posts, slowly, if I may request that. Think about it. And then your question will not be so narrow.

> So, you are saying

I am not saying anything. This is science, truth is permanent. I am articulating the Relational Model, the science, the knowledge (certainty), truth that does not change (as distinct from Modern “science” wherein “knowledge” is actually speculation, and “truth” is forever changing). As requested by various people, over the decades. It has nothing to do with me, it is not my opinion, I merely articulate Codd. At best, one might say, whereas the /RM/ is seminal, I deliver the genus and species from that semen. I do a poor job in this medium.

The /RM/ has not changed, the truths have not changed. The content is available, though under a mountain of filth. However, for the meaning; for the possibilities; the relevance in an implementation, you need one of the few guys like me, who call pig poop presented as “science” for what it is, and know the truths well enough to provide a definition.

> that a 0-to-1 relationship is essentially always "is”.

Not quite. Too loose. A bit more definition.

1. Think: Codd’s 3NF and Full Functional Dependency.

2. Not just a 1::0-to-1, but also 1::1 and 1::0-to-n. That is, all Subordinate relations.
It is not restricted to 1::0-or-1 relations.

3. The Verb on the relation line is specific, read in both directions.
The generic Verb for any child-to-parent relation is /is/.

4. In a loose way, not recommended for technical people, sure, a 1::0-to-1 relationship is essentially /is/, but it needs to be made more specific.

5. It applies from the child to a parent at any level in the hierarchy.

> The key in the parent and the key in the child is the same, so the
> parent and the child must describe the same "molecule", as you call it.
> Am I understanding right?

Not quite.

Two separate concepts. Don’t conflate them.

1. One concept; the Atom. Purpose: deep understanding.

Fact = Key = Atom.

Person is the atom … the PK is the nucleus, the attributes are electrons.

Therefore, relative to Person, any Subordinate (eg. Employee; Hunter; etc) is a Molecule containing Person at the centre.

In Bridge DM:
Person is the atom
Person-Composer is a molecule. But a simple one, like [O2].
Person-Composer-Problem is a molecule.
Person-Composer-Problem-Composer_2 is a molecule, a rare case of the previous molecule.
Person-Composer-Problem-Solution is a molecule, the largest.

Problem is an Atom, every Fact is.
Problem-* is *NOT* a molecule.

Separately, do not ever split the atom.

2. Another concept. Child-to-parent-at-any-level. Purpose: straight /RM/ & 3NF articulation.

Wherever there is an hierarchy, eg. Country to Street.
Thus far the relations are Identifying, but this applies to any parental relation, both Identifying and Non-Identifying.
But of course it is limited to a child that has the parent (any level) Key as a FK. In an Identifying hierarchy, the child has all the keys for its lineage; in a Non-Identifying hierarchy, the child has just the one key for its single-level parent.

Think Codd’s 3NF & FFD only.

<FK in Child> <Verb> 1 <Parent(PK)>

The proper <Verb> is specific.
The generic <Verb> is /is/.

> So, using another verb is just for convenience.

No.
The proper <Verb> is specific, and the relation is labelled as such.
The generic <Verb> is /is/.
It is not a matter of convenience.
In Logic, both Genus and Species have to be understood properly

One could also say, accurately, Person is the genus, Composer is the species.

--------

> The relationship reads "Problem has a Composer_2".

Yes.

> But, that really means "Problem is a Problem-[With]-Composer_2", doesn't it?

Yes.

> Hence, Composer_2 is an optional attribute of a Problem,

Yes.
Specifically, ComposerId_2 is an optional attribute of Problem.
Deployed to an optional table in order to prevent Null.
Some idiot calls this “6NF”, it is not even an NF.
Therefore Composer_2 is an Extension [formal Logic term] of Problem, or “in the Extension”.

In general, for all 1::0-or-1 cases, the chain is progression of some sort. Look at the pink, the horizontal progression in this DM:
__ https://www.softwaregems.com.au/Documents/Student_Resolutions_2020/Giannis/RoomMonitor%20DM%20V0_51.pdf

> and Problem-Composer_2 is described by a Name and a Score, just as Problem is. Right?

Yes.
Relatively, Problem is the Atom, Problem-Composer_2 is the Molecule.
Materially, in generic terms, Composer_2 /is/ 1 Problem.

And
Problem-Composer_2[ ComposerId ] is described by 1 ( FirstName, LastName, BirthDate, etc )

And
Problem-Composer_2[ ComposerId_2 ] is described by 1 ( FirstName, LastName, BirthDate, etc )

And
Solution is solved by 1 Person ( FirstName, LastName, BirthDate, etc )

It is about the Keys; Relational Keys; each component Key, and what that means about the attributes of a parent, at any level in the hierarchy, Identifying or not.

Now for relations with other cardinalities.
Each Solution is described by ( FirstName, LastName, etc )

Composer_2 is one Fact, and has 2 Keys (PK & FK). 3 Keys if you count components.
For Key ( ComposerId ), ComposerId is 1 ( FirstName, LastName, BirthDate, etc )
For Key ( ComposerId_2 ), ComposerId_2 is 1 ( FirstName, LastName, BirthDate, etc )
Note that that is Non-Identifying.
For Key ( ComposerId, ProblemNo ), ComposerId has 1 ( FirstName, LastName, BirthDate, etc )

--------

This:
> The key in the parent and the key in the child is the same,

has nothing to do with this:
> so the parent and the child must describe the same "molecule"

The “key in the parent and the key in the child is the same” simply means a horizontal progression of the parent.

The parent and child are each atoms.
The parent+child is a molecule.

The parent-PK in the child as FK, which is child-to-parent 1::1, means the parent attributes are 1::1 for the child, thus the parent attributes are the child attributes.

--------

That Bridge problem is from my days of hard labour at TTM gulag. After three years of trying to teach the /RM/ to the freaks, I realised that they were purposely subverting the /RM/, and I gave up. But I did collect a lot of ammo. Whereas they are slaves of falsity; never resolved; ever-changing, I am a slave of truth, which is single and permanent.

Regards
Derek

Nicola

Jun 11, 2021, 2:48:27 PM
On 2021-06-11, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
> Whoa. You are responding to the first item that perturbed you.
> Please read again, there is a bigger item coming up, or stated
> otherwise, the item is bigger than you think.

No, I have read all of your replies. Then, I have replied to the point
for which I wanted a clarification. This post of yours provides the
required clarification (and more). In particular:

> 3. The Verb on the relation line is specific, read in both directions.
> The generic Verb for any child-to-parent relation is /is/.

> Think Codd’s 3NF & FFD only.
>
><FK in Child> <Verb> 1 <Parent(PK)>
>
> The proper <Verb> is specific.
> The generic <Verb> is /is/.

Finally I understand the sense in which you use those verbs.

>> The key in the parent and the key in the child is the same, so the
> parent and the child must describe the same "molecule", as you call it.
> Am I understanding right?
>
> Not quite.
>
> Two separate concepts. Don’t conflate them.

Ok, from your extensive explanation, I've also got the meaning in which
you use the term "molecule".

> In an Identifying hierarchy, the child has all the keys for its
> lineage; in a Non-Identifying hierarchy, the child has just the one
> key for its single-level parent.

That's a prominent remark, as simple as it is. When you learn that, you
(a) understand why IDEF1X distinguishes between identifying and
non-identifying, and (b) you look at the diagrams in a novel way,
immediately perceiving the (join) relationships between entities that
are far apart. At least, this is what I felt when I first understood
that. I don't think any other notation lets you say: "I can (sensibly)
join that entity on the upper-left corner with that other entity in the
lower-right corner" so easily.
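For example, against the Country..Street hierarchy in the Order DM (a sketch, assuming the Identifying Keys migrate down under the names used here):

SELECT  C.Name  AS CountryName,
        S.Suburb
    FROM Suburb   S
    JOIN Country  C ON C.CountryCode = S.CountryCode
    -- Suburb already carries CountryCode in its Key,
    -- so no intermediate State or Town joins are needed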

> That Bridge problem is from my days of hard labour at TTM gulag.

Not directly relevant to the topic of this thread, and possibly not
relevant to the formulation of that problem, but... your
Solver_Not_Composer constraint only ensures that the solver is not the
main composer. But nothing in that model prevents a solver from being the
same as the second composer (i.e., SolverID may be Composer_Id_2).

I would have designed the model differently:

Person[PersonId | LastName FirstName ...]
Problem[ComposerId.PersonId ProblemNo | Name Score ...]
ProblemStakeholder[ComposerId ProblemNo StakeholderId.PersonId Role]
JointComposer[ComposerId ProblemNo JointComposer.StakeholderId]
Solver[ComposerId ProblemNo Solver.StakeholderId]

where ProblemStakeholder is an exclusive generalization of JointComposer
and Solver. Then, a simple check constraint on ProblemStakeholder
(ComposerId ≠ StakeholderId) plus the exclusive subtyping would prevent
composers from being solvers of their own problems. An upper bound on the
maximum number of composers or solvers for a problem could be added as
in your model.

Relationships:

Each Person invents 0-N Problems
Each Problem has 0-N ProblemStakeholders
Each ProblemStakeholder is one of JointComposer or Solver
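A DDL sketch of the ProblemStakeholder table and that check (datatypes, constraint names and primary key composition assumed; Person and Problem assumed to exist with the keys shown above):

CREATE TABLE ProblemStakeholder (
    ComposerId     int          NOT NULL,  -- PersonId of the main composer
    ProblemNo      int          NOT NULL,
    StakeholderId  int          NOT NULL,  -- PersonId of the joint composer or solver
    Role           varchar(20)  NOT NULL,
    CONSTRAINT ProblemStakeholder_PK
        PRIMARY KEY ( ComposerId, ProblemNo, StakeholderId ),
    CONSTRAINT ProblemStakeholder_Problem_FK
        FOREIGN KEY ( ComposerId, ProblemNo )
        REFERENCES Problem ( ComposerId, ProblemNo ),
    CONSTRAINT ProblemStakeholder_Person_FK
        FOREIGN KEY ( StakeholderId )
        REFERENCES Person ( PersonId ),
    CONSTRAINT Stakeholder_Not_Composer
        CHECK ( ComposerId <> StakeholderId )
    )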

Regards,
Nicola

Derek Ignatius Asirvadem

Jun 12, 2021, 12:08:25 AM
> On Saturday, 12 June 2021 at 04:48:27 UTC+10, Nicola wrote:
> > On 2021-06-11, Derek Ignatius Asirvadem wrote:

> > That Bridge problem is from my days of hard labour at TTM gulag.
> Not directly relevant to the topic of this thread, and possibly not
> relevant to the formulation of that problem, but... your
> Solver_Not_Composer constraint only ensures that the solver is not the
> main composer. But nothing in that model prevents a solver to be the
> same as the second composer (i.e., SolverID may be Composer_Id_2).

Yes.
The requirement went back and forth for a while, as the freaks proposed (eg) that you could not do this or that in the /RM/, or that this or that was not possible in SQL, which I then gave the solution for. As usual, they left it, without finishing; without closure; without acknowledgement of the truth (neither the /RM/ nor SQL is limited). So the model has a weird set of constraints, per their last set of requirements, and it is not finished.

I have alluded to the straight-forward method in the notes.

> I would have designed the model differently:
>
> Person[PersonId | LastName FirstName ...]
> Problem[ComposerId.PersonId ProblemNo | Name Score ...]
> ProblemStakeholder[ComposerId ProblemNo StakeholderId.PersonId Role]
> JointComposer[ComposerId ProblemNo JointComposer.StakeholderId]
> Solver[ComposerId ProblemNo Solver.StakeholderId]

Ok.
(Minor point, I use round brackets for that, and square brackets only for the Key. Pipe means OR per various languages.)

>
> where ProblemStakeholder is an exclusive generalization of JointComposer
> and Solver. Then, a simple check constraint on ProblemStakeholder
> (ComposerId ≠ StakeholderId) plus the exclusive subtyping would prevent
> composers to be solvers of their own problems.

Very thoughtful.

The Check Constraint is fine, but it will not be simple. There are five conditions to be checked.

What in heavens name is “exclusive generalization”, do you have a definition or link ?

> An upper-bound on the
> maximum number of composers or solvers for a problem could be added as
> in your model.

It is already there:
- the cardinality in Composer_2 means max 1 JointComposer
- the Constraint /Solver_Max_4/ means exactly that.
The DDL link gives all DDL including the Functions called by the Constraints.

> Relationships:
>
> 1. Each Person invents 0-N Problems
> 2. Each Problem has 0-N ProblemStakeholders
> 3. Each ProblemStakeholder is one of JointComposer or Solver

Fine.
You need:
__ ProblemStakeholder needs a reference for the StakeholderId FK
__ 4. Each Person claims 0-N ProblemStakeholders

Erect the model and have a good look. Although it is fine, and it works, it is just not straight-forward (a desirable element in logic).

Rather than fixing-up the Bridge model, which had a specific argumentation purpose, I would erect a logically straight-forward model. Ends up with fewer Facts.

____ https://www.softwaregems.com.au/Documents/Article/Normalisation/Bridge%202.pdf

____________

I will get to the rest later.

Regards
Derek

Derek Ignatius Asirvadem

Jun 12, 2021, 3:07:38 AM
Nicola

> On Saturday, 12 June 2021 at 04:48:27 UTC+10, Nicola wrote:
> > On 2021-06-11, Derek Ignatius Asirvadem wrote:
> > Whoa. You are responding to the first item that perturbed you.
> > Please read again, there is a bigger item coming up, or stated
> > otherwise, the item is bigger than you think.
>
> No, I have read all of your replies. Then, I have replied to the point
> for which I wanted a clarification. This post of yours provides the
> required clarification (and more).

The problem is, I don’t know what you don’t know. I find out by way of your questions, and only then, can I provide an answer.

Sometimes you have fixed ideas about something (here the meaning of a chain of solid lines; previously, that a Predicate is an un-integrated fragment), so it takes a bit of effort to get you past that, and into the new understanding.

Frankly, I had no idea that you did not know the relevance of Keys; component Keys; and solid/dashed lines. It happens occasionally with a bad developer, but with your stature and papers, I was sure you understood the /RM/ and its rendition IDEF1X.

> In particular:
>
> > 3. The Verb on the relation line is specific, read in both directions.
> > The generic Verb for any child-to-parent relation is /is/.
> > Think Codd’s 3NF & FFD only.
> >
> ><FK in Child> <Verb> 1 <Parent(PK)>
> >
> > The proper <Verb> is specific.
> > The generic <Verb> is /is/.
> Finally I understand the sense in which you use those verbs.

But is it not normal English ?
Is not FOPC an articulation of natural language (proper language, not the primitives that have no alphabet) ?
Referring to your paternal great-grandfather, let’s say his name is Vittorio Vitacolonna, why can’t I say /Nicola is a Vittorio/ ?
And if you really have his character, /Nicola is Vittorio/ ?
Let’s say your maternal grandfather is Corleone Volpe, why can’t I say /Nicola is a fox/ ?

Race; Language; Lineage; Hierarchy is everything. Modernism suppresses that, and turns everything into isolated fragments, which by definition, ha ha, are “equal”. Same as the evil of communism.

--------

> > > The key in the parent and the key in the child is the same, so the
> > parent and the child must describe the same "molecule", as you call it.
> > Am I understanding right?
> >
> > Not quite.
> >
> > Two separate concepts. Don’t conflate them.
>
> Ok, from your extensive explanation,

You are most welcome.

> I've also got the meaning in which
> you use the term "molecule".

Yeah. The concept of an Atom is mine, but I credit Codd for laying it out in his 3NF (refer all my previous comments referring to Codd’s 3NF). So it is just a Progression of the /RM/, by a faithful disciple, after many years of experience, rather than an addition.

Once you have the Atom, you can understand Molecules.

I did say, it was for deep understanding. I have not defined it fully here. I will give you one more increment.
- All tables/facts are Atoms
- Only a root Atom can be the start of a Molecule that is relevant (perceiving a Molecule in the middle of a tree is hopeless).
--- that means a Hierarch (independent, square corners)

--------

> > In an Identifying hierarchy, the child has all the keys for its
> > lineage; in a Non-Identifying hierarchy, the child has just the one
> > key for its single-level parent.
>
> That's a prominent remark, as simple as it is. When you learn that, you
> (a) understand why IDEF1X distinguishes between identifying and
> non-identifying, and (b) you look at the diagrams in a novel way,
> immediately perceiving the (join) relationships between entities that
> are far apart. At least, this is what I felt when I first understood
> that.

That is what I meant when I said /the item is bigger than you think/. I knew a big AHA was coming. But I was shocked that you did not know that, and surprised again, when several increments of explanation were required.

> That's a prominent remark, as simple as it is.

Please see if you can answer from your previous [before enlightenment] IDEF1X mindset: given that the components of the Key are above the line, and the solid lines define lineage, why was that not known ?

(It is not your fault. I suspect, it is the usual programming that passes for “education” these days: denial of the hierarchy; denial of the atom; treatment of fragments instead; denial of the Logical; focus on the Physical instead; implement a RecordID framework and shove the data into it; etc. Universities teach schizophrenic thinking.) You were quite able to perceive the connection between two proximal articles, but not two distal articles, even though the connection was via solid lines.

> When you learn that, you
> (a) understand why IDEF1X distinguishes between identifying and
> non-identifying,

That distinction is important, fundamental.

It is not IDEF1X making the distinction. It is the /RM/ that makes the distinction, IDEF1X just provides the notation for the /RM/ requirement. (Fifty years have passed and no academic or textbook author has articulated the /RM/. Here is another increment.) It is seminal, right. Here is one more genus and species. In §1.4 Normal Form:
- first, the requirement for Trees; Directed Acyclic Graphs; no circular references
- arrange the data into hierarchies
- straight from the Hierarchic Model
--- Fig 3(a) Unnormalised Set
--- without having to think very hard, or concoct a name, this is ==[ Hierarchic NF ]==
- the pre-requisite for ...

--- Fig 3(b) Normalised Set
--- without having to think very hard, or concoct a name, this is ==[ Relational Key NF ]==
- since the Key of the parent is (a) migrated to the child, and (b) forms the PK of the child, it is an *Identifying* relation.
- solid line

Ok, so what about simple look-up tables, etc, (a) the Keys of which are FKs in the child, (b) but not in the child PK. It is a *Non-Identifying* relation. It is no less a relation, but it is Non-Identifying. We don’t need an NF for this (if we did it might be *Reference NF*). But we do need to differentiate it from the Relational Keys, the Identifying relations.
- dashed line

____ FOPC[ Relation ]-->RM[ Identifying | Non-Identifying ]-->IDEF1X-->{ Solid | Dashed }
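A DDL sketch of the contrast (table, column and constraint names and datatypes assumed for illustration):

CREATE TABLE Country (
    CountryCode  char(2)      NOT NULL,
    Name         varchar(30)  NOT NULL,
    CONSTRAINT Country_PK PRIMARY KEY ( CountryCode )
    )

-- Identifying: the parent Key migrates into the child Primary Key (solid line)
CREATE TABLE State (
    CountryCode  char(2)      NOT NULL,
    StateCode    char(3)      NOT NULL,
    Name         varchar(30)  NOT NULL,
    CONSTRAINT State_PK PRIMARY KEY ( CountryCode, StateCode ),
    CONSTRAINT State_Is_In_Country_FK
        FOREIGN KEY ( CountryCode ) REFERENCES Country ( CountryCode )
    )

-- Non-Identifying: the parent Key is a plain FK, outside the child Primary Key (dashed line)
CREATE TABLE Person (
    PersonNo          int      NOT NULL,
    BirthCountryCode  char(2)  NOT NULL,
    CONSTRAINT Person_PK PRIMARY KEY ( PersonNo ),
    CONSTRAINT Person_BornIn_Country_FK
        FOREIGN KEY ( BirthCountryCode ) REFERENCES Country ( CountryCode )
    )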

It was the same with Predicates in the previous thread. You knew this chain of dependency, but you paid it lip service, you did not understand it deeply. If you remove FOPC, the /RM/ ceases to exist. Using the /RM/ without regard to Predicates is like building a house without a foundation, a house copied from another, that looked so wonderful precisely because it had a foundation.

____ FOPC[ Predicate ]-->RM[ PredicateTerse ]-->IDEF1X-->[ PredicateGraphic ]

Likewise, if you remove the /RM/, IDEF1X ceases to exist, it has no purpose other than graphic representation of FOPC, constrained by, or subsetted for, the /RM/. (Of course, with the exception of stupid things added by the author, which have to be removed in any case, as per that thread.)

>>>>
By the way, I have developed a tighter Standard, IDEF1R. Therein I have eliminated this problem of not comprehending this issue.
<<<<

--------

> I don't think any other notation lets you say: "I can (sensibly)
> join that entity on the upper-left corner with that other entity in the
> lower-right corner" so easily.

Yes. There is no other notation for the Relational Model, IDEF1X is the Standard. Now, you probably appreciate even more, how stupid ERD modelling is. Anyone using any other notation needs to be taken out the back and shot, before he infects anyone else. Anyone teaching ERD after 1987 needs to be incinerated. Even though I know it is being taught, in every university, I still can’t believe it.

Regards
Derek

Nicola

Jun 12, 2021, 4:22:59 AM
On 2021-06-12, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
> What in heavens name is “exclusive generalization”, do you have a definition or link ?

Ok, read: exclusive basetype.

>> An upper-bound on the
>> maximum number of composers or solvers for a problem could be added as
>> in your model.
>
> It is already there:
> - the cardinality in Composer_2 means max 1 JointComposer
> - the Constraint /Solver_Max_4/ means exactly that.
> The DDL link gives all DDL including the Functions called by the Constraints.

Sure, I didn't mean that your model was lacking those constraints, just
that in my variant they could also be easily enforced.

>> Relationships:
>>
>> 1. Each Person invents 0-N Problems
>> 2. Each Problem has 0-N ProblemStakeholders
>> 3. Each ProblemStakeholder is one of JointComposer or Solver
>
> Fine.
> You need:
> __ ProblemStakeholder needs a reference for the StakeholderId FK
> __ 4. Each Person claims 0-N ProblemStakeholders
>
> Erect the model and have a good look. Although it is fine, and it
> works, it is just not straight-forward (a desirable element in logic).
>
> Rather than fixing-up the Bridge model, which had a specific
> argumentation purpose, I would erect a logically straight-forward
> model. Ends up with fewer Facts.
>
> ____ https://www.softwaregems.com.au/Documents/Article/Normalisation/Bridge%202.pdf

You may update the rendition of my version using exclusive subtyping,
because that is what I meant. Then, Solver_Not_Composer,
Solver_Not_JointComposer, and JointComposer_Not_Composer all reduce to
a single check in StakeHolder.

Regarding your version, I am not sure what Composer_Primary_Max_1 and,
especially, Composer_Secondary_Max_1 mean. I believe that the former
should enforce one ("max" = at most? Shouldn't it be "exactly"?)
primary composer for each problem.

In reply, I would change it as follows (using parentheses, and / as
a separator between the primary key and other attributes):

Person(PersonId / LastName ...)
Problem(ProblemName / Score ...)
ProblemStakeholder(PersonId ProblemName / Role)
Composer(Composer.PersonId ProblemName)
Solver(Solver.PersonId ProblemName / Solution Date)

Relationships:

Each Person acts as 0-N ProblemStakeholders
Each Problem is built around 1-N ProblemStakeholders
Each ProblemStakeholder is an excl. basetype, one of {Composer, Solver}

Additional constraints:

1. A problem must have at least one [exactly one] primary composer.
2. A problem can have at most M composers.
3. A problem can have at most N solvers.

Here, I am assuming that Role ranges over {primary,secondary,solver}.

This is slightly less straightforward, but it would allow me to record
additional information about problem stakeholders regardless of their
role.

Nicola

Nicola

Jun 12, 2021, 5:04:17 AM
On 2021-06-12, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> > 3. The Verb on the relation line is specific, read in both directions.
>> > The generic Verb for any child-to-parent relation is /is/.
>> > Think Codd’s 3NF & FFD only.
>> >
>> ><FK in Child> <Verb> 1 <Parent(PK)>
>> >
>> > The proper <Verb> is specific.
>> > The generic <Verb> is /is/.
>> Finally I understand the sense in which you use those verbs.
>
> But is it not normal English ?

As a non-native speaker, there will always be nuances of the English
language I won't grasp.

>I will give you one more increment.
> - All tables/facts are Atoms
> - Only a root Atom can be the start of a Molecule that is relevant (perceiving a Molecule in the middle of a tree is hopeless).
> --- that means a Hierarch (independent, square corners)

That's clear.

> --------
>
>> > In an Identifying hierarchy, the child has all the keys for its
>> > lineage; in a Non-Identifying hierarchy, the child has just the one
>> > key for its single-level parent.
>>
>> That's a prominent remark, as simple as it is. When you learn that, you
>> (a) understand why IDEF1X distinguishes between identifying and
>> non-identifying, and (b) you look at the diagrams in a novel way,
>> immediately perceiving the (join) relationships between entities that
>> are far apart. At least, this is what I felt when I first understood
>> that.
>
> That is what I meant when I said /the item is bigger than you think/.
> I knew a big AHA was coming. But I was shocked that you did not know
> that, and surprised again, when several increments of explanation were
> required.

Well, it's not that I have understood that yesterday. But your comment
has reminded me of that aha! moment I indeed experienced.

> By the way, I have developed a tighter Standard, IDEF1R. Therein
> I have eliminated this problem of not comprehending this issue.

Perhaps, some day the world will be enlightened then? Or will your
legacy remain a privilege for few?

Regards,
Nicola

Derek Ignatius Asirvadem

Jun 12, 2021, 7:29:00 AM
> (Modelling instructions)

Done.

Doc updated:
____ https://www.softwaregems.com.au/Documents/Article/Normalisation/Bridge%202.pdf

At this stage we are getting into implementation standards, such as naming. Which is perfectly fine and ordinary. But then an explanation is required. No time for that now, sorry. I trust you will see some benefit from the usage sans explanation. Consideration: finding an object among hundreds of its type, not just the 5 to 10 objects in the subject area.

In any case, it is your model, therefore my wishes have no say.

I have used the IDEF1R symbol for ExclusiveSubtype, with the IDEF1X in the left margin for comparison. Please choose.

The rest, later.

Cheers
Derek

Nicola

Jun 12, 2021, 9:12:07 AM
On 2021-06-12, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> (Modelling instructions)
>
> Done.
>
> Doc updated:
> ____ https://www.softwaregems.com.au/Documents/Article/Normalisation/Bridge%202.pdf

Solver_Not_Composer is implied by *_IsExclusive.

What issues do you see with a discriminator assigning more than one
value to a subtype?

primary or secondary -> Composer
solver -> Solver

Also, I don't understand this sentence:

The cumbersome <Role.PK> is replaced with <Role>.

I haven't put Role in the primary key.

> I have used the IDEF1R symbol for ExclusiveSubtype, with the IDEF1X in
> the left margin for comparison. Please choose.

Your symbol is fine.

Nicola

Nicola

Jun 12, 2021, 9:45:52 AM
On 2021-06-12, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> (Modelling instructions)
>
> Done.
>
> Doc updated:
> ____ https://www.softwaregems.com.au/Documents/Article/Normalisation/Bridge%202.pdf

One more thing. Your constraints are implemented as CONSTRAINT.. CHECK
clauses. Those are checked before every insert or update. A constraint
such as Solver_Max_4 would correctly prevent you from inserting a fifth
solver. But it would also prevent you from updating an existing solver when
there are already four of them.

Triggers would provide a trivial solution for that, but AFAIK you don't
use them (and I agree with you on avoiding them as much as possible).
So, how do you deal with that? Delete plus insert, instead of update?

Nicola

Derek Ignatius Asirvadem

Jun 12, 2021, 9:48:12 PM
> On Saturday, 12 June 2021 at 19:04:17 UTC+10, Nicola wrote:
> > On 2021-06-12, Derek Ignatius Asirvadem wrote:
> >> > 3. The Verb on the relation line is specific, read in both directions.
> >> > The generic Verb for any child-to-parent relation is /is/.
> >> > Think Codd’s 3NF & FFD only.
> >> >
> >> ><FK in Child> <Verb> 1 <Parent(PK)>
> >> >
> >> > The proper <Verb> is specific.
> >> > The generic <Verb> is /is/.
> >> Finally I understand the sense in which you use those verbs.
> >
> > But is it not normal English ?
>
> As a non-native speaker, there will always be nuances of the English
> language I won't grasp.

Understood. But I must say, your English is so good, that I forget that it is not your mother tongue.

--------

> On Saturday, 12 June 2021 at 17:07:38 UTC+10, Derek Ignatius Asirvadem wrote:
>
> [... I trust you understand and appreciate ...]
>
> ____ FOPC[ Relation ]-->RM[ Identifying | Non-Identifying ]-->IDEF1X-->{ Solid | Dashed }
>
> ____ FOPC[ Predicate ]-->RM[ PredicateTerse ]-->IDEF1X-->[ PredicateGraphic ]

Therefore:
the entire IDEF1X data model is a Logic Map,
__an FOPC Logic Map
____of Predicates
______(Constrained to a subset of FOPC for /RM/, a Lexicon)
____Trees as per /RM/ §1.4
______one Tree for each of the very few Hierarchs

--------

> On Saturday, 12 June 2021 at 23:12:07 UTC+10, Nicola wrote:
> > On 2021-06-12, Derek Ignatius Asirvadem wrote:
>
> What issues do you see with a discriminator assigning more than one
> value to a subtype?

I think you mean /What issues do you see with a discriminator being overloaded (more than the range of subtype Discriminators) ?/

The issue here is nothing to do with Discriminators or Subtypes. It has to do with overloading.

You guys are (a) so dyed-in-the-wool attached to the Physical, and (b) totally ignorant of the Logical, that the whole world of the Logical is lost. To obtain perspective, the entire discussion in this thread re lineage in an hierarchy is Logical, but it is not obvious to people who are trained to think in the Physical, one paltry link at a time.

First Normal Form

__ The domain is atomic (indivisible) wrt the platform.

Taken literally and physically, that means one thing. One thinks of proper Datatypes, etc. Good. Some will see that as eliminating Repeating Values, which of course it does but which is incorrect because that is explicitly handled in 2NF. Others will realise this means made up attributes (such as a Part Code AAA999AA) are to be recognised and treated as discrete attributes (AAA; 999; AA). Good. The better modellers will realise that means a MonthNo must not be stored as INT but as a component of DATE Datatype (which automatically limits the range to 1..12). Good. But that is not complete.

The Logical. That harks back to an ancient rule about the Logical, which pertains to engineering of any kind, and also pertains to the Law of Identity (first in the Four Laws of Thought):
__A thing shall contain only what it is named (labelled) to contain.

Stated otherwise, never make a domain do more than one thing. (It is a different point that the components of a Relational Key do more than one thing, because that is by design, not by accident or defect).

Stated otherwise, overlaps of a domain are absolutely prohibited, make two domains.

Stated otherwise, never implement more than one ENUM category in an ENUM table.

If you break this rule, you will have difficulty in (a) expanding the database [eg. adding a Critic Subtype, or Critic Role-not-Subtype], and (b) in code. Maintenance problem: future changes demand revisiting prior code. Whereas if you maintain this rule, all such difficulties are eliminated.

Discretely, then:
__ the Discriminator (demanded by /RM/ and IDEF1X) StakeType is { Composer | Solver }, only.
__ the ComposerType is { Primary | Secondary}, only.

Now adding a Critic Subtype, or a Critic ComposerType, is effortless, because all existing DDL and code is unaffected.
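A DDL sketch of that (table names, datatypes and constraint names assumed; the exclusive-Subtype constraint itself is omitted here):

CREATE TABLE StakeHolder (
    PersonId     int          NOT NULL,
    ProblemName  varchar(60)  NOT NULL,
    StakeType    varchar(10)  NOT NULL,   -- Discriminator: one Domain only
    CONSTRAINT StakeHolder_PK
        PRIMARY KEY ( PersonId, ProblemName ),
    CONSTRAINT StakeHolder_StakeType_CK
        CHECK ( StakeType IN ( 'Composer', 'Solver' ) )
    )

CREATE TABLE Composer (
    PersonId      int          NOT NULL,
    ProblemName   varchar(60)  NOT NULL,
    ComposerType  varchar(10)  NOT NULL,  -- separate Domain, not overloaded into StakeType
    CONSTRAINT Composer_PK
        PRIMARY KEY ( PersonId, ProblemName ),
    CONSTRAINT Composer_Is_StakeHolder_FK
        FOREIGN KEY ( PersonId, ProblemName )
        REFERENCES StakeHolder ( PersonId, ProblemName ),
    CONSTRAINT Composer_ComposerType_CK
        CHECK ( ComposerType IN ( 'Primary', 'Secondary' ) )
    )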

> primary or secondary -> Composer
> solver -> Solver

You have an unnormalised range-of-values [wannbe Domain] { Primary | Secondary | Solver }, which covers two established Domains (Discriminator, ComposerType), but you have not perceived that as such because you have the notion that Composers and Solvers are aggregated into Stakes. That lack of discrimination at the Stake level leads to lack of discrimination at the Role level, and the fact that now Role breaks [Logical] 1NF.

This is a common reductionist error. See Maslow on Reduction ~1930.

So in the previous model 12.2 Jun 21, I as DBA policeman re what goes into the db, allowed your Role, but insisted on the required Discriminator Staketype. In the next iteration, I will give ComposerType. (This is modelling, yes, we have many iterations). You tell me what you want.

--------

> Also, I don't understand this sentence:
>
> > The cumbersome <Role.PK> is replaced with <Role>.
>
> I haven't put Role in the primary key.

Not the column name Role, but the /RM/ concept of Role.

In this, you have:
>>> Composer(Composer.PersonId ProblemName)

Sorry for being terse. Role is a qualifier of the PK. I assumed you were following Codd (correct), and IDEF1X notation (correct but cumbersome). Erwin does that automatically.

But /Composer.PersonId/ cannot be implemented as a column name, the column name would be /ComposerId/ of Datatype /PersonId/. So it is silly to have that in the Physical, but not in the Logical. Therefore the theoretically proper /Composer.PersonId/ is replaced with the full Role /ComposerId/ in the Logical as well.

--------

> On Saturday, 12 June 2021 at 18:22:59 UTC+10, Nicola wrote:
>
> Then, Solver_Not_Composer, Solver_Not_JointComposer, and JointComposer_Not_Composer all reduce to a single check in StakeHolder.

Implementation standards again.
Mine require each CHECK constraint to do just one logical thing, generally that means one <LogicalCondition> per constraint. It may have more than one Logic clause, but not more than one Logic condition, eg. my Composer_Secondary_Max_1 constraint. That allows the DBA to turn off/on single <LogicConditions>, whereas the single CHECK results in all <LogicConditions> being turned off/on, or worse, having to write a new CHECK constraint just to get the maintenance task done.

While I am here, let me point out something about naming. For meaning and brevity, the constraint name is:
__ Secondary_Max_1
The table prefix is excluded in the model at the table entry, but demanded in the physical (otherwise you will never find the constraint in the list of constraints), and is so listed at the expansion at the bottom:
__ Composer_Secondary_Max_1
Along with the full CHECK condition given in pseudo-sql (my notation, which you have seen before):
__ CHECK Composer[ @Primary ] EXISTS
__ AND
__ CHECK Composer[ @Secondary ] NOT EXISTS

See, even typing that exposes that it is incorrect. Ah, the wonder of the iterations in data modelling.
__ CHECK Composer.IsPrimary[ @ProblemName ] EXISTS
__ AND
__ CHECK ( Composer.IsPrimary[ @ProblemName ] = 0 ) NOT EXISTS

Whereas with the single CHECK, the name is confused and the multiple conditions are not readily visible. Note especially the error message that your platform or suite-of-programs throws: with a properly named constraint it informs you directly, so that you do not have to go elsewhere and scan DDL, and the correction is immediately obvious:
__ Error 23000 ... CHECK Constraint Constraint Composer_Secondary_Max_1 violation occurred
vs
__ Error 23000 ... CHECK Constraint Constraint Composer_AllInOne violation occurred
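
In DDL terms the split looks something like this (a sketch only; the helper Function names are my assumptions for illustration, not the actual constraints in the model):

ALTER TABLE Composer
    ADD CONSTRAINT Composer_Primary_Exists
    CHECK ( dbo.Problem_PrimaryComposerExists( ProblemName ) = 1 )

ALTER TABLE Composer
    ADD CONSTRAINT Composer_Secondary_Max_1
    CHECK ( IsPrimary = 1
         OR dbo.Problem_SecondaryComposerCount( ProblemName ) <= 1 )

-- as opposed to a single Composer_AllInOne constraint carrying both conditions,
-- whose violation message cannot tell the maintainer which condition failed,
-- and which cannot be disabled one condition at a time.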

--------

For a bit of a write-up on naming, see this post. SO actively suppresses the truth, same as the /RM/ saboteurs:
__ https://stackoverflow.com/a/4703155/484814

--------

> On Saturday, 12 June 2021 at 23:45:52 UTC+10, Nicola wrote:
> > On 2021-06-12, Derek Ignatius Asirvadem wrote:
> >
> One more thing. Your constraints are implemented as CONSTRAINT.. CHECK
> clauses. Those are checked before every insert or update. A constraint
> such Solver_Max_4 would correctly prevent you from inserting a fifth
> solver. But it would also prevent you to update an existing solver when
> there are already four of them.
>
> Triggers would provide a trivial solution for that, but AFAIK you don't
> use them (and I agree with you on avoiding them as much as possible).

Absolutely.
I have never written a trigger in my life.
I have examined and replaced over 2,000 triggers in customer databases with proper DDL and Transactions.

> So, how do you deal with that? Delete plus insert, instead of update?

That is a question that should be answered fully. Considerations.

1. Each platform is slightly different.
Even the rules for coding a Function are different. Eg. mine allows a single RETURN, MS/SQL allows multiple RETURNS, which means poor coding is allowed.

2. Non-platforms (suites of hundreds of programs pretending to be a server) are vastly different, not even SQL compliant. So they have additional features that are not SQL, and not necessary in SQL. 100% rewrite when you move to SQL.

3. SQL has been changed to allow changes to DDL from within a Transaction. This is absolutely prohibited.

4. Not on my platform.
The CHECK constraint is fired for UPDATE only if a column named in the constraint is affected.
__ In this case, no columns are named, meaning it is not CHECKed on UPDATE
__ And I can force the constraint to be CHECKed by naming a column
(There is a huge value in commercial SQL platforms, that the freeware crowd cannot even dream of.)

Example from MS/SQL, which is the same as mine in this issue:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/cdb87a2e-41ce-4a97-8a14-3f218eb00721/function-check-constraint-ignored?forum=transactsql

5. For others.
In general, yes, DELETE+INSERT.
Note that one should be used to that, because UPDATE of any Key component is not permitted: it must be DELETE+INSERT, in a Transaction of course, that moves the entire hierarchy belonging to the Key (a sketch follows below).
Every Transaction must validate its attempt before execution, so it is not adding anything substantial (a code block), just a couple of lines.
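
For illustration only, a sketch of such a Transaction (the procedure name, column lists and datatypes are assumed, not taken from the model; error handling is kept minimal): validate first, re-key the parent, move the children, then remove the old hierarchy, all in one Transaction.

CREATE PROCEDURE Problem_Rename_tr
    @OldProblemName  VARCHAR(30),
    @NewProblemName  VARCHAR(30)
AS
BEGIN
    BEGIN TRANSACTION
    -- validate the attempt before execution
    IF NOT EXISTS ( SELECT 1 FROM Problem WHERE ProblemName = @OldProblemName )
    OR EXISTS     ( SELECT 1 FROM Problem WHERE ProblemName = @NewProblemName )
        BEGIN
        ROLLBACK TRANSACTION
        RETURN 1
        END
    -- new parent first (satisfies the FKs), then the children
    INSERT Problem  ( ProblemName, Description )
        SELECT @NewProblemName, Description  FROM Problem  WHERE ProblemName = @OldProblemName
    INSERT Composer ( ProblemName, ComposerId, IsPrimary )
        SELECT @NewProblemName, ComposerId, IsPrimary  FROM Composer WHERE ProblemName = @OldProblemName
    INSERT Solver   ( ProblemName, SolverId )
        SELECT @NewProblemName, SolverId     FROM Solver   WHERE ProblemName = @OldProblemName
    -- then remove the old hierarchy, children before parent
    DELETE Solver   WHERE ProblemName = @OldProblemName
    DELETE Composer WHERE ProblemName = @OldProblemName
    DELETE Problem  WHERE ProblemName = @OldProblemName
    COMMIT TRANSACTION
    RETURN 0
END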

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Jun 13, 2021, 8:08:07 AM6/13/21
to
> On Sunday, 13 June 2021 at 11:48:12 UTC+10, Derek Ignatius Asirvadem wrote:
>
> In the next iteration, I will give ComposerType. (This is modelling, yes, we have many iterations). You tell me what you want.

> [etc]

I forgot to mention, of course the document is updated.

__ https://www.softwaregems.com.au/Documents/Article/Normalisation/Bridge%202.pdf

Regards
Derek

Nicola

unread,
Jun 13, 2021, 11:13:54 AM6/13/21
to
On 2021-06-13, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> On Saturday, 12 June 2021 at 23:12:07 UTC+10, Nicola wrote:
>> > On 2021-06-12, Derek Ignatius Asirvadem wrote:
>>
>> What issues do you see with a discriminator assigning more than one
>> value to a subtype?
>
> I think you mean /What issues do you see with a discriminator being
> overloaded (more than the range of subtype Discriminators) ?/
>
> The issue here is nothing to do with Discriminators or Subtypes. It
> has to do with overloading.

All good points, thanks.

> So in the previous model 12.2 Jun 21, I as DBA policeman re what goes
> into the db, allowed your Role, but insisted on the required
> Discriminator Staketype. In the next iteration, I will give
> ComposerType. (This is modelling, yes, we have many iterations). You
> tell me what you want.

The latest revision, with ComposerType, is fine.

>> Also, I don't understand this sentence:
>>
>> > The cumbersome <Role.PK> is replaced with <Role>.
>>
>> I haven't put Role in the primary key.
>
> Not the column name Role, but the /RM/ concept of Role.

Ah! Got confused by the names. Sure, I have used <Role.PK> for
extra-clarity of the text-only description.

>> On Saturday, 12 June 2021 at 18:22:59 UTC+10, Nicola wrote:
>>
>> Then, Solver_Not_Composer, Solver_Not_JointComposer, and
>> JointComposer_Not_Composer all reduce to a single check in
>> StakeHolder.
>
> Implementation standards again.
> Mine require each CHECK constraint to do just one logical thing,
> generally that means one <LogicalCondition> per constraint.

Ok, makes sense. Again, good points.

> While I am here,let me point out something about naming. For meaning
> and brevity, the constraint name is:
> __ Secondary_Max_1
> The table prefix is excluded in the model at the table entry, but
> demanded in the physical (otherwise you will never find the constraint
> in the list of constraints), and is so listed at the expansion at the
> bottom:
> __ Composer_Secondary_Max_1
> Along with the full CHECK condition given in pseudo-sql (my notation,
> which you have seen before):
> __ CHECK Composer[ @Primary ] EXISTS
> __ AND
> __ CHECK Composer[ @Secondary ] NOT EXISTS

Ok. The notation is pretty much self-explaining.

> See, even typing that exposes that it is incorrect. Ah, the wonder of
> the iterations in data modelling.
> __ CHECK Composer.IsPrimary[ @ProblemName ] EXISTS
> __ AND
> __ CHECK ( Composer.IsPrimary[ @ProblemName ] = 0 ) NOT EXISTS

I think that you still have a typo there. The second clause should read:

[AND] CHECK ( Composer.~~~IsSecondary~~~[ @ProblemName ] = 0 ) NOT EXISTS

> For a bit of a write-up on naming, see this post. SO actively
> suppresses the truth, same as the /RM/ saboteurs:
> __ https://stackoverflow.com/a/4703155/484814

I know that post. I found it while searching for naming standards some
time ago. Good advice, generally speaking, although I do not follow all
of it. For instance, I find the recommendation for large subject areas
(use table prefixes) rather odd, when logical separation is precisely
the purpose of schemas (CREATE SCHEMA). The advice is good for platforms
not supporting schemas, such as MySQL, though.
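
(For concreteness, a minimal sketch of that separation, with illustrative names:

CREATE SCHEMA Ref;

CREATE TABLE Ref.StakeType (
    StakeType  CHAR(8) NOT NULL,
    CONSTRAINT StakeType_PK PRIMARY KEY ( StakeType )
);

whereas the prefix style on a platform without schemas would simply be a table named Ref_StakeType in the single namespace.)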

>> On Saturday, 12 June 2021 at 23:45:52 UTC+10, Nicola wrote:
>> > On 2021-06-12, Derek Ignatius Asirvadem wrote:
>> >
>> One more thing. Your constraints are implemented as CONSTRAINT.. CHECK
>> clauses. Those are checked before every insert or update. A constraint
>> such Solver_Max_4 would correctly prevent you from inserting a fifth
>> solver. But it would also prevent you to update an existing solver when
>> there are already four of them.
>>
>> Triggers would provide a trivial solution for that, but AFAIK you don't
>> use them (and I agree with you on avoiding them as much as possible).
>
> Absolutely.
> I have never written a trigger in my life.
> I have examined and replaced over 2,000 triggers in customer databases
> with proper DDL and Transactions.
>
>> So, how do you deal with that? Delete plus insert, instead of update?
>
> That is a question that should be answered fully. Considerations.

Ok, understood. Nothing to add here.

> 4. Not on my platform.
> The CHECK constraint is fired for UPDATE only if a column named in the
> constraint is affected. __ In this case, no columns are named, meaning
> it is not CHECKed on UPDATE __ And I can force the constraint to be
> CHECKed by naming a column (There is a huge value in commercial SQL
> platforms, that the freeware crowd cannot even dream of.)

Ok, I didn't know that.
Funny example. The OP's constraint wouldn't work even if the constraints
were triggered, because their condition is checked *before* the update
is performed. So, when changing IsActive from true to false in the
parent, or when changing MId to point to a record in M with IsActive set
to false, the corresponding constraint must evaluate to true (otherwise
the data would not be consistent to begin with), so such updates are
allowed.

I do not see how you could prevent them with CONSTRAINT.. CHECK.
Ironically (since you posted that), the first reply in that thread
recommends: "using scalar UDFs that read the database in a CHECK
constraint is not a good practice. Triggers are simpler and more
useful". Indeed, that example would be solved easily with triggers.

Alternatively, you'd have to revoke writes and use stored procedures to
perform semantically meaningful operations. Yes, I know you do that.

I do not see other ways.

> 5. For others.
> In general, yes, DELETE+INSERT.
> Note that one should be used to that because UPDATE any Key component
> is not permitted, it must be DELETE+INSERT, in a Transaction of
> course, that moves the entire hierarchy belonging to the Key.

That is because, in your experience, cascading updates (UPDATE ...
CASCADE) are inefficient, right? I'd like to do some benchmarks. So,
how do you code them?

start transaction;
update Grand1_Grand2...GrandN_Child;
update Grand1_Grand2...Grand(N-1)_Child
...
update Child;
update Parent;
commit;

?

>I forgot to mention, of course the documents is updated.

Looks good to me (except for the above mentioned typo).

Nicola

Nicola

unread,
Jun 13, 2021, 3:30:34 PM6/13/21
to
On 2021-06-13, Nicola <nic...@nohost.org> wrote:
>> Example from MS/SQL, which is the same as mine in this issue:
>> https://social.msdn.microsoft.com/Forums/sqlserver/en-US/cdb87a2e-41ce-4a97-8a14-3f218eb00721/function-check-constraint-ignored?forum=transactsql
>
> Funny example. The OP's constraint wouldn't work even if the constraints
> were triggered,
> [...]
> I do not see how you could prevent them with CONSTRAINT.. CHECK.
> [...]
> I do not see other ways.

No no, of course it can be done with constraints. I got confused by the
fact that the OP defines one function to implement different
constraints.

A good example of issues arising from conflating different conditions.

Nicola

Derek Ignatius Asirvadem

unread,
Jun 13, 2021, 10:44:02 PM6/13/21
to
> On Monday, 14 June 2021 at 01:13:54 UTC+10, Nicola wrote:
> > On 2021-06-13, Derek Ignatius Asirvadem wrote:
> >> On Saturday, 12 June 2021 at 23:12:07 UTC+10, Nicola wrote:

> > See, even typing that exposes that it is incorrect. Ah, the wonder of
> > the iterations in data modelling.
> > __ CHECK Composer.IsPrimary[ @ProblemName ] EXISTS
> > __ AND
> > __ CHECK ( Composer.IsPrimary[ @ProblemName ] = 0 ) NOT EXISTS
>
> I think that you still have a typo there. The second clause should read:
>
> [AND] CHECK ( Composer.~~~IsSecondary~~~[ @ProblemName ] = 0 ) NOT EXISTS
>

The short form notation (LHS only) is:

__ {+-=?} <Table>[ .<Column> ] [ PK [<AK>[ AK ] ] ]

(More can be done with the RHS.)

Where:
__ + means INSERT
__ - means DELETE
__ = means UPDATE
__ ? means SELECT, which is obvious, can be skipped, esp. if the clause contains { [NOT] EXISTS }
__ brackets of course

CHECK is BIT/BOOLEAN. So at best, a Function called by a CHECK should be BIT/BOOLEAN.

When a @ProblemName row exists, and IsPrimary is zero, it is a Secondary.

So this:
__ CHECK ( Composer.IsPrimary[ @ProblemName ] = 0 ) NOT EXISTS
checks that a Secondary does not exist.
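
As a sketch only (the Function name, signature and datatypes are assumed for illustration, not the document's DDL), that second clause could be rendered as a BIT Function, single RETURN as per the standard above:

CREATE FUNCTION dbo.Composer_SecondaryExists ( @ProblemName VARCHAR(30) )
RETURNS BIT
AS
BEGIN
    DECLARE @Exists BIT
    SET @Exists = 0
    IF EXISTS ( SELECT 1
                  FROM Composer
                 WHERE ProblemName = @ProblemName
                   AND IsPrimary   = 0 )      -- IsPrimary = 0 identifies a Secondary
        SET @Exists = 1
    RETURN @Exists                            -- single RETURN, BIT result
END

The CHECK then tests the Function result, which is how the constraint sees rows other than the one being inserted.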

> > For a bit of a write-up on naming, see this post. SO actively
> > suppresses the truth, same as the /RM/ saboteurs:
> > __ https://stackoverflow.com/a/4703155/484814
>
> I know that post. I found it while searching for naming standards some
> time ago. Good advice, generally speaking, although I do not follow all
> of it. For instance, I find the recommendation for large subject areas
> (use table prefixes) rather odd, when logical separation is precisely
> the purpose of schemas (CREATE SCHEMA). The advice is good for platforms
> not supporting schemas, such as MySQL, though.

SQL did not have SCHEMA to begin with, and most of us don’t use it: I don’t know a single DBA who does. The term SubjectArea was introduced with ERwin, and it stuck (Codd uses a similar term in one of his other papers). SCHEMA is just too non-specific, too ambiguous.

We use DBArtisan (you will baulk at the price) almost exclusively for DBA work, on databases that have hundreds of objects of each object type, so finding things is effortless. But we have to be able to access and use the SQL Platform directly, via the character interface, which is [isql], for all server-based work (batch and background processes) and in emergencies (recovery from some failure).

Obviously, use either SubjectAreaPrefix xor SCHEMA.

> >> On Saturday, 12 June 2021 at 23:45:52 UTC+10, Nicola wrote:
> >> > On 2021-06-12, Derek Ignatius Asirvadem wrote:
> >> >
> >> One more thing. Your constraints are implemented as CONSTRAINT.. CHECK
> >> clauses. Those are checked before every insert or update. A constraint
> >> such Solver_Max_4 would correctly prevent you from inserting a fifth
> >> solver. But it would also prevent you to update an existing solver when
> >> there are already four of them.
> >
> > 4. Not on my platform.
> > The CHECK constraint is fired for UPDATE only if a column named in the
> > constraint is affected.
> > __ In this case, no columns are named, meaning it is not CHECKed on UPDATE
> > __ And I can force the constraint to be CHECKed by naming a column
> > (There is a huge value in commercial SQL platforms, that the freeware crowd cannot even dream of.)
>
> Ok, I didn't know that.

> > Example from MS/SQL, which is the same as mine in this issue:
> > https://social.msdn.microsoft.com/Forums/sqlserver/en-US/cdb87a2e-41ce-4a97-8a14-3f218eb00721/function-check-constraint-ignored?forum=transactsql
>
> Funny example. The OP's constraint wouldn't work even if the constraints
> were triggered ...

I gave the reference to demonstrate that [4] exists in the commercial SQL Platforms, to save typing.

Yes, the types that populate the MS/SQL world are stupid. Second only to the types that populate the freeware world (they think their non-sql is SQL, and then they say, SQL is broken).

> Ironically (since you posted that), the first reply in that thread
> recommends: "using scalar UDFs that read the database in a CHECK
> constraint is not a good practice. Triggers are simpler and more
> useful". Indeed, that example would be solved easily with triggers.

1. Triggers are simply stupid; maintenance is horrendous. Totally unnecessary in the real world of Open Architecture, the original Client/Server, where direct writes are not GRANTed, and all updates are via ACID Transactions (stored procs) only. So whether something or other can be solved by a Trigger is irrelevant.
__ https://www.softwaregems.com.au/Documents/Article/Application%20Architecture/Open%20Architecture.pdf

2.
> > CHECK is BIT/BOOLEAN. So at best, a Function called by a CHECK should be BIT/BOOLEAN.

Using Functions that are called by CHECK Constraints is pedestrian, the world has been doing it since 2007. CHECK can only check the inserted row. CHECK+Function can check other rows in the table, or any other table.

3. The first-reply guy is an idiot who does not understand [1][2]. Don’t go and sit beside him.

> Alternatively, you'd have to revoke writes and use stored procedures to
> perform semantically meaningful operations.

Not “revoked”.
Permissions have to be explicitly GRANTed. Just don’t GRANT them.

> Yes, I know you do that.

Everyone in the big iron end of town.
Since 1960, without a break, regardless of platform.

The legs-open-but-don’t-touch-me attitude is the new generation. We just reject it. It is for them (who prosecute the new proposition, the insanity) to justify it. It is not for us to justify rejection of it.

> I do not see other ways.
> > 5. For others.
> > In general, yes, DELETE+INSERT.
> >
> > Note that one should be used to that because UPDATE any Key component
> > is not permitted, it must be DELETE+INSERT, in a Transaction of
> > course, that moves the entire hierarchy belonging to the Key.
>
> That is because in your experience, cascading updates (UPDATE ...
> CASCADE), are inefficient, right?

That starts a whole new subject. I will open a new thread, if you don’t mind.

--------

> On Monday, 14 June 2021 at 05:30:34 UTC+10, Nicola wrote:
I do not assert my Software Gems Standards willy-nilly; they are based on decades of experience. We eliminate that which can be eliminated, prevent that which can be prevented, and reduce the remainder; the goal is an uneventful, undramatic life in production systems. Both his confusion, and yours (think: you are a maintainer, trying to maintain his code), could be eliminated. Single <LogicalCondition> per Constraint is a Good Thing. It is not even comprehensible until you take implementation concerns into account, and have some experience with the effects of poor code.

I will give you another. The most common error in SQL code, and the hardest to detect, is datatype mismatch. Actually, the error messages are numerous, depending on where it occurred. It is also the easiest to prevent. So for me, in 1987, I took notice of what God had written on a stone tablet:

__ THOU SHALT NOT USE RAW DATATYPES

the corollary to which is:

__ THOU SHALT ONLY USE USER DATATYPES

which means, they have to be thought out carefully, and created.
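
A sketch of what that creation looks like (the datatype names are illustrative; the syntax shown is the Sybase / older MS style, newer MS releases use CREATE TYPE instead):

-- create the user datatypes once, per database
EXEC sp_addtype PersonId,    'INT',         'NOT NULL'
EXEC sp_addtype ProblemName, 'VARCHAR(30)', 'NOT NULL'
-- modern MS equivalent:  CREATE TYPE PersonId FROM INT NOT NULL

-- columns are then declared only against user datatypes, never raw datatypes
CREATE TABLE Solver (
    SolverId     PersonId     NOT NULL,      -- Role of PersonId
    ProblemName  ProblemName  NOT NULL
    )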

To me, it is an NF, between 1NF and 2NF.

I have never suffered a datatype mismatch in my life, and if any developer whom I have educated ever suffers one, I break his fingers. They are put on notice during the education: hell awaits those who deny God.

Cheers
Derek

Nicola

unread,
Jun 14, 2021, 3:03:04 AM6/14/21
to
On 2021-06-14, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> I think that you still have a typo there. The second clause should read:
>>
>> [AND] CHECK ( Composer.~~~IsSecondary~~~[ @ProblemName ] = 0 ) NOT EXISTS
> [...]
> So this:
> __ CHECK ( Composer.IsPrimary[ @ProblemName ] = 0 ) NOT EXISTS
> checks that a Secondary does not exist.

I stand corrected.
>> Ironically (since you posted that), the first reply in that thread
>> recommends: "using scalar UDFs that read the database in a CHECK
>> constraint is not a good practice. Triggers are simpler and more
>> useful". Indeed, that example would be solved easily with triggers.
>
> 1. Triggers are simply stupid, maintenance is horrendous. Total
> unnecessary in the real world of Open Architecture,
> [...]
> Using Functions that are called by CHECK Constraints is pedestrian,
> the world has been doing it since 2007. CHECK can only check the
> inserted row. CHECK+Function can check other rows in the table, or any
> other table.

For platforms where that is true, I totally agree.

I can code that way in PostgreSQL, and it is an elegant approach. It
even appears to work in simple tests, which is nice because I can try
that approach. Unfortunately, it is not really supported (it fails under
concurrent loads—have fun with this comment), so triggers are the only
alternative in most cases. But you get what you pay for, right?

On the other hand, a *disciplined* use of triggers, restricted to such
constraints, is fine. It achieves the same results as declarative
constraints, with some added flexibility and efficiency, because you can
specify exactly which operations on which columns under which conditions
should fire the trigger. Error reporting is just as expressive, or more
(custom messages). The trigger shows up just as a constraint associated
to a table when you inspect the table metadata. Just slightly more
verbose to write.

Same as using prefixes because you have no schemas. Not ideal, but still
workable.

Nicola

Derek Ignatius Asirvadem

unread,
Jun 14, 2021, 6:12:06 AM6/14/21
to
> On Monday, 14 June 2021 at 17:03:04 UTC+10, Nicola wrote:
> > On 2021-06-14, Derek Ignatius Asirvadem wrote:
> >
> > 1. Triggers are simply stupid, maintenance is horrendous. Total
> > unnecessary in the real world of Open Architecture,
> > [...]
> > Using Functions that are called by CHECK Constraints is pedestrian,
> > the world has been doing it since 2007. CHECK can only check the
> > inserted row. CHECK+Function can check other rows in the table, or any
> > other table.
>
> For platforms where that is true, I totally agree.
>
> I can code that way in PostgreSQL, and it is an elegant approach. It
> even appears to work in simple tests, which is nice because I can try
> that approach. Unfortunately, it is not really supported (it fails
> under concurrent loads

So you can’t code that way in PusGres. Don’t contradict yourself. Don’t kid yourself, it is a single-user flatporn.

> —have fun with this comment), so triggers are the only
> alternative in most cases. But you get what you pay for, right?

As long as you don’t lie to yourself and say that that is SQL, fine with me.

And if something doesn’t work, don’t lie to yourself and say that SQL is broken; say that your o.o.s pretend-sql can’t do that.

> On the other hand, a *disciplined* use of triggers, restricted to such
> constraints, is fine. It achieves the same results as declarative
> constraints, with some added flexibility and efficiency, because you can
> specify exactly which operations on which columns under which conditions
> should fire the trigger. Error reporting is just as expressive, or more
> (custom messages). The trigger shows up just as a constraint associated
> to a table when you inspect the table metadata. Just slightly more
> verbose to write.

Just don’t say that that is SQL.

Have fun with the 100% rewrite, when you move to an SQL platform.

> Same as using prefixes because you have no schemas. Not ideal, but still
> workable.

Having no schemas is like having no herpes. Tell me, give me one good reason why I should duplicate some subset of my 100% Normalised DDL, an achievement that was not small. Think: Data Modeller and DBA of a financial database of 500 tables in four physical databases. Why should I add this additional layer of abstraction, which I did not need for twenty years before its introduction, to my fully Logical command and control of the database?

And it is not “the same as”. That is comparing something that was produced by design, with something that is an accident, patched up to make it water-tight: such are not comparable. It doesn’t matter that the result looks the same, they are not. The child born out of wedlock is not the same as the child of a sanctified marriage: they look and smell the same, but internally they are quite different. One is the result of Design, it is the effect of a cause, the other is not-caused, an accident of some other intent.

Like the evolutionist fish, that magically developed lungs for no reason, that mystically became advantageous when it mysteriously crawled onto land for no reason, and instantly manifested all the muscles and rib structures required to act as bellows. By accident. I don’t have enough blind faith to believe in such stories.

Cheers
Derek

Derek Ignatius Asirvadem

unread,
Jun 15, 2021, 7:40:30 PM6/15/21
to
Nicola

> On Monday, 14 June 2021 at 01:13:54 UTC+10, Nicola wrote:
> > On 2021-06-13, Derek Ignatius Asirvadem wrote:

> > So in the previous model 12.2 Jun 21, I as DBA policeman re what goes
> > into the db, allowed your Role, but insisted on the required
> > Discriminator Staketype. In the next iteration, I will give
> > ComposerType. (This is modelling, yes, we have many iterations). You
> > tell me what you want.
>
> The latest revision, with ComposerType, is fine.

AFAIC, the data modelling is not quite complete, not resolved. I was expecting a response from you to continue, and bring it to a conclusion. Eg. the obvious evaluation, as the DMs stand, is to progress both yours and mine. Not in the sense of competition, but for understanding.

Eg. in mine, if Composer and Solution have common attributes (other than that in Person), that would be Normalised into a Subtype cluster. Which leads to yours. If not, four tables are the irreducible (“non-redundant”) set.

Please comment.

Cheers
Derek

Nicola

unread,
Jun 16, 2021, 5:35:35 AM6/16/21
to
I think I have already justified my preference for my version, which is
what you are saying: I'd use a cluster because that allows me to extend
the model with information common to both composers and solvers.

Another reason is that the exclusivity between composers and solvers is
more explicit (because of exclusive subtyping), although
implementation-wise Solver_Not_Composer would not be a very different
constraint. Btw, wouldn't you also need a Composer_Not_Solver constraint
associated to Composer?

Nicola

Derek Ignatius Asirvadem

unread,
Jun 16, 2021, 6:51:36 AM6/16/21
to
Nicola

> On Wednesday, 16 June 2021 at 19:35:35 UTC+10, Nicola wrote:
> > On 2021-06-15, Derek Ignatius Asirvadem wrote:
> >
> > AFAIC, the data modelling is not quite complete, not resolved. I was
> > expecting a response from you to continue, and bring it to
> > a conclusion. Eg. the obvious evaluation as the DMs stand is, to
> > progress both yours and mine. Not in the sense of competition, but
> > for understanding.
> >
> > Eg. in mine, if Composer and Solution have common attributes (other
> > than that in Person), that would be Normalised into a Subtype cluster.
> > Which leads to yours. If not, four tables are the irreducible
> > (“non-redundant”) set.
>
> I think I have already justified my preference for my version, which is
> what you are saying: I'd use a cluster because that allows me to extend
> the model with information common to both composers and solvers.
>
> Another reason is that the exclusivity between composers and solvers is
> more explicit (because of exclusive subtyping), although
> implementation-wise Solver_Not_Composer would not be a very different
> constraint.

Exactly. We have proved, at least in this classroom exercise, that there is more than one way to model the data correctly, the difference being how each of us perceives the facts in reality (not the “universe of discourse”, God help me, nor our glorious perception [such as the OO/ORM/OOP crowd] ).

I doubt there will be common attributes for Composer and Solution, because each of those is a role; acts of Persons, which is where their commonality is.

Let’s say we add Critic. In both yours and mine, we add one table, that is an easy extension of the existing facts.

> Btw, wouldn't you also need a Composer_Not_Solver constraint
> associated to Composer?

Why ?

Cheers
Derek

Nicola

unread,
Jun 16, 2021, 3:02:53 PM6/16/21
to
On 2021-06-16, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:

>> Btw, wouldn't you also need a Composer_Not_Solver constraint
>> associated to Composer?
>
> Why ?

Consider this:

1. Insert Alice and Bob.
2. Insert bridge problem B with (primary) composer Alice.
3. Insert solution of B by Alice.

(Prevented by Solver_Not_Composer)

4. Insert solution of B by Bob.

5. Insert secondary composer Bob for problem B

What does prevent 5?

Nicola

Derek Ignatius Asirvadem

unread,
Jun 16, 2021, 5:32:55 PM6/16/21
to
> On Thursday, 17 June 2021 at 05:02:53 UTC+10, Nicola wrote:
> > On 2021-06-16, Derek Ignatius Asirvadem wrote:
>
> >> Btw, wouldn't you also need a Composer_Not_Solver constraint
> >> associated to Composer?
> >
> > Why ?
> Consider this:
>
> 1. Insert Alice and Bob.
> 2. Insert bridge problem B with (primary) composer Alice.
> 3. Insert solution of B by Alice.
>
> (Prevented by Solver_Not_Composer)
>
> 4. Insert solution of B by Bob.
>
> 5. Insert secondary composer Bob for problem B
>
> What does prevent 5?

Chronology and Sanity.

Well, the bridge problem was composed, and published, and only then solved. One can’t go back and change history (assert that there was a oh, oh, oh, second composer that was not recorded). It is invalid. That is the same as saying that the bridge problem (in Problem) has changed after the solution has been registered, just as invalid. Try registering a child to a parent and then asserting that the parent is childless.

If [5] is allowed, that would make the composition (Problem + Composers) that Bob solved *NOT* the composition that Bob solved. Done properly, you would have to
-- delete [4] -- to allow a valid change to composition
-- insert [5] -- now the composition is correct
-- if the new composition is in fact solved by Bob
---- insert [4]

This is a common problem with academics, due to their programming, and it shows up in the entire PusGress world. They are so programmed to think in (a) fragments, and (b) that a thing is only-one-way, therefore you must implement the thing the other way in order to be complete. Date & Darwen specifically teach that a FK relation must be implemented in the child (correct) and in the parent (hysterically incorrect, and even SQL is not so stupid). That forms a circular reference on every relation between tables. Now they have trained the masses to think that (c) SQL is stupid, and (d) circular references are normal. Of course, the insane say that only “deferred constraint checking” can solve that problem, that the sane do not have. So then (e) they petition the SQL Committee to include “deferred constraint checking” as a requirement.

Typical snake oil racketeer. First they sell you the cancer, and then they sell you the one-and-only cure-all, that they themselves manufacture. Don’t buy the cancer, and you won’t need the cure-all. But if you do buy the cancer, when the pain is great enough, determine the cancer, and remove it. Don’t buy the cure-all.

(The CCP Virus “vaccine” is not a vaccine by the definition of vaccine; the use of the label is fraudulent. It is an mRNA Transmitter, a completely new kind of treatment: gene therapy. When that is understood, the side-effects can be understood, because they are not the side-effects of a vaccine.)

Obviously, I am not saying that you are doing that particular erroneous thing. I am saying that all academics including you are trained in that insane thinking: that things are fragments; that things are one-way, not safe; that you always have to look for completing the thing by doing it the other way.

The cure is understanding that things are not fragments, they are atoms, made up of fragments. As per the previous discussion. A bridge problem is not a fragment in Person, nor a fragment in Composer, but an Atom in (Person + Composer). You can’t change that Atom after you publish it and open the problem to solutions: if you do, you have to retract and re-publish. And you definitely can’t change the Atom after you have accepted even one Solution.

Unless you have been poisoned by the insane mindset of the droolers: Date & Darwen, and degraded, to think in terms of (i) denying the Atoms, and (ii) seeing only isolated fragments, that (iii) need double and triple definition.

The set of ACID Transactions that make up that database is not shown. As discussed severally, the database is incomplete without them. All such (as above) issues and nuances are easily covered in the Transactions that have to exist.

Let’s say the bridge problem has a textual Description.

(Composition_Add_tr is the same structure as OrderSaleItem_tr discussed previously. Item lines are added singly, not en masse, for GUI sanity, OLTP [low contention], and ACID reasons. There is no “add header” transaction or concept; that is done with the first line. As per the “1” in Predicate[ Problem has 1-to-n Composers ], which constraint is a Transaction constraint.)

Composition_Add_tr ( parm list is 1NF )
-- if Problem[ @ProblemName ] EXISTS
---- if Solver[ @ProblemName ] NOT EXISTS
------ INSERT Composer[] -- the second
-- else
---- INSERT Problem[]
---- INSERT Composer[] -- the first

Composition_Mod_tr ( parm list is PK + changeable attributes only )
-- if Problem[ @ProblemName ] EXISTS
---- if Solver[ @ProblemName ] NOT EXISTS
------ if @Description
-------- UPDATE Problem.Description

Composition_Drop_tr ( parm list is PK only )
-- if Problem[ @ProblemName ] EXISTS
---- if Solver[ @ProblemName ] NOT EXISTS
------ DELETE Composer[ @ProblemName ]
------ DELETE Problem[ @ProblemName ]
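
For illustration, the first of those Transactions rendered as a stored proc (a sketch only; the column lists, parameters and datatypes are assumed, and error handling is kept minimal):

CREATE PROCEDURE Composition_Add_tr
    @ProblemName  VARCHAR(30),
    @ComposerId   INT,
    @IsPrimary    BIT,
    @Description  VARCHAR(255)
AS
BEGIN
    BEGIN TRANSACTION
    IF EXISTS ( SELECT 1 FROM Problem WHERE ProblemName = @ProblemName )
        BEGIN
        -- history cannot be rewritten: no new Composer once a Solution exists
        IF EXISTS ( SELECT 1 FROM Solver WHERE ProblemName = @ProblemName )
            BEGIN
            ROLLBACK TRANSACTION
            RETURN 1
            END
        INSERT Composer ( ProblemName, ComposerId, IsPrimary )
            VALUES ( @ProblemName, @ComposerId, @IsPrimary )    -- the second
        END
    ELSE
        BEGIN
        INSERT Problem ( ProblemName, Description )
            VALUES ( @ProblemName, @Description )
        INSERT Composer ( ProblemName, ComposerId, IsPrimary )
            VALUES ( @ProblemName, @ComposerId, @IsPrimary )    -- the first
        END
    COMMIT TRANSACTION
    RETURN 0
END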

------

Years ago, we discussed that in a genuine Relational database (Codd, not the freaks), DKNF is normal, almost pedestrian; I provide it always. Not the insane R Fagin definition, but the obvious Codd intent: an ordinary progression of Codd’s 3NF (FFD only), and Atomicity. I don’t think you understood it then. I think the understanding is growing in you now. The above is just an example.

Cheers
Derek

Nicola

unread,
Jun 17, 2021, 4:37:51 AM6/17/21
to
On 2021-06-16, Derek Ignatius Asirvadem <derek.a...@gmail.com> wrote:
>> On Thursday, 17 June 2021 at 05:02:53 UTC+10, Nicola wrote:
>> > On 2021-06-16, Derek Ignatius Asirvadem wrote:
>>
>> >> Btw, wouldn't you also need a Composer_Not_Solver constraint
>> >> associated to Composer?
>> >
>> > Why ?
>> Consider this:
>>
>> 1. Insert Alice and Bob.
>> 2. Insert bridge problem B with (primary) composer Alice.
>> 3. Insert solution of B by Alice.
>>
>> (Prevented by Solver_Not_Composer)
>>
>> 4. Insert solution of B by Bob.
>>
>> 5. Insert secondary composer Bob for problem B
>>
>> What does prevent 5?
>
> Chronology and Sanity.

Ok, understood. As you say:

> The set of ACID Transactions that make up that database is not shown.
> As discussed severally, the database is incomplete without them.

Which leaves room for speculation about whether you forgot a constraint
or it was done intentionally.

>All such (as above) issues and nuances are easily covered in the
> Transactions that have to exist.

Ok.

Nicola

Derek Ignatius Asirvadem

unread,
Jun 17, 2021, 5:56:40 AM6/17/21
to
> On Thursday, 17 June 2021 at 18:37:51 UTC+10, Nicola wrote:
> > On 2021-06-16, Derek Ignatius Asirvadem wrote:
> >> On Thursday, 17 June 2021 at 05:02:53 UTC+10, Nicola wrote:
> >> > On 2021-06-16, Derek Ignatius Asirvadem wrote:
> >>
> >> >> Btw, wouldn't you also need a Composer_Not_Solver constraint
> >> >> associated to Composer?
> >> >
> >> > Why ?
> >> Consider this:
> >>
> >> 1. Insert Alice and Bob.
> >> 2. Insert bridge problem B with (primary) composer Alice.
> >> 3. Insert solution of B by Alice.
> >>
> >> (Prevented by Solver_Not_Composer)
> >>
> >> 4. Insert solution of B by Bob.
> >>
> >> 5. Insert secondary composer Bob for problem B
> >>
> >> What does prevent 5?
> >
> > Chronology and Sanity.
>
> Ok, understood. As you say:
>
> > The set of ACID Transactions that make up that database is not shown.
> > As discussed severally, the database is incomplete without them.
>
> Which leaves room for speculation about whether you forgot a constraint
> or it was done intentionally.

That means you do not trust me.

Intentionally. Do you think I could give such detailed reasoning for anything but a known set of problems? Do you not accept /Chronology and Sanity/?

That is a typical class of errors that “university educated” people make; I have corrected hundreds. Actually, two classes of common error. The first is trying to rewrite history: dishonesty without realising that it is. The second is as described in the post.

Eg. an Exposure table, PK ( SecurityId, Date ), with columns ExposureOpening and ExposureClosing. The idiot did not understand that yesterday’s closing Exposure is today’s opening Exposure. I removed ExposureOpening, and renamed ExposureClosing to Exposure. Halved the table size. Then taught him how to write a subquery in SQL, to get yesterday’s Exposure “side-by-side” with today’s Exposure.
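
The subquery in question is along these lines (a sketch; the table and column names are assumed from the description above, and it assumes a row exists for the prior Date):

SELECT  SecurityId,
        Date,
        Exposure  AS ExposureClosing,
        ( SELECT e2.Exposure
            FROM Exposure e2
           WHERE e2.SecurityId = e1.SecurityId
             AND e2.Date       = DATEADD( DAY, -1, e1.Date )
        )         AS ExposureOpening            -- yesterday's closing is today's opening
  FROM  Exposure e1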

Eg. a table for the distance between two GeoLocation points (or two ZipCodes, as I did again, recently). PK ( GeoPoint_1, GeoPoint_2 ). The darling child knew he had a big problem, but he was clueless as to how to fix it. When I diagnosed it as a Normalisation error, he was completely baffled. I was there for just one day, to fix an unrelated performance problem on the Production server, with no time for explanations. So, with permission, I just implemented the Constraint and reloaded the data. Halved the table size.
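
The Constraint implied there is something like this (names assumed): store each unordered pair exactly once, by demanding a canonical ordering of the two points.

CREATE TABLE GeoDistance (
    GeoPoint_1  CHAR(12)      NOT NULL,
    GeoPoint_2  CHAR(12)      NOT NULL,
    Distance    DECIMAL(9,3)  NOT NULL,
    CONSTRAINT GeoDistance_PK
        PRIMARY KEY ( GeoPoint_1, GeoPoint_2 ),
    CONSTRAINT GeoDistance_Pair_Canonical
        CHECK ( GeoPoint_1 < GeoPoint_2 )       -- ( A, B ) is stored once; ( B, A ) is prohibited
    )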

The point is to catch all the errors of the totally redundant “two-way” implementations by newbies, and cancel/delete/remove/destroy-with-fire one half. Diagnosis means looking for constraints that apply to fragments rather than atoms. I gave the explanation because I thought that is what you did. On my side, I am quite used to *NOT* doubling up in the first place.

In any case, all such errors (omissions; accidents; whatever) are exposed in User Acceptance Testing.

We have at least four formal environments in any decent corp:
- Development Server (usually doubles as Disaster Recovery server for Production)
--- Development Rdb (next version, currently being written)
--- Test Rdb (formal test environment for Dev_Rdb, loaded with some Prod data, power user access)
- Production server (heavily secured and optimised)
--- Production Rdb
--- User Acceptance Testing Rdb (next version, that passed Test, with 100% copy of current Production data)

> > The set of ACID Transactions that make up that database is not shown.

Ok. Doc updated. Some, not all Transactions shown.

Cheers
Derek