Referential Integrity in MV systems


Kevin Powick

May 5, 2015, 4:57:50 PM
to mvd...@googlegroups.com
As the concern of companies and regulators grows for improved data security and integrity, MV seems to face challenges due to the lack of built-in referential integrity (RI).  Certainly, file triggers have been around for years, but are developers actually using them to ensure data integrity?  And what about the cost to performance?

Perhaps performance costs for triggers are in line with those of relational systems, so that it becomes a moot point.  I know of some large relational databases where performance is so critical that they do not use any RI features of the database.  Our own use of triggers is rather minimal and does not impact the low-volume files on which they are used.

It's fine to say that RI in MV is mostly maintained at the application level, but that can require the developer to have a lot of knowledge before being able to safely run programs against the database.  I don't know about you, but I've run across (and maybe even written) more than a few programs with names like REPOST.[screwed-up filename].

It would be interesting to hear what others are doing for RI in MV, and what challenges they've faced with performance as well as any compliance issues.

--
Kevin Powick

Tony Gravagno

May 6, 2015, 7:44:44 PM
to mvd...@googlegroups.com, kpo...@gmail.com

As you allude, there is a fundamentally different mindset between MV and Relational, where we build the RI into the application, while RDBMS RI is at the DBMS level. But how well do we actually implement this responsibility that we gladly accept? I rarely see RI centralized in MV apps, with a single common subroutine for CRUD for a specific file or application group of files. The result is that most data issues that I see come from an update in one place not reflected in another. I think this is because MV people simply don't think about RI as a concept but more as an obligation to just get it right. If we thought in terms of RI, I think people would immediately modularize all of their CRUD code.
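
For illustration, a minimal sketch of what I mean by modularized CRUD - a single gateway subroutine per file - with the file name, attribute positions, and checks all invented for the example:

    * CUSTOMER.IO - hypothetical single point of update for a CUSTOMERS file.
    * Every program calls this instead of writing or deleting CUSTOMERS directly.
    SUBROUTINE CUSTOMER.IO(ACTION, CUST.ID, CUST.REC, ERR.TEXT)
       COMMON /CUSTIO/ OPENED, CUST.FH
       ERR.TEXT = ''
       IF OPENED NE 1 THEN
          OPEN 'CUSTOMERS' TO CUST.FH ELSE
             ERR.TEXT = 'Cannot open CUSTOMERS'
             RETURN
          END
          OPENED = 1
       END
       BEGIN CASE
          CASE ACTION = 'WRITE'
             IF CUST.REC<1> = '' THEN
                ERR.TEXT = 'Customer name (attribute 1) is required'
                RETURN
             END
             WRITE CUST.REC ON CUST.FH, CUST.ID
          CASE ACTION = 'DELETE'
             DELETE CUST.FH, CUST.ID
          CASE 1
             ERR.TEXT = 'Unknown action ':ACTION
       END CASE
       RETURN
    END

Callers do CALL CUSTOMER.IO('WRITE', ID, REC, ERR) and check ERR, so the rules live in one program instead of fifty.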

How then would we invoke that modularized code? Whether modularized or not, developers call from the application to their file update code, and set a flag if they realize that something has gone wrong. Hopefully that realization comes before a bunch of items have already been written, requiring a transaction reversal. This comes back to recent comments about how seldom anyone uses transaction bracketing. Compare that application-oriented validation to updating a file with the expectation that the update will be rejected if there is a problem. I've never seen anyone use the On Error clause of the Write and Delete statements, and I rarely ever see triggers being used. To summarize for those wondering what I'm talking about, you put a trigger on a d-pointer, and all write/delete statements will force that code to be executed - if there is a problem the trigger code can throw an error, and the On Error clause acts like an Else so that the application code can detect that the update failed. A trigger can also set a value in named common which can be checked, so you can do a normal Write and then check to see if a flag like FailureText was set.
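
Roughly - and with the caveat that the trigger subroutine's argument list, and how you attach it to the d-pointer, differ on every platform - the shape is something like this, with the ORDERS/CUSTOMERS files and attribute positions invented:

    * Hypothetical update trigger on the ORDERS file: reject any order whose
    * customer key does not exist. Treat the argument list as a sketch only.
    SUBROUTINE ORDERS.TRIGGER(NEW.ID, NEW.REC, REJECT)
       COMMON /RI.FLAGS/ FAILURETEXT, CUST.FH, CUST.OPENED
       REJECT = 0
       FAILURETEXT = ''
       IF CUST.OPENED NE 1 THEN
          OPEN 'CUSTOMERS' TO CUST.FH ELSE
             FAILURETEXT = 'Cannot open CUSTOMERS'
             REJECT = 1
             RETURN
          END
          CUST.OPENED = 1
       END
       * attribute 1 of the order is assumed to hold the customer key
       READ CUST.REC FROM CUST.FH, NEW.REC<1> ELSE
          FAILURETEXT = 'Order ':NEW.ID:' references missing customer ':NEW.REC<1>
          REJECT = 1
       END
       RETURN
    END

and on the application side the On Error clause picks the failure up:

    * the caller declares the same named common so it can see FAILURETEXT
    WRITE ORDER.REC ON ORDERS.FH, ORDER.ID ON ERROR
       CRT 'Order not written: ':FAILURETEXT
    END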

I think the reason why people don't use triggers is that there's no marketing for it. It's one of those post-R83 features that most people don't understand and therefore don't use. If this and other features were profiled in articles, and really encouraged by the DBMS vendors, then I think we would see more uptake. But with the functionality being obscure, not too well documented, and rarely discussed, it's just not sitting up there at the top of the BASIC toolkit like Locate, OConv, Execute, etc.

About trigger performance, this is something we learn about in the school of hard knocks. The developer needs to remember that every update will go through this one bit of code, so they must optimize as much as possible. That means relying on named common to maintain state, another not-too-well-used feature. It means avoiding redundant file-open operations, re-reading of static data, and Executing queries that take more than a second. (Considerations for using triggers are similar to those for web services.) It also means recognizing when a mass delete or update is being performed, and perhaps disabling the trigger for the duration of that one operation. I don't think there are any issues with the performance of triggers themselves, but abstracting file IO to a trigger can easily allow someone to forget how much is going on with every file access - and that can lead to performance issues.
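
As a rough illustration of both points - keep state in named common so it survives between calls, and let a trusted batch job switch the checks off - the top of the trigger might look like this (again, names invented, and the extra BYPASS.RI slot would of course have to be added to the same named common everywhere):

    * At the top of the trigger: bail out cheaply during a sanctioned mass update,
    * and only open files on the first call of the session.
       COMMON /RI.FLAGS/ FAILURETEXT, CUST.FH, CUST.OPENED, BYPASS.RI
       IF BYPASS.RI = 1 THEN RETURN
       IF CUST.OPENED NE 1 THEN
          OPEN 'CUSTOMERS' TO CUST.FH ELSE
             FAILURETEXT = 'Cannot open CUSTOMERS'
             REJECT = 1
             RETURN
          END
          CUST.OPENED = 1
       END

while the mass purge/update program brackets its run with the flag:

       COMMON /RI.FLAGS/ FAILURETEXT, CUST.FH, CUST.OPENED, BYPASS.RI
       BYPASS.RI = 1              ;* checks off for the duration of this run
       GOSUB PURGE.OLD.ORDERS     ;* hypothetical batch routine
       BYPASS.RI = 0              ;* and straight back on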

Triggers are also a problem in that they are different for every platform, and within each platform they are subject to bugs. With an RDBMS it's simple - all queries can fail. RDBMS users know this and plan for it, and RDBMS providers test the heck out of this. With MV where standards collide with platform distinctiveness, and where this feature isn't as critical, the result is that people simply choose not to use something that isn't portable and isn't guaranteed to work anyway.

Finally, as to compliance - yes, the only shops where I see extensive use are those which deal with health and money (specifically as their line of business, as opposed to ordinary accounting): that includes collection agencies, physician billing, etc. These companies are subject to HIPAA, SOX, and other regulations which require them to know who queried and updated data, and how. It's more natural for them to look to triggers than to the application, not only because triggers allow them to catch every read/write from BASIC, the editor, etc., but also because auditors understand the concept of triggers, and if they know triggers are being used they won't ask more questions about why this MV thing is so different.

Verbosely yours,
T

Wols Lists

May 7, 2015, 7:03:05 AM
to mvd...@googlegroups.com
On 07/05/15 00:44, Tony Gravagno wrote:
>
> As you allude, there is a fundamentally different mindset between MV and
> Relational, where we build the RI into the application, while RDBMS RI
> is at the DBMS level. But how well do we actually implement this
> responsibility that we gladly accept? I rarely see RI centralized in MV
> apps, with a single common subroutine for CRUD for a specific file or
> application group of files. The result is that most data issues that I
> see come from an update in one place not reflected in another. I think
> this is because MV people simply don't think about RI as a concept but
> more as an obligation to just get it right. If we thought in terms of
> RI, I think people would immediately modularize all of their CRUD code.

And as I see it, a well-designed MV app gets "real world" RI for free.
If I have a FILE for cars, I delete a record, and all information about
that car is gone. I create a record and placeholders for all attributes
are created.

What we don't have is "model" integrity - which in the relational world
often causes people to enter fake data because the system won't accept
the real data otherwise.

Take my CARS file for instance. I may have a field called OWNER. In a
relational database, you'll often have a link between two tables that RI
enforces. But what happens if the car is brand new and hasn't been sold?
Or if the owner dies?

In MV in the second case, it's normal to delete the owner and yes, that
causes a data integrity problem in that the database expects the car to
have an owner. But it flags up a real-life integrity issue, in that the
car really does not have an owner, and something needs to be done over
and above fixing the database!

MV enforces object integrity by default. It does not enforce arbitrary
policy integrity. Imho, that's an advantage, in that we better match the
real world.

Cheers,
Wol

Kevin Powick

May 7, 2015, 11:14:44 AM
to mvd...@googlegroups.com
On Thursday, 7 May 2015 07:03:05 UTC-4, Wol wrote:
 
MV enforces object integrity by default. It does not enforce arbitrary
policy integrity. Imho, that's an advantage, in that we better match the
real world.


Please.  What does this even mean?  MV doesn't enforce anything by default.  And neither does a RDBMS.

In both systems, it is the developers of the data models that decide what RI is implemented against those models.

In MV, RI is enforced through BASIC - as user applications, file triggers, and, in some systems, the Bridge correlative.

In a RDBMS, RI can be enforced by user applications, but is typically enforced through database provided mechanisms (e.g. foreign key constraints).

The advantage of an MV system that employs file triggers, or of an RDBMS that uses internal RI mechanisms, is that data integrity is enforced regardless of the means by which one attempts to update the data.

The typical MV system, with RI enforced only by user applications, is not protected from errant code or from data altered by other means (ed, wed, up, ODBC, etc.). The RI of an RDBMS, however, is enforced regardless of the program or utility used to alter the data.

My point is not that MV is lacking in its ability to enforce RI. It has powerful file triggers. What it doesn't seem to have is developers that take advantage of triggers.  Why?  Is it due to lack of: Trust? Performance? Experience/knowledge?

--
Kevin Powick

Wols Lists

May 7, 2015, 11:24:12 AM
to mvd...@googlegroups.com
On 07/05/15 16:14, Kevin Powick wrote:
> On Thursday, 7 May 2015 07:03:05 UTC-4, Wol wrote:
>
>
> MV enforces object integrity by default. It does not enforce arbitrary
> policy integrity. Imho, that's an advantage, in that we better match
> the
> real world.
>
>
> Please. What does this even mean? MV doesn't enforce anything by
> default. And neither does a RDBMS.

In an RDBMS the data about a car, let's say, is spread across multiple
tables. If you want to delete that car from the DB you have to delete a
whole bunch of rows across a whole bunch of tables. This is where RDBMS
referential integrity shines - cascading deletes, can't create a row
here without a row there, enforcing foreign keys blah blah blah. All
totally unnecessary in the MV world, because in an RDBMS it's quite easy
to have half a car because somebody forgot to create a bunch of rows in
the relevant tables.

In MV, if your FILE matches a real-world object, that can't happen.
Either you've got a record that references the car, or you haven't.

In other words, just like RDBMSs can't live without query optimisers,
that are a waste of space in the MV world, RDBMSs can't live without
Referential Integrity enforcement, *most* of which is also a waste of
space in the MV world.

Think about my comment about a car having an owner. In the real world,
they are two separate objects, with distinct and separate lifetimes. In
an RDBMS, it would be normal to require that a car has an owner, which
leads to referential NON-integrity (certainly as far as the real world
is concerned) should an event such as the owner dying occur. And the
possibility might not even cross an RDBMS programmer's mind, whereas to
an MV guy it's obvious.

Cheers,
Wol

geneb

May 7, 2015, 11:27:37 AM
to mvd...@googlegroups.com
On Thu, 7 May 2015, Kevin Powick wrote:

> My point is not that MV is lacking in its ability to enforce RI. It has
> powerful file triggers. What it doesn't seem to have is developers that
> take advantage of triggers. Why? Is it due to lack of: Trust?
> Performance? Experience/knowledge?

I think it's because developers may simply not be aware they exist.

g.

--
Proud owner of F-15C 80-0007
http://www.f15sim.com - The only one of its kind.
http://www.diy-cockpits.org/coll - Go Collimated or Go Home.
Some people collect things for a hobby. Geeks collect hobbies.

ScarletDME - The red hot Data Management Environment
A Multi-Value database for the masses, not the classes.
http://scarlet.deltasoft.com - Get it _today_!

Symeon Breen

May 7, 2015, 12:11:27 PM
to mvd...@googlegroups.com
That kind of makes sense, perhaps, Wol. However, you can create foreign-key constraints on nullable columns, so the constraint is only enforced when a value is present. So for your example, owner can be blank, or if it does have an entry it must be a valid owner object identifier.

Kevin Powick

May 7, 2015, 12:13:52 PM
to mvd...@googlegroups.com

On Thursday, 7 May 2015 11:24:12 UTC-4, Wol wrote:
 
In an RDBMS the data about a car, let's say, is spread across multiple
tables. If you want to delete that car from the DB you have to delete a
whole bunch of rows across a whole bunch of tables. This is where RDBMS
referential integrity shines - cascading deletes, can't create a row
here without a row there, enforcing foreign keys

Yes, ensuring that data is consistent is exactly what RI and data integrity is all about.
  
All totally unnecessary in the MV world, because in an RDBMS it's quite easy 
to have half a car because somebody forgot to create a bunch of rows in
the relevant tables.

How can it be easy to "forget" to add data to necessary tables if an RDBMS uses RI?  This is completely contrary to what you said in your first paragraph about how RDBMS RI shines.
 
In MV, if your FILE matches a real-world object, that can't happen.
Either you've got a record that references the car, or you haven't.

In your contrived example, maybe.  A real-world scenario for that car record might involve other files for parts, sales, purchasing, manufacturing, etc.  So, it's not as simple as saying all data for a car is represented by a single car item in the car-file.

You as the developer must remember intimate details of the relationship between your car-file and other related files in the system. If you forget, or are unaware of such relationships, you can easily destroy the integrity of your data.


In other words, just like RDBMSs can't live without query optimisers,
that are a waste of space in the MV world,

Meaningless in terms of this RI conversation.
 
RDBMSs can't live without
Referential Integrity enforcement, *most* of which is also a waste of
space in the MV world.

It's odd to me that you don't believe RI is important.  Data is data, regardless of the database system in which it resides.  The question is, how do we, as MV developers, ensure that our data accurately reflects the models we've created for it?


Think about my comment about a car having an owner. In the real world,
they are two separate objects, with distinct and separate lifetimes. In
an RDBMS, it would be normal to require that a car has an owner, which
leads to referential NON-integrity

How about an example that has constraints applied to the data, instead of just making up rules that apply to one database type and not the other?  If the car/owner relationship doesn't have to be enforced, then it isn't enforced in either database.

Let's go the other way.  If a car/owner relationship does have to be enforced,  how are you going to ensure this in MV without using RI triggers?  I'll tell you.  You will have to know/remember this relationship for every program you write that deals with these files.  Not so tough in this contrived example, but once you have a system with hundreds of files and many relationships, it can become a "trick" to keep it all straight in your head.

--
Kevin Powick

Kevin Powick

May 7, 2015, 12:51:00 PM
to mvd...@googlegroups.com
On Thursday, 7 May 2015 11:24:12 UTC-4, Wol wrote:
 
In an RDBMS the data about a car....

I once had a guy tell me that as soon as you bring a car analogy into a debate, you immediately lose the debate. ;)

So, let's consider a different scenario.  You have a MV system with files for customers and orders.  How do you prevent a customer with open orders from accidentally being deleted?
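
(To be clear, a delete trigger could centralize that check in one place - something like the hypothetical sketch below, which assumes the application keeps the customer's open order IDs multivalued in attribute 10 - but how many shops actually install one?)

    * Hypothetical delete trigger on the CUSTOMERS file. Argument list and the
    * attachment mechanism vary by platform; attribute 10 is an invented
    * multivalued list of open order IDs maintained by the application.
    SUBROUTINE CUSTOMERS.DEL.TRIGGER(CUST.ID, OLD.REC, REJECT)
       COMMON /RI.FLAGS/ FAILURETEXT
       REJECT = 0
       FAILURETEXT = ''
       IF OLD.REC<10> # '' THEN
          FAILURETEXT = 'Customer ':CUST.ID:' still has open orders'
          REJECT = 1
       END
       RETURN
    END

Then every DELETE against the file - from a program, the editor, wherever the platform fires triggers - gets the same answer.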

--
Kevin Powick

 

Tony Gravagno

May 7, 2015, 3:09:43 PM
to mvd...@googlegroups.com, ge...@deltasoft.com
That's what I was saying about marketing for the features.
I think many people don't know triggers exist, or if they know the word they don't quite get the end-to-end usage.
Those who do know what a trigger is may not use them simply because they already have the application developed and don't want to shift everything around.
Those considering triggers for new development may have been turned off by prior issues in various platforms. As an example, in D3 triggers depend on FSI/VME details and Flash compilation, the On Error clause is largely unknown and therefore not widely tested in the field, and I'm sure there's confusion about how the InputErr statement (the huh?) is used to signal a trigger failure.

The scenario above isn't unique to triggers, for D3 it applies to OSFI, transaction bracketing, security, macros and paragraphs (major features of other platforms, hardly ever used in D3), job scheduling, performance tuning, shell variables, executing BASIC from the OS, using COM from BASIC, linking third-party DLLs to %functions, sockets, pipes, inter-system communications, and many TCL commands for reporting.

Once again I cite this as being a failure of Marketing departments for the DBMS providers, who don't bother to educate users on product features after the sale has been made. It's not that these people are ignoring this aspect of the products. I think they're completely unaware that this is a necessary part of product Marketing. Martin Phillips published an awesome series of tech notes for QM and that provoked a few similar efforts by other vendors. But this needs to be an ongoing effort on the part of each provider - and I'd hope that the DBMS providers can drive this from the Support and Engineering departments, where people who actually know how the stuff works can explain it. How can these companies possibly assume that people who buy the platform simply know everything about how it works - especially each new feature that comes out? Over the years I tried to tell RD/TL Engineering that they're wasting their time adding new features to D3 if Sales/Marketing/Documentation/Support departments don't collaborate to tell anyone that the features exist or how they work. No docs means no one uses the functionality - and then some features get deprecated because they aren't used. Um, duh. If you want to know the result of that, just look at where D3 has landed in terms of usage and respect in this industry. Rocket can do much better - as can the other DBMS providers.

T



Will Johnson

May 7, 2015, 3:10:15 PM
to mvd...@googlegroups.com
At least in Universe is it not true that if you define all your files as SQL Tables, that the RI will in fact be enforced?

Wols Lists

May 7, 2015, 3:36:42 PM
to mvd...@googlegroups.com
You don't have a delete option? When I wrote an accounts system,
deleting clients was the job of the year-end program. (And it wouldn't
delete a client who had any activity that year - an inactive client
would get zeroed out one year end, then any client that was zeroed out
would get dropped the next year end.)

But I did say *MOST* RI is not needed in a MV system. Integrity *within*
an object tends to happen as a result of half-way decent design with MV.
Integrity *between* objects can be enforced with triggers. Or as I'm
more used to, a simple LIST command could produce an integrity check
report. That can be far more revealing. After all, what's to stop a
clerk entering dummy/fake/wrong data simply to force a data-entry past
an annoying integrity check? That sweeps problems under the carpet,
while the report will (hopefully) get them addressed and fixed.

And what's to stop someone spec'ing the RI the wrong way round so
deleting the customer succeeds and deletes all the open orders :-)

As Tony bangs on about, it's about separating the business logic and the
data entry. Just as a lot of people don't bother spec'ing/doing RI in
MV, I bet a lot of people don't bother spec'ing/doing it in RDBMSs.

As I say, though, MV gives you maybe 90% of the RI of an RDBMS just by
designing the schema properly - you need a lot of integrity checks
within an RDBMS to make sure you don't end up with half a car. With MV,
either the car exists or it doesn't. (That doesn't mean you can't have
wrong data in either version :-)

Cheers,
Wol

Kevin Powick

May 7, 2015, 3:38:02 PM
to mvd...@googlegroups.com

On Thursday, 7 May 2015 15:10:15 UTC-4, Will Johnson wrote:
At least in Universe is it not true that if you define all your files as SQL Tables, that the RI will in fact be enforced?


I hope a UV person will reply.  I do not know.

--
Kevin Powick

Wols Lists

May 7, 2015, 4:21:56 PM
to mvd...@googlegroups.com
On 07/05/15 20:09, Tony Gravagno wrote:
> Those who do know what a trigger is may not use them simply because they
> already have the application developed and don't want to shift
> everything around.

Which is poor vision ... (and bad practice :-)

The only system I developed from scratch was an accounts system, which
did a brute-force integrity check and reports for the accountant; part
of the RI was the existence of the printouts :-) which could be
cross-checked.

But if I had ever taken over someone else's program that had any
noticeable problems, my first attack would have been a bunch of listings
to find integrity problems, and be it trigger or idesc, a routine to
intercept writes, do an integrity check, and chuck out a stack dump if
it found any problems. Don't interfere with the write, but give the
programmer a host of debugging info to track down the problem.
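
Something like this fragment inside the intercept routine, purely as an illustration - file variables assumed open elsewhere (named common), and what call/stack information you can capture is platform-specific:

    * Non-blocking check: validate the reference, never reject the write,
    * just log enough context to chase the problem later.
       READ CUST.REC FROM CUST.FH, NEW.REC<1> ELSE
          LOG.REC = ''
          LOG.REC<1> = TIMEDATE()
          LOG.REC<2> = 'Order ':NEW.ID:' written with missing customer ':NEW.REC<1>
          LOG.REC<3> = @SENTENCE            ;* or whatever stack/sentence info your platform exposes
          WRITE LOG.REC ON RI.LOG.FH, NEW.ID:'*':DATE():'*':TIME()
       END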

Then, if you can afford the performance hit, leave the checks in place
in production and treat any log as a serious error.

Cheers,
Wol

Kevin Powick

May 7, 2015, 6:01:29 PM
to mvd...@googlegroups.com

On Thursday, 7 May 2015 15:36:42 UTC-4, Wol wrote:
 
You don't have a delete option? When I wrote an accounts system,
deleting clients was the job of the year-end program. (And it wouldn't
delete a client who had any activity that year - an inactive client
would get zeroed out one year end, then any client that was zeroed out
would get dropped the next year end.)

That's policy, not a mechanism to prevent accidental deletion by errant code or an uneducated developer.
 

But I did say *MOST* RI is not needed in a MV system. Integrity *within*
an object tends to happen as a result of half-way decent design with MV.

Not really.  How does one enforce a rule such as "the Credit.Limit attribute cannot contain a value less than zero"?  It's done with BASIC applications.  And anyone who makes an error in their code, or doesn't "know" the rule, can corrupt the integrity of the data.
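
Every program that writes a customer record has to carry (and agree on) something like this, where CREDIT.LIMIT.AMC is whatever attribute your dictionary says holds the limit:

    * repeated in every update program unless it is centralized in a subroutine or trigger
    IF CUST.REC<CREDIT.LIMIT.AMC> < 0 THEN
       ERR.TEXT = 'Credit limit cannot be negative'   ;* and the program must skip the WRITE
    END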

Integrity *between* objects can be enforced with triggers. Or as I'm
more used to, a simple LIST command could produce an integrity check
report.

A report that someone has to know about, run, read and interpret correctly. Again, this is not a mechanism that prevents integrity problems.  It is a way to look for them. 
 
That can be far more revealing. After all, what's to stop a
clerk entering dummy/fake/wrong data simply to force a data-entry past
an annoying integrity check?

How are required fields any different in MV systems than in others?  I'm sure you're not suggesting that clerks enter whatever they wish and supervisors can pore over reports later to find their errors.
 
And what's to stop someone spec'ing the RI the wrong way round so
deleting the customer succeeds and deletes all the open orders :-)

Yes, the RI spec could be wrong, but then only in one place, not in multiple applications implementing different versions of a specification simultaneously.  I think the latter would be a bigger mess and, presumably, more difficult to track down.
 
Just as a lot of people don't bother spec'ing/doing RI in
MV, I bet a lot of people don't bother spec'ing/doing it in RDBMSs.

I'm sure that's true, but RI is definitely more within the culture of RDBMSs.

 
As I say, though, MV gives you maybe 90% of the RI of an RDBMS just by
designing the schema properly

This is really just wishful thinking.  Don't forget that an RDBMS can also define integrity checks at the field level.  MV requires our programs to do that.  Unless you use triggers, there is nothing stopping you from posting an email address to the account balance attribute.  But at least you can run some reports to find those errors, right?
 
you need a lot of integrity checks
within an RDBMS to make sure you don't end up with half a car. With MV,
either the car exists or it doesn't.

An incorrect and simplistic view based on a contrived model that suits your, so far, unproven beliefs.

--
Kevin Powick

Peter McMurray

May 8, 2015, 12:45:27 AM
to mvd...@googlegroups.com
Kevin said

"So, lets consider a different scenario.  You have a MV system with files for customers and orders.  How do you prevent a customer with open orders from accidentally being deleted?"
Why would anyone ever write a delete for a customer file, or pretty near any other file for that matter? Doesn't everyone have history, such as invoices, that goes way back? Tax rules require seven years, and I know of court cases a lot older than that. I have been called on to produce evidence of fraudulent use of heating oil in diesel trucks on the highway, for example - someone at the relevant department with a brain figured out that a chicken farm that happened to have an associate company with a Sydney-Brisbane freight run had used enough heating oil to boil every chook in China :-)
We set a closed flag on a disused account, such as an item in the product price file, and associated updates, such as orders and stock movements, will fail at the entry level when the key is checked - as it must be every time it is used.
Customer files have a similar flag that allows deletion if the customer has never been used, such as when initially set up. However, once the account has been used it can only be closed, and it can only be re-opened within a set time frame. A closed account can accept payments if there is a balance due. A maintenance program can be run at any time - monthly is normal - that validates the balance against the movements; the same check is done when a user reviews an account history.
All updates to invoices and receipts are done via an accounting batch when the operator is happy that they have got the entries correct. A debtor account should never be updated on commit of the invoice because it is normally milliseconds later that the operator realises that they have the wrong account. COMMIT SHOULD NEVER BE A DEFAULT ON RETURN OR ENTER - I wish an enduring hell to all those clowns that do web forms that way. 

Kevin Powick

May 8, 2015, 9:29:13 AM
to mvd...@googlegroups.com

On Friday, 8 May 2015 00:45:27 UTC-4, Peter McMurray wrote:
Kevin said
 
Why would anyone ever write a delete for a customer file or pretty near any other file for that matter. 

Wrong question.  The question is, how do you prevent it from happening in MV without some type of RI?

Let's face it.  Most MV systems out there in live/production environments do not take advantage of the minimal RI features that some MV implementations provide.  The validity of their data relies 100% upon the accuracy of the MV BASIC programs written to maintain the data.

That's all well and good, and I know we've been doing it that way since day one.  However, as concerns about data integrity/security continue to grow, it becomes more difficult for MV people to answer the tough questions coming from the auditors.

--
Kevin Powick 

Wols Lists

May 8, 2015, 12:38:43 PM
to mvd...@googlegroups.com
On 08/05/15 05:45, Peter McMurray wrote:
> All updates to invoices and receipts are done via an accounting batch
> when the operator is happy that they have got the entries correct. A
> debtor account should never be updated on commit of the invoice because
> it is normally milliseconds later that the operator realises that they
> have the wrong account. COMMIT SHOULD NEVER BE A DEFAULT ON RETURN OR
> ENTER - I wish an enduring hell to all those clowns that do web forms
> that way.

That accounts program I wrote worked that way - by accident rather than
design :-) Because we kept on mislaying supplier invoices, and failing
to bill them on to clients, my "invoices received" program asked,
against every invoice, "is this a subcontractor, who do I bill it on
to", and stuffed it onto a proforma invoice for the client. Come billing
day, the accounts people went through the proformas, checked them, and
committed them.

That one little "bright idea" must have saved the company thousands ...

Cheers,
Wol

Ross Ferris

May 17, 2015, 7:48:25 PM
to mvd...@googlegroups.com, ge...@deltasoft.com
Tony,

a few minor points:

* last time I checked, triggers didn't have to be flash compiled (though may be the case with D3/Windows?)
* changes/updates made by triggers inside transaction bracketing aren't captured as part of the transaction - unless this has changed recently (I didn't test in 10.x)

Ian McGowan

Jul 29, 2015, 10:03:30 PM
to Pick and MultiValue Databases, ge...@deltasoft.com
Personally, this is my answer, with a healthy dose of "working on an application that pre-dates triggers by 30 years" ;-)  To go through an existing mature application and retrofit it with constraints and triggers is madness.  As you add new features that require new files, it may begin to make sense, but it's still an uphill battle.

Every MV system I've worked on has had a variety of "verify" programs that attempt to make sure pointers point to the right place, that the thing pointed to exists, etc.  Comparing that to constraints in a relational system, or waving constraints away as not important, is seriously missing the point of how much value those features add from a data-integrity POV.  I don't think that constraints or other database-level mechanisms get you that much as a security/compliance mechanism. The primary controls auditors want to see are that the application restricts what the end-users are allowed to do.  There's typically someone who can disable the database controls to fix the system, and then you need yet another layer of auditing/security to make sure the watchmen aren't being naughty. Quis custodiet ipsos custodes?
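
Those verify programs tend to share one skeleton - a hypothetical sketch, with file names and attribute positions invented:

    * VERIFY.ORDERS - report orders whose customer key no longer exists.
    * Assumes attribute 1 of an order holds the customer ID.
    OPEN 'ORDERS' TO ORDERS.FH ELSE
       CRT 'Cannot open ORDERS'
       STOP
    END
    OPEN 'CUSTOMERS' TO CUST.FH ELSE
       CRT 'Cannot open CUSTOMERS'
       STOP
    END
    ORPHANS = 0
    SELECT ORDERS.FH
    LOOP
       READNEXT ORDER.ID ELSE EXIT
       READ ORDER.REC FROM ORDERS.FH, ORDER.ID THEN
          READ CUST.REC FROM CUST.FH, ORDER.REC<1> ELSE
             CRT 'Order ':ORDER.ID:' points at missing customer ':ORDER.REC<1>
             ORPHANS = ORPHANS + 1
          END
       END
    REPEAT
    CRT ORPHANS:' orphaned order(s) found'
    END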

http://www.imperva.com/Products/DatabaseActivityMonitor is a solution to that, typically installed and run by a different group than the DBAs.  This "just" needs to log everything done from outside the application in a tamper-proof way, not try to enforce data integrity.

Jackie Burhans

Jul 30, 2015, 3:41:28 PM
to Pick and MultiValue Databases, wjhons...@gmail.com
See chapter 7, page 13 of the UniVerse SQL Administration for DBAs guide. It talks there about referential integrity.