Now I'd like to know: can I use cached updates? If not, please tell me some
situations where I *can* use cached updates! Or do you mean only for
master-detail relationships?
Can I use cached updates only for the detail in an M-D relationship?
Thanks a lot!
As I said before, I recommend against using cached updates anywhere.
There are several reasons for this:
o Speed problems can generally be fixed without the use of cached
updates.
o Use of TClientDataset is a better solution for "offline" browsing,
IMHO.
o Most users aren't prepared to deal with resolving multiple errors
from a long data-entry/modification session.
o In IBX, unlike the BDE, you don't need CU to be able to write your
own INSERT/UPDATE/DELETE statements.
o The server should always be the final arbiter of what is and is not
allowed into the database, and users should not, IMHO, be fooled into
thinking that a record has been accepted when it will be subsequently
rejected by the server.
That said, in IBX (unlike the BDE, where you have to deal with Paradox
errors) cached updates do work. However, they are tricky in
master-detail situations where the detail dataset is linked to the
master dataset by means of a TDatasource. In this case, you need to
either remove the TDatasource and handle the link manually, or
temporarily disconnect it before applying updates.
But I do recommend you avoid using cached updates at all.
HTH, and please feel free to clarify your question if I'm missing your
point,
-Craig
--
Craig Stuntz Vertex Systems Corporation
Senior Developer http://www.vertexsoftware.com
Delphi/InterBase weblog: http://delphi.weblogs.com
> o Use of TClientDataset is a better solution for "offline" browsing,
> IMHO.
> o Most users aren't prepared to deal with resolving multiple errors
> from a long data-entry/modification session.
I think the second point still stands if you use ClientDatasets, right?
> o The server should always be the final arbiter of what is and is not
> allowed into the database, and users should not, IMHO, be fooled into
> thinking that a record has been accepted when it will be subsequently
> rejected by the server.
Same observation
> That said, in IBX (unlike the BDE, where you have to deal with Paradox
> errors) cached updates do work. However, they are tricky in
> master-detail situations where the detail dataset is linked to the
> master dataset by means of a TDatasource. In this case, you need to
> either remove the TDatasource and handle the link manually, or
> temporarily disconnect it before applying updates.
Even then, I prefer using one of these techniques (cached updates, CDS) in
M/D entry forms. It is simpler to handle the case of canceling all of the
entry. If you use transactions, you block other workstations until you
finish.
> But I do recommend you avoid using cached updates at all.
Why are you so, I don't know how to say it, 'drastic'? I think it would be
better to leave cached updates behind and replace them with CDS, due to the
incoming dbExpress; but for the time being I am happy with cached updates, as
they work for me.
Ernesto Cullen
Yes. However, there are many times when you want to keep a dataset
open in memory for a long time and not modify it -- this is what I meant
by "offline" browsing; sorry if that wasn't clear. In general, I don't
like accumulating large numbers of updates/inserts/deletes with any
client-side tool. That said, however, CDS has built-in features to
facilitate the resolution of update conflicts, so if you *must* do this,
it's somewhat easier than with CU.
> > But I do recommend you avoid using cached updates at all.
>
> Why are you so, I don't know how to say it, 'drastic'?
Because my first experience attempting to use them was with the BDE,
and I've been bitter about it ever since. :)
In all seriousness, I've never found a good use for them that couldn't
be better implemented other ways. I'll remain open to the idea that
there is some good use for CU, but I haven't found it yet. I'm
certainly not saying that they should be dropped from IBX -- that would
break existing code.
When NOT using cached updates, what happens when you execute
IBQuery->Insert();
?
An empty row is inserted, and it won't be populated until the data-aware
controls are filled in by the user, or by code somewhere else in the
program.
So, somewhere else in that same program code like this:
IBQuery->ApplyUpdates();
IBDatabase->CommitRetaining();
should appear.
It's true that the program MUST watch for update errors and solve the
problem for the user (maybe the combination of values in the columns
violates integrity or duplicates a unique value, etc.)
But, other than NOT using data-aware controls, getting the data from data-
unaware controls, and setting the parameters of an insert query, how else
would a decent database program be written?
I have a large number of applications smoothly running right now (about
80-100) with BDE/ODBC and some being written to work with Interbase with
IBX. They behave safely (they do what the user intends to do, and tell
her/him when it was NOT possible to do what she/he wanted).
So, what am I missing? They ALL use cached updates... In other words, users
have been trained to all-or-nothing work events, so as to keep consistency
in the database.
Franz J Fortuny
The Dataset.State changes from dsBrowse to dsInsert, and a blank row is
created *on the client.* Nothing happens on the server. The best way
to understand this is to hook up a TIBSQLMonitor and watch what's going
on. The INSERT SQL statement does not occur on the server until you
Post.
> It is an empty row being inserted, and won't be populated until the data
> aware controls are populated by the user or somewhere else by code in the
> program.
Right, but the empty row is not on the server (yet).
>
> So, somewhere else in that same program code like this:
>
> IBQuery->ApplyUpdates();
> IBDatabase->CommitRetaining();
>
> should appear.
Almost. ApplyUpdates is not necessary when not using cached updates.
CommitRetaining or Commit should appear -- most people do this
immediately after Posting.
> It's true that the program MUST watch for update errors and solve the
> problem for the user (maybe the combination of values in the columns
> violates integrity or duplicates a unique value, etc.)
When not using CU, the server will do this for you -- makes maintenance
much simpler, IMHO. And the constraint checks are much faster since
they use indices.
> But, other than NOT using data aware controls, getting data from the Data
> Unaware Controls and setting the parameters to an insert query, how would a
> decent database program be written?
I'm not sure I understand your question -- I don't think there's
anything wrong with data-aware controls.
> I have a large number of applications smoothly running right now (about
> 80-100) with BDE/ODBC and some being written to work with Interbase with
> IBX. They behave safely (they do what the user intends to do, and tell
> her/him when it was NOT possible to do what she/he wanted).
That's good. :)
> So, what am I missing? They ALL use cached updates... In other words, users
> have been trained to All or Nothing work events, so as to keep consistency
> in the database.
You may not be missing anything. A well-behaved program is a
well-behaved program. But what happens when you change a constraint on
the server -- do you have to change your client as well? That could be
a lot of maintenance.
My principal objection to CU, to summarize, is that I'm not aware of
anything which can be done with CU that can't be handled (IMHO, better)
by other means.
Thanks for responding and helping!
"Iwan Haryadi" <send...@telkom.net> wrote in message
news:3a9b04ec$2_2@dnews...
I have extensively used in ODBC/BDE projects the following construct:
TDatabase *db;
TQuery *q1, *q2, *q3;
.....
try
{
    db->StartTransaction();
    q1->ApplyUpdates();
    q2->ApplyUpdates();
    q3->ExecSQL();
    db->Commit();
}
catch (Exception &e)
{
    db->Rollback();
    // tell the user that something that shouldn't happen has happened
}
The above has consistently (for 3 years now) generated 100% trustworthy
transactions, in which all three TQuery objects (or more) MUST either all be
applied or all be rolled back.
I have NOT tested this using IBX and Interbase. The above has been tested
using ODBC/BDE.
BTW, I don't know if the above will work with Interbase.
I suspect the above construct is NOT POSSIBLE with Interbase/IBX.
The program MUST have an active transaction, so you don't START a
transaction and then do the apply updates (with IBX). Not possible. The
transaction has already started long before you get to the try {} catch(...)
{} construct.
But you do db->Commit() or db->CommitRetaining() (with Interbase).
Why would anybody want to have all three operations applied or rolled back?
The number of reasons is infinite, but one very simple is:
q1 = affects the totals for a cash register program
q2 = affects the totals for a credit card system (a debit is generated)
q3 = affects the rows of those articles sold
q4 = affects the quantity on hand of articles sold...
And, of course, either all of the above are committed, or NONE. You don't
want to charge a customer for something that won't show up in the cashier's
report. You don't want to decrease the quantity on hand for products that
never actually left the store, or leave it unchanged for products that
did... You want the system to always reflect a CONSISTENT image of reality.
That's why that kind of transaction should be used, and should be easily
handled with any language or database-management objects.
This is a question for the IBX gurus: Is the above possible?
It should be. It MUST be, if IBX / Interbase is to be considered a serious
commercial product. Interbase is supposed to have the benefit of being able
to either commit or roll back across databases: that is, one transaction
object can be used as THE transaction object of queries from two databases.
If one of the databases fails, the other will be rolled back along with the
one that failed. That's what is called a "two-phase commit".
Please, IBX / Interbase Gurus, clarify this!
Franz J Fortuny
Sure. There are two ways you can do what you want:
1. Hook all three queries up to the same transaction at the outset, or
2. Use cached updates combined with an OnUpdateRecord event, or
TClientDataset/TDatasetProvider combined with a BeforeUpdateRecord event,
to farm out the updates to a different dataset.
Either one will work.
> It should be. It MUST be, if IBX / Interbase is to be considered a serious
> commercial product. Interbase is supposed to have the benefit of being able
> to either commit or roll back across databases: that is, one transaction
> object can be used as THE transaction object of queries from two databases.
> If one of the databases fails, the other will be rolled back along with the
> one that failed. That's what is called a "Two Phase Commit".
This is also supported in IBX. But it's different from cached updates.
HTH,
-Craig
--
Craig Stuntz (TeamB) Vertex Systems Corporation