
Want to write your SQL statements and even stored procedures in pure C#?


Chad Z. Hower aka Kudzu

Jun 14, 2005, 7:50:14 AM
New article and free source code I've posted:

How would you like to write your SQL statements and even stored procedures in pure C#? For
example:

xQuery.Where = CustomerTbl.Col.NameFirst == "Chad";

The full statement is resolved at compile time, and is type safe. But let's not limit it to something
simple; how about this:

xQuery.Where =
    (CustomerTbl.Col.NameFirst == "Chad" | CustomerTbl.Col.NameFirst == "Hadi")
    & CustomerTbl.Col.CustomerID > 100 & CustomerTbl.Col.Tag != View.Null;

Look too good to be true? Read on. Not only is this possible, but much more. And did I mention, it's
also database independent and can work with any SQL-based ADO.NET provider?

http://www.codeproject.com/useritems/CSharpSQL.asp
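
For the curious: the mechanism behind this is C# operator overloading. Here is a minimal sketch of the idea (simplified for illustration only - this is not the actual Indy.Data source, and the Condition and StringColumn names are hypothetical):

public class Condition {
    private readonly string _sql;
    public Condition(string sql) { _sql = sql; }
    // & and | are C#'s non-short-circuit boolean operators; unlike &&
    // and || they can be overloaded, which is why the examples above
    // combine predicates with single & and |.
    public static Condition operator &(Condition a, Condition b) {
        return new Condition("(" + a._sql + " AND " + b._sql + ")");
    }
    public static Condition operator |(Condition a, Condition b) {
        return new Condition("(" + a._sql + " OR " + b._sql + ")");
    }
    public override string ToString() { return _sql; }
}

public class StringColumn {
    private readonly string _name;
    public StringColumn(string name) { _name = name; }
    // == and != build SQL predicates instead of comparing references.
    // C# requires these two operators to be overloaded as a pair.
    public static Condition operator ==(StringColumn c, string v) {
        return new Condition(c._name + " = '" + v.Replace("'", "''") + "'");
    }
    public static Condition operator !=(StringColumn c, string v) {
        return new Condition(c._name + " <> '" + v.Replace("'", "''") + "'");
    }
    public override bool Equals(object o) { return ReferenceEquals(this, o); }
    public override int GetHashCode() { return _name.GetHashCode(); }
}

With something like that in place, CustomerTbl.Col.NameFirst == "Chad" compiles to an object that renders as a SQL predicate when the query is finally built (the real library's rendering is more involved than this sketch).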


--
Chad Z. Hower (a.k.a. Kudzu) - http://www.hower.org/Kudzu/
"Programming is an art form that fights back"

Blog: http://blogs.atozed.com/kudzu

Rogas69

Jun 14, 2005, 11:02:53 AM
Frankly speaking I wouldn't say it is inline SQL. Just another wrapper.
Peter


> xQuery.Where = CustomerTbl.Col.NameFirst == "Chad";

> xQuery.Where =

Chad Z. Hower aka Kudzu

Jun 14, 2005, 1:00:32 PM
"Rogas69" <rogas69@no_spamers.o2.ie> wrote in news:e2rRdHPcFHA.580
@TK2MSFTNGP15.phx.gbl:

> Frankly speaking I wouldn't say it is inline SQL. Just another wrapper.

Actually it can do inline SQL too, but I think you've missed the point.

It's a lot more than another wrapper; we have tons of those already. See the syntax below? That's C#,
compiled code which later evaluates to SQL for you. It's early bound and type safe.

>> xQuery.Where =
>> (CustomerTbl.Col.NameFirst == "Chad" | CustomerTbl.Col.NameFirst ==
>> "Hadi")
>> & CustomerTbl.Col.CustomerID > 100 & CustomerTbl.Col.Tag != View.Null;

Rogas69

Jun 15, 2005, 7:43:10 AM
No, I reviewed the code and still can't see anything special. The syntax
below is maybe nice, but how many lines of code do you have to produce to
enable it? And if the database schema changes, you have to rewrite classes like
CustomerTblBase in your example.
By inline SQL I mean something similar to the PowerBuilder feature.

Peter

Chad Z. Hower aka Kudzu

Jun 15, 2005, 11:11:46 AM
"Rogas69" <rogas69@no_spamers.o2.ie> wrote in
news:#$j#g8ZcFH...@TK2MSFTNGP14.phx.gbl:
> No, I reviewed the code and still can't see anything special. The

Without offending - then you either truly missed the where clause part, or you don't understand what
is going on.

Chad Z. Hower aka Kudzu

Jun 15, 2005, 11:11:09 AM
"Rogas69" <rogas69@no_spamers.o2.ie> wrote in
news:#$j#g8ZcFH...@TK2MSFTNGP14.phx.gbl:
> No, I reviewed the code and still can't see anything special. The
> syntax below is maybe nice, but how many lines of code you have to

I had to write 0. It's generated, but if I do want to do it by hand, it's easy:

[TableName("Customer")]
public class CustomerTblRow : View.Row {
    public DbInt32 CustomerID;
    public DbString NameFirst;
    public DbString NameLast;
    public DbInt32 Tag;
    public DbInt32 CountryID;
}

public class CustomerTblCol : View.Columns {
    public ColumnInt32 CustomerID;
    public ColumnString NameFirst;
    public ColumnString NameLast;
    public ColumnInt32 Tag;
    public ColumnInt32 CountryID;
}

The second class is optional, and only needed if I build the where clauses.

A full example is like this:

[Select("select `CustomerID`, `NameLast` from `Customer`")]
public class CustomerRow : View.Row {
    public DbInt32 CustomerID;
    public DbString NameLast;
}

[Test]
public void InlineSQL() {
    using (Transaction xTx = new Transaction(_DB)) {
        using (Query xCustomers = new Query(_DB, typeof(CustomerRow))) {
            xCustomers.SelectAll();
            foreach (CustomerRow xCustomer in xCustomers) {
                int i = xCustomer.CustomerID;
                string s = xCustomer.NameLast;
            }
        }
        xTx.Commit();
    }
}

> produce to enable it? And if database schema changes, you have to
> rewrite classes like CustomerTblBase in your example.

No no no... you truly miss the point. The classes remain in sync with the database, and since it's all
early bound and type safe, any changes in the database are found and resolved at COMPILE
TIME.

This is based on the newer code since the article, but even in the article it's not that different.

Frans Bouma [C# MVP]

Jun 15, 2005, 1:24:15 PM

In sync with the database? Just like that? I don't think so. :) Either
you have to update class mappings based on a changed schema, or you
have to migrate the schema based on class changes. I can tell you,
that's far from 'automatic'.

Although I support dyn. sql of course, the problem I have with your
reasoning is that you claim your solution is truly database
independent, though at the same time it requires things in the schema (like
a horrible sequence table in SQL Server...)

FB

--
------------------------------------------------------------------------
Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com
My .NET blog: http://weblogs.asp.net/fbouma
Microsoft MVP (C#)
------------------------------------------------------------------------

Frans Bouma [C# MVP]

Jun 15, 2005, 1:25:09 PM
Chad Z. Hower aka Kudzu wrote:

> "Rogas69" <rogas69@no_spamers.o2.ie> wrote in
> news:#$j#g8ZcFH...@TK2MSFTNGP14.phx.gbl:
> > No, I reviewed the code and still can't see anything special. The
>
> Without offending - then you either truly missed the where clause
> part, or you dont understand what is going on.

I think he means that what you present as an invention is not that
'new'; every O/R mapper for the past 10-15 years has used the technique
you presented (albeit with different syntax).

Chad Z. Hower aka Kudzu

Jun 15, 2005, 3:48:32 PM
"Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
news:xn0e3jba...@news.microsoft.com:
> In sync with the database? Just like that? I don't think so. :)

Yes it is. I think you are not understanding exactly what's going on. I'll be writing more articles soon.
But this "system" is in use in several Fortune 500s, some of which are processing millions of
transactions a day. But what I've exposed in the article is just a peek at the DAL layer.

> Either
> you have to update class mappings based on a changed schema, or you

They aren't class mappings per se - it's an object WRAPPER of the DB tables. When you embed
SQL in your app and you change the DB you cannot find mismatches until run time. With this
system, it regenerates the interfaces and all conflicts are found at compile time and then easily
resolved from there.

> have to migrate the schema based on class changes. I can tell you,
> that's far from 'automatic'.

Because you are not seeing the whole picture. Until I finish the other articles, remember that what
you have seen so far is just a peek at the DAL, nothing more. So your assumptions of how it's used
in a bigger-picture scenario are invalid, because you are assuming that it's a 1:1
replacement for whatever you use now.

> Although I support dyn. sql of course, the problem I have with
> your reasoning is that you claim your solution is truly database

It's fully DB independent. We have systems running on the older version which run on Oracle,
SQL Server, DB2, and Firebird. In fact the demo runs on Firebird and SQL Server, and they are
fairly different databases. Firebird is closer to Oracle.

> independent, though at the same time require things in the schema
> (like a horrible sequence table in sqlserver... )

First of all, if you read the article you would note that this is optional and not a requirement.
Second of all - sequences are not horrible; identity fields are the item that is poor. Identity
fields cause bottlenecks in transactions and multi-table inserts, etc., by forcing record inserts
prematurely. Identity/autoincs are widely regarded as a not very good practice.

Given the choice of implementing autoinc/identity fields in the DBs that use sequences, or implementing
sequences in the others, it's obvious which choice to use.
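
To illustrate the timing point, here is a rough sketch (not Indy.Data code; it assumes a Firebird generator named GEN_CUSTOMER_ID exists):

// With a sequence, the key is known before any row is written, so a
// master row and all of its detail rows can be prepared fully in
// memory and inserted together in one short transaction at the end.
IDbCommand xCmd = _DB.DbConnection.CreateCommand();
xCmd.CommandText = "SELECT GEN_ID(GEN_CUSTOMER_ID, 1) FROM RDB$DATABASE";
int xNewCustomerID = Convert.ToInt32(xCmd.ExecuteScalar());
// ... build the Customer row and its child rows using xNewCustomerID ...
// With an identity field, the master row must be inserted first just to
// learn its key, which opens the transaction and takes locks earlier.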

Chad Z. Hower aka Kudzu

Jun 15, 2005, 3:40:40 PM
"Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in news:xn0e3jbatnzog0001@news.microsoft.com:

> I think he means that what you present as an invention is not that
> 'new', every O/R mapper for the past 10, 15 years uses the technique
> you presented (albeit different syntaxis).

Well yes, and no. From that viewpoint, yes. The idea is certainly nothing new, but the
implementation is quite unique. IMO normal approaches fall far short of the goal. The whole
ability, especially the mapping of conditionals, is very unique.

Other DALs either resort to an awkward syntax, or more commonly allow free-text strings and
literals to handle the filters etc., which defeats much of the usefulness.

Frans Bouma [C# MVP]

Jun 16, 2005, 3:39:21 AM
Chad Z. Hower aka Kudzu wrote:

> "Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
> news:xn0e3jbatnzog0001 @news.microsoft.com:
> > I think he means that what you present as an invention is not
> > that 'new', every O/R mapper for the past 10, 15 years uses the
> > technique you presented (albeit different syntaxis).
>
> Well yes, and no. From that view point yes. The idea is nothing new
> certainly, but the implementation is quite unique. IMO normal
> approaches fall far short of the goal. The whole ability especially
> on the mapping of conditionals is very unique.

Mapping of conditionals?

> Other DAL's eitehr resort to an ackward syntax, or more commonly
> allow free text strings and literals to handle the filters etc, which
> defeats much of hte usefulness.

heh, trust me, I've seen a lot of O/R mapper query languages/systems
and yours is far from unique. Neither is mine though, although it too is
type safe.

Chad Z. Hower aka Kudzu

Jun 16, 2005, 3:50:19 AM
"Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in news:xn0e3kafd2y474000@news.microsoft.com:
> Mapping of conditionals?

Where =
    (CustomerTbl.Col.NameFirst == "Chad" | CustomerTbl.Col.NameFirst == "Hadi")
    & CustomerTbl.Col.CustomerID > aMinID & CustomerTbl.Col.Tag != View.Null;

> heh, trust me, I've seen a lot of O/R mapper query languages/systems
> and yours is far from unique. Neither is mine though, although it's too
> type safe.

The basic part isn't unique - but the syntax and level of integration into C# is. Try writing the above in
any other, and you'll find either an odd syntax that doesn't really represent the way we think about it,
or it will be a free string, which is bad.

The above is C# code, completely resolved at compile time. It's not evaluated until run time, though,
and only when it's needed. You can even do things like this:

Where = CustomerTbl.Col.CustomerID > aMinID;
if (aNullTagsOnly) {
    Where = Where & CustomerTbl.Col.Tag == View.Null;
}

And break it into pieces. If this is not unique as you claim, then please point me to another such
system that offers such a level of integration in C#, aside from C Omega.

Frans Bouma [C# MVP]

Jun 16, 2005, 4:04:25 AM
Chad Z. Hower aka Kudzu wrote:

> "Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
> news:xn0e3jba...@news.microsoft.com:
> > In sync with the database? Just like that? I don't think so. :)
>
> Yes it is. I think you are not understanding exactly whats going on.
> I'll be writing more articles soon. But this "system" is in use in
> several Fortune 500's, some of which are processing millions of
> transactions a day. But what Ive exposed in the article is just a
> peak at the DAL layer.

Chad, I know what I'm talking about, you can drop the marketing BS. If
you define a class in code with n fields, mapped on a given table T,
and the table changes (gets a new PK, field type changes, new fields
are added, or 2 fields are removed), your class in code is out of sync
with the table it's mapped on. So you have to update the class in code.
That's not going to be automatic: either the system has to migrate the
meta-data and re-generate the class (as I do), or the developer has to
manually adjust the class.

You can't get away with "but my code will take care of the
differences", as that's not going to work with, for example, fields which
are removed from the table.

> > Either
> > you have to update class mappings based on a changed schema, or you
>
> They arent class mappings per se - its an object WRAPPER of the DB
> tables. When you embed SQL in your app and you change the DB you
> cannot find mismatches until run time. With this system, it
> regenerates the interfaces and all conflicts are found at comile time
> and then easily resolved from there.

Like I said, every O/R mapper takes care of that at compile time...
to some extent. Even your code. For example, if you remove a field,
and you have queries defined which use that field, it won't compile,
but that's natural. Type-safe query languages make sure errors like
that are found during compile time, but that's nothing new.

> > have to migrate the schema based on class changes. I can tell you,
> > that's far from 'automatic'.
>
> Because you are not seeing the whole picture.

haha, well I think I do, Chad.

> Until I finish the
> other articles, remember that what you have seen so far is just a
> peek at the DAL, nothing more. So your assumptions of how its used in
> a bigger picture sceanrio are invalid because you are assuming in
> your system that its a 1:1 replacement for whatever you use now.

Let me say this: in the past 3 years I've worked full time on one of
the market-leading O/R mappers of .NET. You can be sure I've seen every
problem you try to solve as well. I've seen your codeproject article
and in there, I haven't seen anything new nor special. That's ok of
course, though don't try to sell known stuff as new things, because that's
not what people here need.

You made a solution for the data-access problem, good. There are
others on the planet who have thought of that solution as well, and
perhaps way before you did.

> > Although I support dyn. sql of course, the problem I have with
> > your reasoning is that you claim your solution is truly
> > database
>
> Its fully DB independent. We have systems running on the older
> version which run on Oracle, SQL server, DB2, and Firebird. In fact
> the demo runs on Firedbird and SQL server, and they are fairly
> different databases. Firebird is closer to Oracle.

'DB independent' is only true in the scope of the abstraction level
the dal engine offers. For example, if you have a like-filter on
sqlserver in your application and you port your app to Oracle, you'd
better have a case insensitive flag in your dal engine or it won't work.

DB independent also suggests that it doesn't matter how the database
schema is constructed, but that's not true. There are always sacrifices
to be made: no bit/unique_identifier types on SqlServer if you want to
use Oracle as well as Oracle doesn't have equivalent types.

> > independent, though at the same time require things in the schema
> > (like a horrible sequence table in sqlserver... )
>
> First of all, if you read the article you would note that this is
> optional and not a requirement. Second of all - sequences are not
> horrible, identity fields are the item that is poor. Identity fields
> cause bottlenecks in transaction and multitable inserts, etc by
> forcing record inserts prematurely. Identity/autoincs are widely
> regarded as a not very good practice.

I was referring to the fact that a sequence table was required on
SQL Server to get autonumber field functionality.

By definition, sequences and identity columns are the same: identity
fields use an internal sequence (the table sequence), but in general
it's the same, with one exception: you can have multiple sequenced
fields per table - not that you want to use that often, but you can.

I don't see how identity fields cause bottlenecks in transactions: for
sequences you too have to grab the new sequence value. And about
premature inserts: I guess you're referring to gaps in the sequence
if a transaction rolls back? With sequences you have the same thing, as
that's a logical outcome of the fact that once a value is generated, it
can't be rolled back, as that would be unsafe.

> Given the choice of implementing autoinc/id fields in other DB's that
> use sequences, or implenting sequences in the others, its obvious
> which choice to use.

sequence objects require more maintenance (the source of a sequenced
value is not defined with the destination), and the 2nd db in the
world, DB2, offers both, for example, so I don't think identity is all
that bad compared to sequences.

(I'm not referring to the semantic issue of sequenced values as
PKs).

Chad Z. Hower aka Kudzu

Jun 16, 2005, 4:29:34 AM
"Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
news:xn0e3kb5...@news.microsoft.com:
> out-of-sync with the table it's mapped on. So you have to update the
> class in code. That's not going to be automatic, either the system has
> to migrate the meta-data and re-generate the class (as I do) or the
> developer has to manually adjust the class.

But it is automatic. And it's not something unique to this system - you merely regenerate the wrappers.
The wrappers are isolated from any user code.

> You can't get away with "but my code will take care of the
> differences" as that's not going to work with for example fields which
> are removed from the table.

No, you miss the point. Nothing magic fixes up the differences. What it does is early binding -
you detect all problems at compile time rather than *hoping* to find them at run time. What's
unique about this? Well, nothing really compared to most other similar solutions - except that
nearly every other solution is only early bound on some aspects, and filters etc. fail because
they use strings.

> Like I said, every O/R mapper takes care of that at compile
> time...

In fact very few do. They do for columns, but most don't for entity names, filters, etc. So they
solve some of the problem, but still leave conflicts to be found at runtime.


> Let me say this: in the past 3 years I've worked full time on one of
> the market leading O/R mappers of .NET. You can be sure I've seen
> every problem you try to solve as well. I've seen your codeproject
> article and in there, I haven't seen anything new nor special. That's
> ok of course, though don't try sell known stuff as new things, because
> that's not what people here need.

If you say it's not new - please show me another O/R that is fully early bound, filters and all.
I've seen very few, and those that I've seen have quite an awkward syntax.

Second, this is not really an O/R although it may appear so. Future articles will clear this up.

> You made a solution for the data-access problem, good. There are
> others on the planet who have thought of that solution as well, and
> perhaps way before you did.

Please point me to one that is 100% early bound, and has a "native" type syntax.

> DB independent also suggests that it doesn't matter how the
> database
> schema is constructed, but that's not true. There are always
> sacrifices to be made: no bit/unique_identifier types on SqlServer if
> you want to use Oracle as well as Oracle doesn't have equivalent
> types.

Yes of course. You have to use a certain amount of commonality. If you use DB-specific
column types etc., of course it won't port.

> I was refering to the fact that a sequence table was required on
> sqlserver to get autonumber field functionality.

No, again. It's not required anywhere by the framework, and the article states that. It's an optional
function that users can use. The framework does not use it anywhere.

> By definition, sequences and identity columns are the same:
> identity
> fields use an internal sequence (the table sequence) but in general
> it's the same, with one exception: you can have multiple sequenced
> fields per table, well not that you want to use that often, but you
> can.

They are similar, yes, but identity field values cannot be obtained until after an insert.

> I don't see how identity fields cause bottlenecks in
> transactions: for
> sequences you too have to grab the new sequence value. And about
> prematurely inserts: I guess you're referring to gaps in the sequence
> if a transaction rolls back? With sequences you've the same thing, as
> that's a logical outcome of the fact that once a value is generated,
> it can't be rolled back as that would be unsafe.

Read any of the many articles here regarding the issues with identity fields - they describe the
complexities they add much better than I can. As far as bottlenecks - they can create bottlenecks by
forcing inserts of master rows prematurely and thus forcing premature locks or even transaction
begins.

> sequence objects require more maintenance (the source of a
> sequenced
> value is not defined with the destination), and the 2nd db in the

Trivial compared to the complexity identity fields inject into code.

> world, DB2 offers both for example, so I don't think Identity is all
> that bad compared to sequences.

I don't say they are evil. In fact if I were able to design it, I'd build it so that fields are identity, but
that you can prefetch values. Maybe that's what DB2 does; it's been a while since I worked with
DB2. I primarily work with SQL Server, Oracle, and Firebird.

> (I'm not referring to the semantical issue of sequenced values as
> PK's).

I'm not referencing that either. I don't care what the values are; my issue with identity fields is that
they are not available until after an insert, and looking across DB platforms, sequences are
much easier to simulate than identity fields.
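
For reference, simulating a sequence on SQL Server with a table amounts to something like this sketch (the SequenceValues table and its columns are hypothetical, not a schema the article mandates):

// T-SQL's compound assignment "UPDATE t SET @v = Col = Col + 1" bumps
// the counter and captures the new value in a single atomic statement.
IDbCommand xCmd = _DB.DbConnection.CreateCommand();
xCmd.CommandText = "UPDATE SequenceValues SET @PNewValue = Value = Value + 1"
    + " WHERE Name = 'Customer'";
IDbDataParameter xParam = xCmd.CreateParameter();
xParam.ParameterName = "@PNewValue";
xParam.DbType = DbType.Int32;
xParam.Direction = ParameterDirection.Output;
xCmd.Parameters.Add(xParam);
xCmd.ExecuteNonQuery();
int xNextID = Convert.ToInt32(xParam.Value);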

Rogas69

Jun 16, 2005, 5:23:30 AM
>
> Without offending - then you either truly missed the where clause part, or
> you dont understand what
> is going on.

Without offending, I have the impression that you think that others do not
understand you at all. I say the syntax of your wrapper, although it may
resemble SQL, is not the inline SQL which you were advertising in the first
post. On the other hand, your code down there concatenates strings, so the
difference is only the level of invocations until you get to the database.

This is an example from your TableView.cs
StringBuilder xSQL = new StringBuilder("INSERT INTO `" + TableName + "` (");
StringBuilder xValues = new StringBuilder(") VALUES (");

There is another thread discussing pros and cons of string concatenation vs
using parameters. No comments.
Peter


Chad Z. Hower aka Kudzu

Jun 16, 2005, 5:40:56 AM
"Rogas69" <rogas69@no_spamers.o2.ie> wrote in
news:#3GBJTlc...@TK2MSFTNGP14.phx.gbl:
> Without offending, I have impression that you think that others do not
> understand you at all. I say - the syntax of your wrapper although may

Actually - others don't fully understand it, as evidenced by many messages here. Pieces are
understood - but not all aspects. I'm taking this input now and rewriting significant portions of the
article.

> first post. On the other hand, your code down there concatenates
> strings so the difference is only level of invocations until you get

The library does - not the developer. And the developer works only with objects that are typed and
completely resolved at compile time.

Chad Z. Hower aka Kudzu

Jun 16, 2005, 5:39:32 AM
"Rogas69" <rogas69@no_spamers.o2.ie> wrote in
news:#3GBJTlc...@TK2MSFTNGP14.phx.gbl:
> first post. On the other hand, your code down there concatenates
> strings so the difference is only level of invocations until you get
> to database.

There are many more differences - it's not a matter of how it gets to the database, it's a matter of
how the interface appears to the developer.

> This is an example from your TableView.cs
> StringBuilder xSQL = new StringBuilder("INSERT INTO `" + TableName +
> "` ("); StringBuilder xValues = new StringBuilder(") VALUES (");
>
> There is another thread discussing pros and cons of string
> concatenation vs using parameters. No comments.

String concatenation is late bound - I'm not speaking of performance issues, but of the fact that if
there are database differences, free-text strings in developer code are bad, as errors will not be
caught until run time. But again - there are many DALs and typed data readers that can do something
similar, especially regarding the reading of data. Moving to inserts and updates is not a big leap. The
major "new" item is the use of where clauses such as I posted.

Frans Bouma [C# MVP]

Jun 16, 2005, 6:58:42 AM
Chad Z. Hower aka Kudzu wrote:

> "Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
> news:xn0e3kafd2y474000 @news.microsoft.com:
> > Mapping of conditionals?
>
> Where =
> (CustomerTbl.Col.NameFirst == "Chad" |
> CustomerTbl.Col.NameFirst == "Hadi") &
> CustomerTbl.Col.CustomerID > aMinID & CustomerTbl.Col.Tag !=
> View.Null;

oooh, a list of predicates using badly overridden operators ('|' and
'%' for example are operators of C# already). Also, this doesn't work
in VB.NET, which doesn't support operator overloading (.NET 1.x).

> > heh, trust me, I've seen a lot of O/R mapper query
> > languages/systems and yours is far from unique. Neither is mine
> > though, although it's too type safe.
>
> The basic part isnt unique - but the syntax and level of integration
> into C# is. Try writing the above in any other, and you'll find
> either an odd syntax that doesnt really represent the way we think
> about it, or it will be a free string, which is bad.

Heh, no :)
WHERE clauses are predicate lists. So if I can produce a list of
predicates, I represent the thinking of a WHERE clause.

Without using operator overloading, so it also works in VB.NET, and it
uses objects which are extensible with your own predicate objects (so
you can add a specific SOUNDEX predicate object, for example, if you want
to). Let's use the most verbose way:
PredicateExpression orFilter = new PredicateExpression();
orFilter.Add(PredicateFactory.CompareValue(
    CustomerFieldIndex.NameFirst, ComparisonOperator.Equal, "Chad"));
orFilter.AddWithOr(PredicateFactory.CompareValue(
    CustomerFieldIndex.NameFirst, ComparisonOperator.Equal, "Hadi"));
PredicateExpression filter = new PredicateExpression();
filter.Add(orFilter);
filter.AddWithAnd(PredicateFactory.CompareValue(
    CustomerFieldIndex.CustomerID, ComparisonOperator.GreaterThan, aMinID));
filter.AddWithAnd(PredicateFactory.CompareNull(
    CustomerFieldIndex.Tag, true));

The fun thing is, I can create factories for these in my app, as
they're just objects. No need for fancy operators.
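
For example (a sketch that reuses only the predicate calls shown above):

// A tiny filter factory: because predicates are plain objects, common
// filters can be packaged behind ordinary methods and composed freely.
public class CustomerFilters {
    public static PredicateExpression FirstNameIn(string nameA, string nameB) {
        PredicateExpression result = new PredicateExpression();
        result.Add(PredicateFactory.CompareValue(
            CustomerFieldIndex.NameFirst, ComparisonOperator.Equal, nameA));
        result.AddWithOr(PredicateFactory.CompareValue(
            CustomerFieldIndex.NameFirst, ComparisonOperator.Equal, nameB));
        return result;
    }
}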

> The above is C# code, completely resolved at compile time. Its not
> evaluated until run time though, and evaluted when its needed. You
> can even do things like this:
>
> Where = CustomerTbl.Col.CustomerID > aMinID;
> if (aNullTagsOnly) {
> Where = Where & CustomerTbl.Col.Tag == View.Null;
> }
>
> And break it into pieces. If this is not unique as you claim, then
> please point me to another such system that offers such a level of
> integration in C#, aside from C Omega.

This is truly not unique. My predicate objects are checked at compile
time, and evaluated per predicate at runtime. It's not integrated in
the language because it has to work with VB.NET as well.

There are a couple of O/R mappers which use operator overloading for
filters, though they have a completely different syntax on VB.NET, which is
IMHO unacceptable. Also, you re-used existing operators, not what I'd
do.

Chad Z. Hower aka Kudzu

Jun 16, 2005, 5:43:14 AM
As per the rewrite, here are two such items that I hope will convey certain things more
clearly.

Ever wished you could truly embed SQL functionality in your C# code without using strings or late
binding? Imagine being able to write complex where clauses purely in C#:

xQuery.Where =
    (CustomerTbl.Col.NameFirst == "Chad" | CustomerTbl.Col.NameFirst == "Hadi")
    & CustomerTbl.Col.CustomerID > 100 & CustomerTbl.Col.Tag != View.Null;

Look closely. This is C# code, not SQL. It's resolved and bound at compile time, but evaluated at
run time. In this article I shall provide an introduction to this method and full source code for
your use.

---------------------

Indy.Data is neither an O/R mapper, nor a code generator in strict terms. Instead Indy.Data is
something different, and something similar. In this article this will not be completely apparent
yet, so for this article consider Indy.Data to be a Data Access Library (DAL) only.

Indy.Data does support code generation to keep the DAL objects in sync with your database, however
it does not create mappings to business logic, nor does it generate its own SQL during generation.

Indy.Data is not truly an O/R mapper either. Indy.Data is more flexible in that it is not restricted to
parameterized queries or stored procedures. Indy.Data also does not provide for query construction
and mapping, but instead relies on existing objects in the database, or provided SQL.

Indy.Data provides object wrappers on a 1:1 basis for database tables, views, stored procedures, or
SQL statements. These objects can be regenerated at any time to be kept in sync with database
changes. In future articles I will be detailing how Indy.Data can be used to perform the same
functions as a code gen or O/R mapper, but in a slightly different way. However for the scope
of this article assume Indy.Data to be an advanced implementation of an ADO.NET command
object. Where you would use an ADO.NET command object, Indy.Data is suitable.

Frans Bouma [C# MVP]

Jun 16, 2005, 7:18:28 AM
Chad Z. Hower aka Kudzu wrote:

> "Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
> news:xn0e3kb5...@news.microsoft.com:
> > out-of-sync with the table it's mapped on. So you have to update the
> > class in code. That's not going to be automatic, either the system
> > has to migrate the meta-data and re-generate the class (as I do) or
> > the developer has to manually adjust the class.
>
> But it is automatic. And its not something unique to this system -
> you merely regenerate the wrappers. The wrappers are isolated from
> any user code.

that's not automatic, you have to re-run a tool. What if I rename a
table, or a field? Then my code breaks as the wrapper class gets a new
name. That's not what a developer wants.

> > You can't get away with "but my code will take care of the
> > differences" as that's not going to work with for example fields
> > which are removed from the table.
>
> No, you miss the point. Nothing magic fixes up the differences. What
> it does is by early binding - you detect all problems at compile time

> rather than hoping to find them at run time. Whats unique about this?

nothing.

> Well nothing really from most other similar solutions - except that
> nearly every other solution is only early bound on some aspects, but
> filters etc fail because they use strings.

no they don't. A lot of the O/R mappers out there don't use strings; they
offer an object-based query system instead.

> > Let me say this: in the past 3 years I've worked full time on
> > one the market leading O/R mappers of .NET. You can be sure I've
> > seen every problem you try to solve as well. I've seen your
> > codeproject article and in there, I haven't seen anything new nor
> > special. That's ok of course, though don't try sell known stuff as
> > new things, because that's not what people here need.
>
> If you say its not new - please show me another O/R that is fully
> early bound, filters and all. Ive seen very few, andt those that
> I've seen have quite an ackware syntax.

LLBLGen Pro does, and so do a lot of others. For example Genome uses a
similar approach to yours (with overloaded C# operators).

> > You made a solution for the data-access problem, good. There
> > are others on the planet who have thought of that solution as well,
> > and perhaps way before you did.
>
> Please point me to one that is 100% early bound, and has a "native"
> type syntax.

#define 'native'. SQL is set-based; C# is imperative, non-set-based.
So what's native? SQL-like, C#-like, or a language/system which
represents what the SQL elements mean semantically?

> > By definition, sequences and identity columns are the same:
> > identity
> > fields use an internal sequence (the table sequence) but in general
> > it's the same, with one exception: you can have multiple sequenced
> > fields per table, well not that you want to use that often, but you
> > can.
>
> They are similar yes, but identity fields cannot be obtained until
> after an insert.

so? You have to insert the PK side of an FK first anyway, or do you
require that there aren't any FKs?

> > I don't see how identity fields cause bottlenecks in
> > transactions: for
> > sequences you too have to grab the new sequence value. And about
> > prematurely inserts: I guess you're referring to gaps in the
> > sequence if a transaction rolls back? With sequences you've the
> > same thing, as that's a logical outcome of the fact that once a
> > value is generated, it can't be rolled back as that would be unsafe.
>
> Read any of the many articles here regarding the issues with identity
> fields - they describe the complexities the add much better than I
> can. As far as bottlnecks - they can create bottlenecks by forcing
> inserts of master rows prematurely and thus forcing premature locks
> or even transaction begins.

Best practices say that you should use FK constraints on live databases.
FK constraints force you to insert the PK side of the FK first anyway,
then sync the value with the FK side and insert the FK side. That's not
tic-tac-toe indeed: you have to sort the objects which have to be
inserted first, and you have to implement synchronization logic so that when
I do:

CustomerEntity newCustomer = new CustomerEntity();
newCustomer.CompanyName = "Solutions Design";
//...

OrderEntity newOrder = new OrderEntity();
newOrder.OrderDate = DateTime.Now;
//...
newCustomer.Orders.Add(newOrder);
adapter.SaveEntity(newOrder);

I get the graph saved in the right order: first Customer, then order,
pk of customer, an identity, properly synced with newOrder.CustomerID,
and all in a single transaction.

Where are the bottlenecks? What's inserted prematurely?

> > sequence objects require more maintenance (the source of a
> > sequenced
> > value is not defined with the destination), and the 2nd db in the
>
> Trivial compared to the complexity identify fields inject into code.

I don't see how. Unless you don't use FK constraints.

> > (I'm not referring to the semantical issue of sequenced values
> > as PK's).
>
> Im not referencing that either. I dont care what the values are, my
> issue with identity fields is that they are not available utnil after
> an insert, and looking at cross DB platforms, sequences are much
> easier to simulate than identity fields.

The insert problem isn't a problem, as when FK constraints are used,
you have to insert the PK side of the FK constraint first anyway, so
even with a sequence you first have to insert the PK side, then the FK
side.

Frans Bouma [C# MVP]

Jun 16, 2005, 7:20:17 AM
Chad Z. Hower aka Kudzu wrote:

> "Rogas69" <rogas69@no_spamers.o2.ie> wrote in
> news:#3GBJTlc...@TK2MSFTNGP14.phx.gbl:
> > Without offending, I have impression that you think that others do
> > not understand you at all. I say - the syntax of your wrapper
> > although may
>
> Actually - otehrs dont fully understand it as evidenced by many
> messages here. Pieces are understood - but not all aspects. Im taking
> this input now and rewriting significant portions of the article.

Please, come down from your high horse. You only humiliate yourself in
public by your statements.

Chad Z. Hower aka Kudzu

Jun 16, 2005, 7:42:29 AM
"Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
news:xn0e3kg8...@news.microsoft.com:
> that's not automatic, you have to re-run a tool. What if I rename
> table, or a field? Then my code breaks as the wrapper class gets a new
> name. That's not what a developer wants.

Actually that's exactly what I want. One of the key issues here is that you have a different
methodology than I do regarding this. I don't want any magic thing to fix up DB changes. It's rare in
our systems that something is as simple as a name change, and whenever something changes I want to
know and manually fix it. When I get the rest of the articles online, this will make more sense. I
doubt you will agree with the methodology either, but you should be able to see the point of view.

>> If you say its not new - please show me another O/R that is fully
>> early bound, filters and all. Ive seen very few, andt those that
>> I've seen have quite an ackware syntax.
>
> LLBLGen Pro does, and so do a lot of others. For example Genome

Via procedures, yes. Which is fine if you like that syntax, but personally, given the choice as posed
in the other message, I know which one I prefer.

> uses a similar approach as you do (with overloaded C# operators).

URL on Genome please. Google finds DNA stuff. I think I've seen Genome in the past at some
point.

> #define 'native'. SQL is set-based, C# is imperative, non-set
> based.

Sorry, in that context it would be C#. Developers are currently working in C# + SQL. To move
towards C# only is a very good goal.

> so? You've to insert the PK side of an FK first anyway, or do you
> require that there aren't any FK's?

Yes, but it's a matter of timing. Not all inserts are simple, and often child rows must be "worked"
on to prepare or calculate information. With identity fields you must create and then repass and fix
up the local references afterwards (which appears to be the common solution here), or put in a delay
between inserts while you work live.

With sequences you lose nothing, aside from the fact that you have to populate the field and you
get the value before instead of after. What are the downsides that concern you with
sequences?

> FK constraints force you to insert the PK side of the FK first anyway,

It's not a matter of order; it's a matter of timing.

>> Trivial compared to the complexity identify fields inject into code.
>
> I don't see how. Unless you don't use FK constraints.

Read any of the blogs here concerning how to update a dataset against an identity-column table.
They describe the basic issues pretty well, and that's just for basic, simple master-detail inserts.

> The insert problem isn't a problem, as when FK constraints are
> used,
> you have to insert the PK side of the FK constraint first anyway, so
> even with a sequence you first have to insert the PK side, then the FK
> side.

Yes, that does not change.

Chad Z. Hower aka Kudzu

Jun 16, 2005, 7:18:13 AM
"Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
news:xn0e3kfn...@news.microsoft.com:
> This is truly not unique. My predicate objects are checked at compile

I'll post line by line - but this is what I was working on just now:

ADO.NET is a good library for connecting to databases. But using ADO.NET still follows the standard
methodology that data connectivity is not type safe, and that the binding to the database is loose.
Fields are bound using string literals or numeric indexes. All types are typecast to the desired
types. Changes to the database will introduce bugs into the application. These bugs, however, will
not be found until run time because of the loose binding. Unless every execution point and logic
combination can be executed in a test, bugs will often not appear until a customer finds them.

Because of this, as developers we have been conditioned to never ever change a database. This
causes databases to be inefficient, contain old data, contain duplicate data, and contain many
hacks to add new functionality. In fact this is an incorrect approach, but we've all grown
accustomed to accepting this as a fact of development.

However if we use a tightly bound approach, as we do with our other code, we can upgrade and update
our database to grow with our system. Simply change the database, and recompile. Your system
will find all newly created conflicts at compile time, and the functionality can then be easily
altered to meet the new demand. I call this an "Extreme Database" or XDB, in line with Extreme
Programming or XP.

Using the built in ADO.NET commands reading from a query is as follows:

IDbCommand xCmd = _DB.DbConnection.CreateCommand();
xCmd.CommandText = "select \"CustomerID\", \"NameLast\", \"CountryName\""
    + " from \"Customer\" C"
    + " join \"Country\" Y on Y.\"CountryID\" = C.\"CountryID\""
    + " where \"NameLast\" = @PNameLast1 or \"NameLast\" = @PNameLast2";
xCmd.Connection = _DB.DbConnection;
xCmd.Transaction = xTx.DbTransaction;

IDbDataParameter xParam1 = xCmd.CreateParameter();
xParam1.ParameterName = "@NameLast1";
xParam1.Value = "Hower";
xCmd.Parameters.Add(xParam1);

IDbDataParameter xParam2 = xCmd.CreateParameter();
xParam2.ParameterName = "@PNameLast2";
xParam2.Value = "Hauer";
xCmd.Parameters.Add(xParam2);

using (IDataReader xReader = xCmd.ExecuteReader()) {
    while (xReader.Read()) {
        Console.WriteLine(xReader["CustomerID"] + ": " + xReader["CountryName"]);
    }
}

The same code when written using Indy.Data is as follows:

using (CustomerQry xCustomers = new CustomerQry(_DB)) {
    xCustomers.Where = CustomerQry.Col.NameLast == "Hower"
        | CustomerQry.Col.NameLast == "Hauer";
    foreach (CustomerQryRow xCustomer in xCustomers) {
        Console.WriteLine(xCustomer.CustomerID + ": " + xCustomer.CountryName);
    }
}

First let's ignore the fact that it is shorter. The standard ADO.NET could be wrapped in an object
or method as well. What I want to point out is the fact that the standard ADO.NET requires the use
of text strings. In fact, the code listed above has a mistake in it: "@NameLast1" should be
"@PNameLast1". The error in the standard ADO.NET code will not be detected until the code is
executed, and tracing it down will be more difficult, while any mistakes in the Indy.Data code
will be caught immediately by the compiler, with the exact location pinpointed. This allows
databases to be easily evolved during development, and greatly reduces bugs, as mistakes are
found and pinpointed during compilation.

Could this be done with a codegen or O/R mapper? Yes, but generally they offer one of the
following:

1) Method calls that create an awkward syntax not representative of the where clause,
especially when complex ones are encountered.

2) String arguments, which leave us again with a late-bound interface, causing bugs to only be
found at runtime.

3) Database Parameters - Parameters can give type safety and early binding if interpreted
properly which many tools not only do, but rely on this behaviour. The problem is that parameters
severely limit the flexibility of the where clause and often cause many versions of a single
object to be created with many variations of parameters.

With Indy.Data's approach, full freedom is retained to form the where clause as needed and
still retain all the benefits of type safety, early binding, and clean syntax.

Chad Z. Hower aka Kudzu

Jun 16, 2005, 7:32:37 AM
"Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
news:xn0e3kg9...@news.microsoft.com:
>> Actually - otehrs dont fully understand it as evidenced by many
>> messages here. Pieces are understood - but not all aspects. Im taking
>> this input now and rewriting significant portions of the article.
>
> Please, come down from your high horse. You only humiliate
> yourself in
> public by your statements.

There is no high horse at all. I suspect you simply misunderstood what I intended here. It's not that
others don't understand the code; surely many here do. But many are not understanding what I'm
saying in the article, which I believe to be largely my fault in not properly communicating certain
items. I'm using this feedback to refine and better communicate certain aspects.

Chad Z. Hower aka Kudzu

Jun 16, 2005, 7:30:35 AM
"Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com"

Aah, it would seem much of your "background" is of course influenced by this. :)

I'm not saying that O/R mapping or codegen is evil; I'm simply presenting an alternative. Furthermore,
you should read the post I posted just before last. IMO Indy.Data is not an O/R mapper, but in fact
just a DAL, nothing more.

The code that you posted - would you mind if I include that actual code, as it seems to be real code?
I've compared output to ADO.NET and would really like to provide more comparison for users, as I
do mention the syntax, but didn't have any to show. I'll even mention LLBLGen as an O/R tool for
users to consider; Indy.Data is Open Source, so I have nothing to lose.

Chad Z. Hower aka Kudzu

Jun 16, 2005, 7:25:50 AM
"Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
news:xn0e3kfn...@news.microsoft.com:
> oooh, a list of predicates using badly overriden operators. ('|'
> and
> '%' for example are operators of C# already). Also, this doesn't work

Of course they already exist in C#. You can only overload operators that already exist; you
cannot simply make up new ones.

As for the choice - it's logical to use == for an equals evaluation; after all, that's what it does.
Choosing << for equals makes little sense IMO.

> in VB.NET, which doesn't support operator overloading (.NET 1.x)

I noted in the article in fact that it does not work in VB.NET.

> PredicateExpression orFilter = new PredicateExpression();
> orFilter.Add(PredicateFactory.CompareValue(
>     CustomerFieldIndex.NameFirst, ComparisonOperator.Equal, "Chad"));
> orFilter.AddWithOr(PredicateFactory.CompareValue(
>     CustomerFieldIndex.NameFirst, ComparisonOperator.Equal, "Hadi"));
> PredicateExpression filter = new PredicateExpression();
> filter.Add(orFilter);
> filter.AddWithAnd(PredicateFactory.CompareValue(
>     CustomerFieldIndex.CustomerID, ComparisonOperator.GreaterThan,
>     aMinID));
> filter.AddWithAnd(PredicateFactory.CompareNull(
>     CustomerFieldIndex.Tag, true));

You make my point very clearly in fact. If you think that the code you posted is "clearer" than

xQuery.Where =
    (CustomerTbl.Col.NameFirst == "Chad" | CustomerTbl.Col.NameFirst == "Hadi")
    & CustomerTbl.Col.CustomerID > 100 & CustomerTbl.Col.Tag != View.Null;

Then we will never agree. If SQL were to follow your advice, I doubt SQL would have many
fans. Or let's just remove operators altogether from C# and VB and use your syntax for everyday
boolean expressions. :)

> The fun thing is, I can create factories for these in my app, as
> they're just objects. No need for fancy operators.

No need for succinct, easy-to-read syntax either. :)

> This is truly not unique. My predicate objects are checked at compile
> time, and evaluated per predicate at runtime. It's not integrated in
> the language because it has to work with VB.NET as well.

Sorry, what's unique is the syntax, not the method. Many O/Rs / codegens do the same, of course.

> There are a couple of O/Rmappers which use operator overloading
> for filters, though have a complete different syntax on VB.NET, which is
> IMHO unacceptable. Also, you re-used existing operators, not what I'd
> do.

Please point me to some. Everyone keeps saying they exist, but I've yet to see one that operates in the
same way as I've demonstrated.

Frans Bouma [C# MVP]

Jun 16, 2005, 7:29:33 AM
Chad Z. Hower aka Kudzu wrote:

> As per the rewrite, here are two such items that I hope will convey
> certain things in a more clear manner.
>
> Ever wished you could truly embed SQL functionality in your C# code
> without using strings or late binding? Imagine being able to write
> complex where clauses purely in C#:
>
> xQuery.Where =
> (CustomerTbl.Col.NameFirst == "Chad" | CustomerTbl.Col.NameFirst ==
> "Hadi") & CustomerTbl.Col.CustomerID > 100 & CustomerTbl.Col.Tag !=
> View.Null;
>
> Look closely. This is C# code, not SQL. Its resolved and bound at
> compile time, but evaluated at run time. In this article I shall
> provide an introduction to this method and full source code for your
> use.

As said, nothing new, nor inventive nor special.

> ---------------------


> Indy.Data provides object wrappers on a 1:1 basis for database
> tables, views, stored procedures, or SQL statements. These objects
> can be regenerated at any time to be kept in sync with database
> changes. In future articles I will be detailing how Indy.Data can be
> used to perform the same functions as a code gen or O/R mapper, but
> in a slightly different way. However for the scope of this article
> assume Indy.Data to be an advanced implementation of an ADO.NET
> command object. Where you would use an ADO.NET command object,
> Indy.Data is suitable.

Oh? Tell me, how to fetch a graph of:
10 newest customers from 'France' with their 10 latest orders and their
order lines, in the most efficient way? (so no 1+10+100 queries, but 3)

The core element why O/R mappers are flexible and nice is because they
allow you to work with the database with the entity elements and their
meta-data. So if you want to filter on orders to get the customers, you
can. No extra elements needed like a view in the db or hard-coded sql.
Just 'customer', 'order' and a filter. It's the flexibility to define
that filter right there in code without having to go back to the db to
define a view or a proc which makes O/R mappers worth using. Fetching
from 1 db element using a filter on that same object is simple, the
things which make it interesting are the join logic (left/right/inner),
sorts on fields in a related entity etc., recursive saves of a graph,
atomically, prefetching related data in a graph, so 1 query (with filters
and sort clauses) per node, etc.

FB
--
------------------------------------------------------------------------
Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com

Chad Z. Hower aka Kudzu

Jun 16, 2005, 7:48:37 AM
"Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
news:xn0e3kgi...@news.microsoft.com:
> Oh? Tell me, how to fetch a graph of:
> 10 newest customers from 'France' with their 10 latest orders and
> their order lines, in the most efficient way? (so no 1+10+100 queries,
> but 3)

I think a major disconnect here is that you are treating it as an O/R; it's not. It's just a data reader.
Nothing less, nothing more. So your question could easily be "Show me how to do that with
SqlDataReader", and the answer would be the same.

Indy.Data is for those working with DataReader inputs, not those looking for an O/R. Indy.Data
certainly could be used in an O/R-type environment, and I'll detail things on this later, but it itself
is not an O/R system any more than SqlDataReader is.

> The core element why O/R mappers are flexible and nice is because

You're trying to sell something that doesn't apply. Sure, O/Rs are great, but I'm not addressing O/Rs at
all or trying to replace them. That's the key disconnect: many read it and assumed it was some
sort of O/R. Maybe the title threw people off, and I'm renaming it, as the title covers something I
didn't get into in this article because it was out of scope of the library itself.

> allow you to work with the database with the entity elements and their
> meta-data. So if you want to filter on orders to get the customers,
> you can. No extra elements needed like a view in the db or hard-coded

Now you are comparing O/R against SQL... Indy.Data is just a C# wrapper for SQL statements and
DB tables. It is NOT an O/R, so you might as well be talking about why ADO.NET is bad and why
O/R is better, which is fine, but it doesn't apply to what we were discussing.

Frans Bouma [C# MVP]

Jun 16, 2005, 1:14:47 PM
Chad Z. Hower aka Kudzu wrote:

> "Get LLBLGen Pro, productive O/R mapping for .NET:
> http://www.llblgen.com"
>
> Aah, it would seem much of your "background" is of course influenced
> by this. :)

well, 'background' doesn't have to be between quotes; I've now worked
6/7 days a week for the past 3 years on that system, full time, so
I'm IMHO entitled to say a few things on the subject.

> The code that you posted, would you mind if I include that actaul
> code as it seems to be real code? I've compared output to ADO.NET
> and would really like to provide more comparison for users as I do
> mention the syntax, but didnt have some to show. I'll even mention
> LLBLGen as an O/R tool for users to consider, Indy.Data is Open
> Source so I have nothing to lose.

If you're going to use it to make my work look bad compared to your
application, of course you may not use it.

FB

--
------------------------------------------------------------------------
Get LLBLGen Pro, productive O/R mapping for .NET: http://www.llblgen.com

Frans Bouma [C# MVP]

Jun 16, 2005, 1:22:13 PM
Chad Z. Hower aka Kudzu wrote:

> "Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
> news:xn0e3kg8...@news.microsoft.com:
> > that's not automatic, you have to re-run a tool. What if I
> > rename table, or a field? Then my code breaks as the wrapper class
> > gets a new name. That's not what a developer wants.
>
> Actually thats exactly what I want. One of the key issues here is
> that you have differnet methodology than I do regarding this. I dont
> want any magic thing to fix up DB changes. Its rare in our systems
> that something is so simple as a name change and whenever something
> chnages I want to know and manually fix it. When I get the rest of
> the articles online, this will make more sense. I doubt you will
> agree with the methodology either but you should be able to see the
> point of view.

and what happens with the user code added to the classes, which are also
generated? How are those kept in sync with the newly generated classes if
I rename a table?

> >> If you say its not new - please show me another O/R that is fully
> >> early bound, filters and all. Ive seen very few, andt those that
> >> I've seen have quite an ackware syntax.
> >
> > LLBLGen Pro does, and so do a lot of others. For example Genome
>
> Via procedures yes. Which is fine if you like that syntax, but
> personally given the choice as posed in the other message, I know
> which one I prefer.

No, not via procedures. I don't use stored procedures, and neither does,
for example, Genome. All SQL is generated on the fly at runtime by
evaluating the query object graph.

> > uses a similar approach as you do (with overloaded C#
> > operators).
>
> URL on Genome please. Google finds DNA stuff. I think I've see Genome
> in the past at some point.

http://www.genom-e.com/

> > #define 'native'. SQL is set-based, C# is imperative, non-set
> > based.
>
> Sorry, in that context it would be C#. Developers are currently
> working in C# + SQL. To move towards C# only is a very good goal.

no, SQL is set-oriented. Merging two different paradigms in the same
language won't work.

> > so? You've to insert the PK side of an FK first anyway, or do
> > you require that there aren't any FK's?
>
> Yes, but its a matter of timing. Not all inserts are simple and often
> child rows must be "worked" on to prepare or calculate information.
> With identity fields you must create and repass and fix up the local
> references after (which appears to be the common solution here) or
> put in a delay between inserts while you work live.

no, first process, then send the graph for persistence.

> With sequences you lose nothing aside from the fact that you have to
> populate the field and you get the value before instead of after.

nothing changes. You have to make a roundtrip to the db to get the new
value, so you'll do that also at the last minute, i.e. after
processing.

> What are the downside that you are concerned with sequences?

not much, other than that they're separate objects, which can be
accidentally deleted while the table is still there, or vice versa. I.e.
a small maintenance thing.

> > FK constraints force you to insert the PK side of the FK first
> > anyway,
>
> Its not a matter of order its a matter of timing.

heh sure...

> >> Trivial compared to the complexity identify fields inject into
> code.
> >
> > I don't see how. Unless you don't use FK constraints.
>
> Read any of the blogs here concerning how to udpate a dataset against
> an identity column table. They describe the basic issues pretty
> well, and thats just for basic simple master detail inserts.

Why would I be interested in the problems of datasets?

Robbe Morris [C# MVP]

Jun 16, 2005, 9:09:31 PM
My OOP ADO.NET code generator does all this for me. It
creates a two-layered approach to interfacing with the database,
optionally returning auto-populated class arrays or DataTables/DataSets.

If I change the database, just rerun the generator. When I try to
compile, it instantly shows me where changes in my app need to be
made.

In my opinion, this type of approach is far more useful in large
scale development teams where a huge portion of the database
work is done by DBAs who don't know .NET.

http://www.eggheadcafe.com/articles/adonet_source_code_generator.asp

--
2004 and 2005 Microsoft MVP C#
Robbe Morris
http://www.masterado.net

Earn $$$ money answering .NET Framework
messageboard posts at EggHeadCafe.com.
http://www.eggheadcafe.com/forums/merit.asp

"Chad Z. Hower aka Kudzu" <cp...@hower.org> wrote in message
news:Xns967791811DE...@127.0.0.1...

Chad Z. Hower aka Kudzu

unread,
Jun 17, 2005, 1:42:41 AM6/17/05
to
"Robbe Morris [C# MVP]" <in...@eggheadcafe.com> wrote in news:#jeEMktcFHA.456
@TK2MSFTNGP09.phx.gbl:

> My OOP ADO.NET code generator does all this for me.
> Creates a two layered approach to interfacing with the database.
> Optionally returning auto-populated class arrays or DataTables/DataSets.

Exactly. Our approaches are very similar - in fact they share the same methodology with slightly
different implementations. There is another typed data reader generator out there too, but the name
slips my mind right now.

> If I change the database, just rerun the generator. When I try to
> compile, it instantly shows me where changes in my app need to be
> made.

Bingo.

> In my opinion, this type of approach is far more useful in large
> scale development teams where a huge portion of the database
> work is done by DBAs who don't know .NET.

Or in any case where the database is shared, which covers nearly every enterprise application.

Chad Z. Hower aka Kudzu

unread,
Jun 17, 2005, 9:34:22 AM6/17/05
to
"Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
news:xn0e3kpm...@news.microsoft.com:
> well, 'background' doesn't have to be between quotes; I've now worked
> 6/7 days a week for the past 3 years on that system, full time, so
> I'm IMHO entitled to say a few things on the subject.

Of course. My point here is that anyone who is so fully engulfed in anything often can see only
that. When you only have a hammer, everything looks like a nail. I've never proposed that nails do
not exist, only that there are bolts too.

> >> The code that you posted, would you mind if I include that actual
> >> code, as it seems to be real code? I've compared output to ADO.NET
> >> and would really like to provide more comparison for users, as I do
> >> mention the syntax but didn't have any to show. I'll even mention
> >> LLBLGen as an O/R tool for users to consider; Indy.Data is Open
> >> Source so I have nothing to lose.
>
> If you're going to use it to make my work look bad compared to your
> application, of course you may not use it.

I only asked to post the same code you posted, and even to provide a link to LLBLGen. If you think
the code you yourself posted will make LLBLGen look bad, I'm not sure there is anything I can do.

I would insert it similarly to how I did ADO.NET data reader:


"Using the built in ADO.NET commands reading from a query is as follows:"

Something like "Using a popular O/R tool like LLBLGen (link) the code appears as follows:

That failing, newsgroup postings are public, I can just link to your messages in Google.

Chad Z. Hower aka Kudzu

unread,
Jun 17, 2005, 9:56:23 AM6/17/05
to
"Frans Bouma [C# MVP]" <perseus.us...@xs4all.nl> wrote in
news:xn0e3kps...@news.microsoft.com:
> and what happens with the user code added to the also generated
> classes? How are these kept in sync with the newly generated classes
> if I rename a table?

What happens when you change a WinForm? That regenerates code and certainly does not trash
user code. What happens when you change an ADO.NET dataset and the code regenerates?

1) Code can USE the generated code. That won't get trashed.

2) Code can inherit from the base classes and add functionality.

If you rename a table, in fact I *want* it to break my code so I can see and manually manage the
impact.
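
To sketch point 2 - the names here are invented, but the base class follows the same style as the
Indy.Data example further down:

// Generated file - safe to overwrite whenever the schema changes.
public class CustomerRowBase : View.Row {
  public DbInt32 CustomerID;
  public DbString ContactName;
}

// Hand-written file - the generator never touches it, so user code
// survives regeneration. If a column is renamed, this breaks at
// compile time, which is exactly the behavior I want.
public class CustomerRow : CustomerRowBase {
  public override string ToString() {
    return CustomerID + ": " + ContactName;
  }
}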

>> Via procedures yes. Which is fine if you like that syntax, but
>> personally given the choice as posed in the other message, I know
>> which one I prefer.
>
> No not via procedures, I don't use stored procedures and neither
> does

Sorry, wrong terminology. Procedure as in method, not as in stored procedure. Procedure is an old
Pascal term that's stuck in my head. Your example LLBLGen code demonstrated my point perfectly:
it's quite different from a SQL predicate. If that's what you want, then it's great.

>> URL on Genome please. Google finds DNA stuff. I think I've see Genome
>> in the past at some point.
>
> http://www.genom-e.com/

Well, I looked at Genome. Interesting stuff - but it's yet another beast. It's closer to ECO etc. It
goes from objects to the database and wants to control the structure itself. Good for databases that
are used only by that subsystem, but not so good for existing databases or for databases that will
be used by other clients as well. Genome does look to be implemented quite well though, and
certainly even has some provisions for this.

But here again we disconnect.

1) Either I'm missing it - which I'll show in code below...
2) Or you didn't look too closely at Genome.

http://www.genom-e.com/Default.aspx?tabid=72
dd.Extent(typeof(Person))["Name.IsLike({0})", Name];

Loosely typed string, only evaluated at run time. Same problem as embedding SQL strings in
code. At least it's an attribute, but unless Genome scans these all at compile time and evaluates
them, they are still left to cause undetected bugs at runtime. Does it walk the assembly and
verify all of these on each and every run at initialization, or at build time? If so, does it pinpoint
the error for you to the line?

Since LLBLGen focuses on using methods for predicates, I assume we agree on the problems
introduced by this.
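
For the record, the mechanics behind the early-bound style are nothing exotic - overloaded
operators that build an expression instead of evaluating one. A bare sketch (hypothetical names,
not the actual Indy.Data classes):

public class Predicate {
  public readonly string Sql;
  public Predicate(string aSql) { Sql = aSql; }
  // & and | build a larger predicate instead of evaluating to bool.
  // Operator precedence is why | and & are used rather than || and &&.
  public static Predicate operator &(Predicate aLeft, Predicate aRight) {
    return new Predicate("(" + aLeft.Sql + " AND " + aRight.Sql + ")");
  }
  public static Predicate operator |(Predicate aLeft, Predicate aRight) {
    return new Predicate("(" + aLeft.Sql + " OR " + aRight.Sql + ")");
  }
}

public class ColumnString {
  public readonly string Name;
  public ColumnString(string aName) { Name = aName; }
  // == yields a Predicate rather than a bool, so operand types are
  // checked by the compiler while the SQL text is produced at runtime.
  public static Predicate operator ==(ColumnString aCol, string aValue) {
    return new Predicate(aCol.Name + " = '" + aValue.Replace("'", "''") + "'");
  }
  public static Predicate operator !=(ColumnString aCol, string aValue) {
    return new Predicate(aCol.Name + " <> '" + aValue.Replace("'", "''") + "'");
  }
  // Present only to silence the compiler warnings that accompany ==/!=.
  public override bool Equals(object obj) { return base.Equals(obj); }
  public override int GetHashCode() { return base.GetHashCode(); }
}

A mistyped column or a wrong operand type then fails at compile time; the SQL string only exists
at runtime.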

But back to the point. I found no reference whatsoever to what you have claimed, that Genome
implements predicates like what I have shown. The closest I can find at all is its set
management, which is in fact a very neat idea, but something totally different.

In fact I find:
result.Add(new NumberManagedPersonFilter(peopleManagedMin,true))

etc. There is the *= to merge sets, but again, something completely different.

In fact:
"Genome introduces type-safe dynamic query building using set algebra. Queries with their
arguments can be declared as methods and properties of the domain model, which can be
combined and reused in a strongly typed manner."

Methods and properties - in fact closer to LLBLGen.

>> Sorry, in that context it would be C#. Developers are currently
>> working in C# + SQL. To move towards C# only is a very good goal.
>
> no, SQL is set-oriented. Merging two different paradigms in the
> same language won't work.

The results are set oriented, and C# can manage sets as well. In fact, look at C Omega's SQL
examples. And read Anders H's latest discussions about SQL and C#. Your statement is at direct
odds with Anders' statements.

Anders H:
mms://wm.microsoft.com/ms/msnse/0406/22899/Vision_of_Future.wmv

C Omega's use of SQL:
http://research.microsoft.com/Comega/doc/comega_tutorial_select_sql_database.htm

In fact, I did this well before COmega, but it looks very, very familiar... Of course they have the
ability to extend the language directly; I don't.

> nothing changes. You have to make a roundtrip to the db to get the new
> value, so you'll do that also at the last minute, i.e.: after
> processing.

Yes, you still need a round trip. But with identities you very often end up recycling through existing
sets to update the keys.

There is not a big difference between identities and sequences. Identities are not evil - sequences
are just more flexible, can in some cases eliminate bottlenecks, and are certainly easier to port
across DBs than identities are.
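
To make the timing difference concrete, a rough ADO.NET sketch - the table, column and sequence
names are made up, and the exact sequence syntax varies per database:

using System;
using System.Data.SqlClient;

public static class KeyTiming {
  // Sequence: the key is known *before* any insert, so child rows can
  // be prepared with valid FK values in whatever order processing needs.
  public static int NextOrderID(SqlConnection xConn) {
    using (SqlCommand xCmd = new SqlCommand(
        "SELECT NEXT VALUE FOR OrderSeq", xConn)) {
      return Convert.ToInt32(xCmd.ExecuteScalar());
    }
  }

  // Identity: the key exists only *after* the row is inserted, so rows
  // prepared beforehand must be revisited to patch the key in.
  public static int InsertOrder(SqlConnection xConn, int aCustomerID) {
    using (SqlCommand xCmd = new SqlCommand(
        "INSERT INTO Orders (CustomerID) VALUES (@CustID);" +
        " SELECT SCOPE_IDENTITY();", xConn)) {
      xCmd.Parameters.Add(new SqlParameter("@CustID", aCustomerID));
      return Convert.ToInt32(xCmd.ExecuteScalar());
    }
  }
}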

> >> What are the downsides that concern you about sequences?
>
> not much, other than that they're separate objects, which can be
> accidentally deleted while the table is still there, or vice versa.
> I.e. a small maintenance thing.

Yes, this point I agree on.

> >> Read any of the blogs here concerning how to update a dataset against
> >> an identity column table. They describe the basic issues pretty
> >> well, and that's just for basic master/detail inserts.
>
> Why would I be interested in the problems of datasets?

Because it's a common scenario that is not unique to datasets. If you are 100% on the DB side and
don't care about your users, only about what makes a DBA's job easier, then identities could be
considered better.

Chad Z. Hower aka Kudzu

unread,
Jun 17, 2005, 10:06:30 AM6/17/05
to
"Chad Z. Hower aka Kudzu" <cp...@hower.org> wrote in
news:Xns9678AC52163...@127.0.0.1:

In fact, if we look at some example C Omega code:

struct {
SqlString CustomerID;
SqlString ContactName;
}* res
= select CustomerID, ContactName from DB.Customers;

foreach( row in res ) {
Console.WriteLine("{0,-12} {1}", row.CustomerID, row.ContactName);
}

It looks very much like Indy.Data:

[Select("select `CustomerID`, `ContactName` from `Customers`")]
public class CustomerRow : View.Row {
public DbInt32 CustomerID;
public DbString ContactName;
}
public void InlineSQLTest() {
  using (Query xCustomers = new Query(_DB, typeof(CustomerRow))) {
    foreach (CustomerRow xCustomer in xCustomers) {
      Console.WriteLine("{0,-12} {1}", xCustomer.CustomerID, xCustomer.ContactName);
    }
  }
}

Christian Hassa [TechTalk]

unread,
Jun 27, 2005, 9:39:53 AM6/27/05
to
Chad,

Thanks for involving Genome in your heated discussion. I don't want to
interfere, but the statements about Genome in this context are wrong:

> Well, I looked at Genome. Interesting stuff - but it's yet another beast.
> It's closer to ECO etc. It goes from objects to the database and wants to
> control the structure itself. Good for databases that are used only by
> that subsystem, but not so good for existing databases or for databases
> that will be used by other clients as well. Genome does look to be
> implemented quite well though, and

Genome supports both existing databases and green-field development.
Of course, your development workflow depends on whether you already have a
db or not, but with Genome you can map existing databases pretty well.
Genome is *very flexible* in tolerating existing database structures such as
composite PKs and FKs, no type identifiers (in non-polymorphic tables) or
multiple type identifiers, and things like database-generated identities or
FK constraints. I wonder whether Indy can tolerate FK constraints when
updating object graphs (if it can do that at all), or database-generated
identities.

> 1) Either I'm missing it - which I'll show in code below...
> 2) Or you didn't look too closely at Genome.
>
> http://www.genom-e.com/Default.aspx?tabid=72
> dd.Extent(typeof(Person))["Name.IsLike({0})", Name];
>
> Loosely typed string, only evaluated at run time. Same problem as
> embedding SQL strings in code. At least it's an attribute, but unless
> Genome scans these all at compile time and evaluates them, they are
> still left to cause undetected bugs at runtime. Does it walk the
> assembly and verify all of these on each and every run at
> initialization, or at build time? If so, does it pinpoint the error
> for you to the line?

You got it all wrong: in your quote you are referring to Genome's client-side
OQL, which is still strongly typed, but late-bound - so you are right, an
error in the string would be detected only at run-time. Still, we are
type-safe: Genome will tell you about a wrong OQL string as soon as you
apply it to the Set, no matter whether you actually fire the query
against the database or not. Also, parameters passed in to the query (like
the Name in your example) are strongly typed and will be checked when
the query executes at runtime.

However, it seems like you haven't read the whole story of how Genome
supports decomposability of queries: usually, queries in Genome are written
in the mapping file and linked to methods of your classes. These queries are
then compiled by the Genome data domain schema compiler, and you get
errors already at *compile time* of your project. 95% of the queries in
a Genome application should be written in the mapping file, for this and
other reasons. Mapped Genome OQL queries can even be compiled into
user-defined functions on SQL Server for expressing server-side recursive
queries.

Imagine you have a domain model where a Person can have open Requests that
are tracked. For example, you can define a query for retrieving the open
requests of a Person object:

<Type name="Person">
  ...
  <Member name="OpenRequests"
    oql="extentof(Request)[r: r.Requestor==this && r.Status==Status.Open]"/>
  ...
</Type>

And you can reuse this query for enumerating all Employees of the company
with open Requests:

<Type name="Company">
  ...
  <Member name="Employees" oql="extentof(Person)[p: p.EmployedAt==this]"/>
  <Member name="EmployeesWithOpenRequests"
    oql="Employees[!OpenRequests.IsEmpty]"/>
  ...
</Type>

All of that is type-safe and checked at compile time by the Genome
data domain schema compiler, which integrates into the Visual Studio .NET
build process or whatever build tool you use.


> But back to the point. I found no reference whatsoever to what you have
> claimed, that Genome implements predicates like what I have shown. The
> closest I can find at all is its set management, which is in fact a very
> neat idea, but something totally different.
>
> In fact I find:
> result.Add(new NumberManagedPersonFilter(peopleManagedMin,true))
>
> etc. There is the *= to merge sets, but again, something completely
> different.

*= (or Set.Intersect) does exactly that. You can have a set restricted by
predicate1 and a second set restricted by predicate2; intersecting these
sets will combine the predicates. Actually, you can not only "add" a simple
predicate to a Set, you can combine two Sets, each of which can be made up
of an arbitrarily deep set of its own predicates (including subqueries,
etc.).
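
To sketch the idea in pseudo-code (the OQL fragments and member names below
are invented for illustration; only the Extent indexer and the *= operator
appear in this thread):

// Two independently restricted sets over the same extent.
Set xManagers = dd.Extent(typeof(Person))["!ManagedPeople.IsEmpty"];
Set xLocals = dd.Extent(typeof(Person))["Address.City == {0}", city];

// Intersecting combines the predicates of both sets; the single SQL
// statement is generated only when the result is actually enumerated.
xManagers *= xLocals;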

On the other hand, Genome Sets are much more versatile than the
query objects you provide (which is not really a new pattern, see e.g.
http://www.martinfowler.com/eaaCatalog/queryObject.html) - at
least from what I have seen in the examples. Sets can be used in other
expressions for sub-queries (translating into EXISTS) and for counting the
elements of a Set (translating into SELECT COUNT). You can fully project
sets from one element type to another and still intersect projected sets
(which have been transformed in the same way).

I didn't explore the boundaries and limitations of your query model; all I
can say is that we are able to express the most complicated SQL queries you
can imagine in OQL - but with all the benefits of a decomposed, strongly
typed query language. Genome allows you to build queries dynamically at
runtime in a strongly typed manner, by using set operations with
already-mapped expressions. No strings are parsed at run-time, unless you
write client-side OQL in C#, which is a rare exception in Genome.

Besides that, Genome provides a lot more services which have not been
discussed in this thread but are crucial to real-world development
projects, such as:

+) transaction management
+) locking strategies
+) caching
+) injection of retrieval strategies into queries
+) transformation into data transfer objects
+) customizable persistence events

I don't want to criticize your data framework; however, if you compare it
with others, you should only do so if you take the time to understand the
corresponding concepts of the compared product. Genome is not the effort or
hobby of a single individual but a full-blown O/R mapping framework that a
team has been developing for several years. It has already been deployed in
large-scale enterprise projects and is continuously extended and adapted to
newly available technology (such as .NET 2.0).

Regards,
Christian Hassa
http://www.genom-e.com


Chad Z. Hower aka Kudzu

unread,
Jun 28, 2005, 5:14:26 AM6/28/05
to
"Christian Hassa [TechTalk]" <ch_NO...@techtalk.at_NO_SPAM> wrote in
news:e$Sxf3xeF...@TK2MSFTNGP09.phx.gbl:
> Thanks for involving Genome in your heated discussion. I don't want to
> interfere, but the statements about Genome in this context are wrong:

Please, corrections are welcome. In fact I have some upcoming articles about solutions to some
of the other ideas I discussed:
http://www.codeproject.com/useritems/DudeWheresMyBusinessLogic.asp

I'd like to mention Genome and others, and even provide some sample code if possible. I had
originally mentioned it in the draft of this article but decided to move it to later. I'd like to mention
LLBLGen too, but the author seems uncomfortable with even a slight comparison of his
product, even when using his own code or code he could submit to me.

> database-generated identities or FK constraints. I wonder whether Indy
> can tolerate FK constraints when updating object graphs (if it can do
> that at all), or database-generated identities.

It can - but in a different way. Indy.Data is a much lower-level tool than Genome. Genome is a
much bigger tool designed to cover several concepts, while Indy.Data is designed to cover only one
specific portion.

Indy.Data is designed to replace the ADO.NET data readers (and a few other objects). It's a
type-safe, early-bound version of ADO.NET data access, if you will.

> You got it all wrong: in your quote you are referring to Genome's
> client-side OQL, which is still strongly typed, but late-bound - so
> you are right, an error in the string would be detected only at
> run-time. Still, we are type-safe: Genome will tell you about a wrong
> OQL string as soon as you apply it to the Set, no matter whether you
> actually fire the query against the database or not. Also, parameters
> passed in to the query (like the Name in your example) are strongly
> typed and will be checked when the query executes at runtime.

It's the late binding I don't like, and an area that Indy.Data really works to eliminate.

> However, it seems like you haven't read the whole story of how Genome
> supports decomposability of queries: usually, queries in Genome are
> written in the mapping file and linked to methods of your classes.
> These queries are then compiled by the Genome data domain schema
> compiler, and you get errors already at *compile time* of your
> project. 95% of the queries in a Genome application should be written
> in the mapping file, for this and other reasons. Mapped Genome
> OQL queries can even be compiled into user-defined functions on SQL
> Server for expressing server-side recursive queries.

Aah, that's good then. That's similar to how Indy.Data handles its SQL inputs.

> On the other hand, Genome Sets are much more versatile than the
> query objects you provide (which is not really a new pattern, see e.g.
> http://www.martinfowler.com/eaaCatalog/queryObject.html) -

Does he have an actual implementation, or does he just discuss the possibility? This page is a bit bare.

> at least from what I have seen in the examples. Sets can be used in
> other expressions for sub-queries (translating into EXIST) and for
> counting the elements of a Set (translating into SELECT COUNT). You
> can fully project sets from one element type to another and still
> intersect projected sets (which have been transformed in the same
> way).

I did find how the sets are put to use in Genome rather interesting, but it doesn't apply to Indy.Data.
Genome is OQL-based, while Indy.Data again just strives to be a typed, early-bound version of
ADO.NET data readers.

> I didn't explore the boundaries and limitations of your query model,
> all I can say is that we are able to express the most complicated SQL
> queries you can imagine in OQL - but with all the benefits of a

I don't doubt that. I'm sorry if I implied that.

I appreciate your input and feedback on corrections regarding Genome and look forward to further
discussions.

Chad Z. Hower aka Kudzu

unread,
Jun 28, 2005, 5:19:11 AM6/28/05
to
Additional info:

"Unlike other object-relational mapping tools available on the market, Genome does not focus
solely on CRUD operations and pre-defined relationship models, which only cover the entity-
relational mapping aspect of O/R mapping"

Indy.Data in fact focuses solely on the CRUD, if you will. :) Because it's not an O/R tool, which I
tried to explain to the other chap here earlier.

I've got some demos I'll be posting soon that will show things a bit better. Genome maps objects
to the DB (O/R mapper), while Indy.Data puts an object interface DIRECTLY on DB objects. Indy.Data
does not itself provide any provision for O/R mapping. I've used it for such, but Indy.Data itself
does not do it.
