
Re: I think that relational DBs are dead. See link to my article inside


Ed Prochak

Jul 3, 2006, 1:20:51 PM

Dmitry Shuklin wrote:
> Hi David,
>
> I understand that you are defending your favorite technology. An RDBMS is a
> good DB, but not perfect. For example, a table is just a special case of a
> graph or network. So network databases can do everything an RDB can do, but
> an RDB can't do everything an NDB can do.

Give just ONE example. I sincerely doubt there is anything you can do
in a network model DB that cannot be done at least as well in a
Relational model DB.


> I even made a research implementation of a network DB myself which
> successfully emulates an RDBMS. I have also implemented an object
> identification synonymy/homonymy conception and undo/redo transactions.
> Anybody can download it from
> http://www.shuklin.com/ai/ht/en/cerebrum/ and see the results of my
> experiments. So I am fully assured that I am right in the general view. Of
> course some small specific ideas may be wrong.
>
> WBR,
> Dmitry Shuklin, Ph.D

Sorry, but all I see on that page is a couple of claims, with no supporting
data. I will not download some unknown executable. Make your case without
having us run your program for you.

For example, how did you conclude that
"the usage of relational database systems to solve this kind of
problems is not applicable"?

I added comp.databases.theory as that is the group you should likely be
talking to.


Ed

Dmitry Shuklin

Jul 3, 2006, 1:44:43 PM
Hi,

> Give just ONE example. I sincerely doubt there is anything you can do
> in a network model DB that cannot be done at least as well in a
> Relational model DB.

Trees )) I think you understand what I mean. Of course, at the same
abstraction level as the relational model works. You can emulate trees
on the RM, but it will cause more abstraction levels to appear.

In fact I am interested in emulating artificial neural networks.
Making an ANN with SQL - ha ha ha.
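For reference, the standard relational answer to the "trees" example is an adjacency list walked by a recursive query, not an extra abstraction level. A minimal sketch (editor's illustration, not from the thread), using Python's bundled sqlite3:

```python
# Store a tree in a single relational table as an adjacency list,
# then compute the transitive closure of the parent relation with
# a recursive common table expression.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER, label TEXT);
    INSERT INTO node VALUES
        (1, NULL, 'root'),
        (2, 1, 'left'),
        (3, 1, 'right'),
        (4, 2, 'left.child');
""")

# All descendants of 'root' (node 1), however deep.
rows = conn.execute("""
    WITH RECURSIVE descendants(id, label) AS (
        SELECT id, label FROM node WHERE parent_id = 1
        UNION ALL
        SELECT n.id, n.label FROM node n
        JOIN descendants d ON n.parent_id = d.id
    )
    SELECT label FROM descendants ORDER BY id
""").fetchall()
print([r[0] for r in rows])  # ['left', 'right', 'left.child']
```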

> Sorry, but all I see on that page is a couple claims, no supporting
> data. I will not download some unknown executable. Make a case without
> having us run your program for you.

Sorry, I don't have any articles in English describing my OODB research
yet (((
And even when you download the zip you will find only C# sources, no
documentation (((

I know, I know (((

What differs my DB from the rest?

- one object can have many ObjectIDs
- one ObjectID can address many different object instances
- multilevel undo/redo transactions are supported

What restrictions does the current version have?
- single-user mode only.
- single-threaded only.


WBR,
Dmitry

Cimode

Jul 3, 2006, 1:54:08 PM
3 simple questions...

What exactly is the purpose of your revolutionary technology?
What kind of complexity can your technology handle when it comes
to data types?
How do you derive values from domains of values in a set-theoretic
(ensemblist) perspective (for instance, odd integers from integers)?
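The third question can be made concrete with a tiny example (editor's illustration, not from the thread): deriving a domain from another by restricting it with a predicate, here the odd integers from the integers.

```python
# A derived domain in the set-theoretic sense: the odd integers are
# the integers restricted by the predicate "n is odd".
def odd(n: int) -> bool:
    return n % 2 != 0

integers = range(-5, 6)                          # a finite slice of Z
odd_integers = [n for n in integers if odd(n)]   # the derived domain
print(odd_integers)  # [-5, -3, -1, 1, 3, 5]
```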

Bob Badour

Jul 3, 2006, 3:52:04 PM
Dmitry Shuklin wrote:

> Hi,
>
>
>>Give just ONE example. I sincerely doubt there is anything you can do
>>in a network model DB that cannot be done at least as well in a
>>Relational model DB.
>
> Trees )) I think you understand what I mean. Of course, at the same
> abstraction level as the relational model works. You can emulate trees
> on the RM, but it will cause more abstraction levels to appear.

Emulate? How exactly does the transitive closure operation emulate? How
exactly do value-based references emulate?


> In fact i am interested in emulation of artificial neural network.
> Making ANN with SQL - ha ha ha.
>
>
>>Sorry, but all I see on that page is a couple claims, no supporting
>>data. I will not download some unknown executable. Make a case without
>>having us run your program for you.
>
>
> Sorry, I don't have any articles in English describing my OODB research
> yet (((
> And even when you download the zip you will find only C# sources, no
> documentation (((
>
> I know, I know (((
>
> What differs my DB from the rest?
>
> - one object can have many ObjectIDs
> - one ObjectID can address many different object instances

In short, no logical identity whatsoever. Sounds, um, charming. ::rolls
eyes::


> - multilevel undo/redo transactions are supported

Wee!


> What restrictions does the current version have?
> - single-user mode only.
> - single-threaded only.

So, can we assume it fully supports join, project, extend, union,
intersect, transitive closure, restrict, the existential quantifier and
the universal quantifier? Or do you not consider the lack of any of
those 'restrictions'?

Neo

Jul 3, 2006, 4:20:10 PM
> I sincerely doubt there is anything you can do in a network model DB that cannot be done at least as well in a Relational model DB. Give just ONE example.

I am not sure if this would qualify, but I would like to see an
equivalent RMDB implementation (and also how resilient it is to
handling additional data that is unknown during initial db design) for
the example posted at www.dbfordummies.com/example/ex039.asp

This example represents a Food Judging Contest. There are three
persons. The first named john (aka johnathan) is a judge. The second
named john (aka johnny) is a contestant. The third whose name is
unknown is a spectator and his age is 28.

There are four food entries. The first is named leftOver1, which is soft
and spicy. The second is named apple1, which is crunchy and sweet. The
third is named broccoli1, which is crunchy. The fourth is named tomato1,
which is soft, sweet and sour.

Judge john likes leftOver1 and tomato1. Contestant john likes apple1
and tomato1. Spectator likes broccoli1 and tomato1. In addition, judge
john likes contestant john.

There are a number of queries, such as:
What entries does judge john (aka johnathan) like?
Which fruit entries does contestant john (aka johnny) like?
Which vegetable entries does the spectator (of age 28) like?
Which fruit/vegetable entries does johnathan like?
Tomato1 is liked by whom?
Which persons like crunchy vegetables?
Which person likes something that is both a fruit and a vegetable?
Which entry do judges, contestants and spectators like?
Which person likes another person who likes a fruit/vegetable entry?
Which person likes something which likes something that is soft,
sweet/sour?

Additional details are documented in comments throughout the script.
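One possible relational rendering of this challenge (editor's sketch; the thread itself never shows one, and all table and column names here are invented). A separate entry_kind table lets tomato1 be both a fruit and a vegetable, and one of the listed queries is run at the end:

```python
# Relational schema for the food-judging example: persons, entries,
# entry kinds (multi-valued), and a 'likes' relation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, role TEXT, age INTEGER);
    CREATE TABLE entry (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE entry_kind (entry_id INTEGER, kind TEXT);
    CREATE TABLE likes (person_id INTEGER, entry_id INTEGER);

    INSERT INTO person VALUES (1, 'johnathan', 'judge', NULL),
                              (2, 'johnny', 'contestant', NULL),
                              (3, NULL, 'spectator', 28);
    INSERT INTO entry VALUES (1, 'leftOver1'), (2, 'apple1'),
                             (3, 'broccoli1'), (4, 'tomato1');
    -- tomato1 is both a fruit and a vegetable: one row per kind
    INSERT INTO entry_kind VALUES (2, 'fruit'), (3, 'vegetable'),
                                  (4, 'fruit'), (4, 'vegetable');
    INSERT INTO likes VALUES (1, 1), (1, 4),   -- judge likes leftOver1, tomato1
                             (2, 2), (2, 4),   -- contestant likes apple1, tomato1
                             (3, 3), (3, 4);   -- spectator likes broccoli1, tomato1
""")

# "What entries does judge john (aka johnathan) like?"
rows = conn.execute("""
    SELECT e.name FROM entry e
    JOIN likes l ON l.entry_id = e.id
    JOIN person p ON p.id = l.person_id
    WHERE p.name = 'johnathan' AND p.role = 'judge'
    ORDER BY e.name
""").fetchall()
print([r[0] for r in rows])  # ['leftOver1', 'tomato1']
```

The "judge john likes contestant john" fact would go into an analogous person-to-person likes table.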

Neo

Jul 3, 2006, 4:27:03 PM
> I sincerely doubt there is anything you can do in a network model DB that cannot be done at least as well in a Relational model DB. Give just ONE example.


I am not sure if this would qualify, but I would like to see an
equivalent RMDB implementation (and also how resilient it is to
handling additional data that is unknown during initial db design) for

the example posted at www.dbfordummies.com/example/ex012.asp

In general, this example could represent the following:
There are 1 to many courthouses.
A courthouse has 1 to many floors.
A floor has 1 to many rooms.
Persons in the courthouse can have 1 to many classifications
(i.e. judge, clerk, coordinator, bailiff, court reporter, assistant,
etc.).
A person can have 0 to many names, phone numbers and emails.
A person can have 0 to many bosses/employees.
The location of a person can be specified by a building, floor, room,
etc.

More specifically, this example represents:

Judge Judy has the following employees:
clerk Clark, coordinator Colby, bailiff Brandy and court reporter
Courtney.
Assistant clerk Ashley is an employee of Clark and Colby.
Each person has various attributes, some with multiple values.

Courthouse1 has two floors.
Its first floor has room1.
Its second floor has room1 and room2.
The above persons are located in different parts of Courthouse1.

The first figure displays Courthouse1's parts.
The second figure displays attributes of various persons.
The third figure displays inverted isIn/boss hierarchies with Ashley as
the root.

There are a number of queries, such as:

Find all persons.
What does Courthouse1 have?
Find Ashley's bosses.
Find a judge's employee whose employee's phone# is 737-5588.
Courthouse1's floor2's room2 has which person that has an employee
named Clark and an employee whose email is co...@msn.com?

Additional details are documented in the script's comments.

Neo

Jul 3, 2006, 4:31:38 PM
> I sincerely doubt there is anything you can do in a network model DB that cannot be done at least as well in a Relational model DB. Give just ONE example.

I am not sure if this would qualify, but I would like to see an
equivalent RMDB implementation (and also how resilient it is to
handling additional data that is unknown during initial db design) for

the example posted at www.dbfordummies.com/example/ex117.asp

This example models a real estate listing. It models a $200,000
single-family house with MLS# A2868Z. The house has 3 bedrooms. The
master bedroom is 25x30 and has beige Dupont carpet that was installed
1/1/2000. The second bedroom is 12x15 and has pink and purple carpet.
The third bedroom is 12x15 and has hardwood flooring that was installed
1/2/1990 and needs resurfacing. The house has 3 bathrooms. The master
bathroom has brass finished Moen faucets. The second is a hall bathroom
and the third a half bathroom. The house has a 2-car attached garage.
The house has 2 fireplaces, the first is made of brick and its hearth
is made of stone. The second fireplace is made of stone and its hearth
is made of stone also. The 15x20 kitchen has cork flooring and the
following appliances: a white Maytag dishwasher, an Amana electric
range, and a Sears side-by-side fridge that is brand new.

Neo

Jul 3, 2006, 4:43:21 PM
> I sincerely doubt there is anything you can do in a network model DB that cannot be done at least as well in a Relational model DB. Give just ONE example.

I am not sure if this would qualify, but I would like to see an
equivalent RMDB implementation (and also how resilient it is to
handling additional data that is unknown during initial db design) for

the example posted at www.dbfordummies.com/example/ex123.asp

This example models 10 computer systems, each quite different from the
others.

The first system has a computer and a 7.1 speaker system that handles a
peak of 1000 watts. The Dell computer has a 20 GB IDE hard drive and a
133 MHz motherboard with dual 2.0 GHz processors. The motherboard also
has 3 slots; the first has a 2/10 MBit network card, the second an
audio card with 3 sampling rates, and the third is empty. See the script
for additional specifications for the first and remaining systems.

Example queries:
Find the computer that has a motherboard which has a CPU whose serial# is
VR3736.
Find systems whose hard drive is manufactured by Quantum.

Neo

Jul 3, 2006, 4:52:05 PM
> I sincerely doubt there is anything you can do in a network model DB that cannot be done at least as well in a Relational model DB. Give just ONE example.

I am not sure if this would qualify, but I would like to see an
equivalent RMDB implementation (and also how resilient it is to
handling additional data that is unknown during initial db design) for

the example posted at www.dbfordummies.com/example/ex121.asp. The
example models three persons. Each has different attributes, and
attributes have different numbers of values. See the script for details.

Example queries:
Find faxes for persons whose height is 60 inches.
Find zips for teachers whose weight is 135 lb.
Find persons whose address's county is willis and region is yokohama.

Neo

Jul 3, 2006, 5:04:44 PM
> I sincerely doubt there is anything you can do in a network model DB that cannot be done at least as well in a Relational model DB. Give just ONE example.

This probably doesn't qualify, but I would like to see an equivalent
RMDB implementation (and also how resilient it is to handling
additional data that is unknown during initial db design) for the
example posted at www.dbfordummies.com/example/ex007.asp This example
creates two hierarchies with john and mary. In the first hierarchy,
john and mary are children of adam and eve, who are children of god. In
the second hierarchy, john and mary are part of mars and venus
respectively, which are part of the universe.

Example queries:
Find eve's children.
Find john's parents.
Find adam's children that are part of mars.
Find eve's children that are part of a planet.
Find god's grandchildren that are part of a planet.

Neo

Jul 3, 2006, 5:23:37 PM
> I sincerely doubt there is anything you can do in a network model DB that cannot be done at least as well in a Relational model DB. Give just ONE example.

I am not sure if this qualifies as anything, but I would like someone
to demonstrate that RMDBs handle new, unknown data requirements with as
little impact on existing schema/queries/code as network-based dbs. To
verify this, we might:

1) Have someone specify some data whose structure was previously
unknown.
2) Have the dbs store the data.
3) Create some basic queries.
4) Repeat the above steps and compare the impact on existing schemas/queries.
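The steps above can be sketched on the relational side (editor's example, not from the thread): store data, absorb a previously unknown attribute with ALTER TABLE, and check that the existing query still works unchanged.

```python
# Schema evolution in a relational DB: an unknown-in-advance attribute
# is added later without disturbing existing rows or queries.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (name TEXT)")
conn.execute("INSERT INTO person VALUES ('john')")

# Steps 1-2: a new, previously unknown data requirement: persons have an age.
conn.execute("ALTER TABLE person ADD COLUMN age INTEGER")
conn.execute("INSERT INTO person VALUES ('mary', 28)")

# Step 3-4: the old query is unaffected; a new query uses the new column.
print(conn.execute("SELECT name FROM person").fetchall())
print(conn.execute("SELECT name FROM person WHERE age = 28").fetchall())
```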

Cimode

Jul 3, 2006, 5:27:35 PM
Hi Neo

Are you having verbal diarrhea? One of the examples you posted would
have been sufficient to demonstrate the nonsense stated... no need to
waste time on that... :)

Dmitry Shuklin

Jul 3, 2006, 5:45:37 PM
Hi,

Bob Badour wrote:

> > What differs my DB from the rest?
> >
> > - one object can have many ObjectIDs
> > - one ObjectID can address many different object instances
>
> In short, no logical identity whatsoever. Sounds, um, charming. ::rolls
> eyes::

Not yet. Logical identity exists, but it is strong only within each context.
From one context to another, identity is not strong, and everything depends on
the developer. When I was doing experiments I found that it is OK to make
almost all identification equal across contexts, but not in all
contexts. Let's take object 1000 for example. In the global context (sector)
it represents the meta descriptor for the attribute 'Name'. If some object has
a name then that object has an attribute 1000. But attribute 1000 also has its
own name, so object 1000 has attribute 1000. If you dereference OID
1000 in the global context you get an instance of AttributeDescriptor for
the attribute 'Name'. If you dereference OID 1000 in some object, you get the
name of that object. If you dereference OID 1000 inside the context of OID
1000, you get the string 'Name'.

So in each context OID 1000 references a different instance.
Each context == one object instance.
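The identity scheme described here can be sketched as a two-level mapping (editor's illustration in Python; the names and API are invented, not Shuklin's actual C# interface): dereferencing an ObjectID depends on the context, so one OID can name different instances in different contexts, and one instance can be reachable under several OIDs.

```python
# Context-dependent object identity: (context, ObjectID) -> instance.
class Store:
    def __init__(self):
        # context_id -> {object_id -> instance}
        self.contexts = {}

    def bind(self, context_id, object_id, instance):
        self.contexts.setdefault(context_id, {})[object_id] = instance

    def dereference(self, context_id, object_id):
        return self.contexts[context_id][object_id]

store = Store()
# In the global context, OID 1000 is the descriptor of attribute 'Name'.
store.bind("global", 1000, {"kind": "AttributeDescriptor", "attr": "Name"})
# Inside object42's context, the same OID is that object's name...
store.bind("object42", 1000, "Fred")
# ...and the same instance is also reachable under a second OID.
store.bind("object42", 2000, "Fred")

print(store.dereference("global", 1000))    # the attribute descriptor
print(store.dereference("object42", 1000))  # 'Fred'
```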


> So, can we assume it fully supports join, project, extend, union,
> intersect, transitive closure, restrict, the existential quantifier and
> the universal quantifier? Or do you not consider the lack of any of
> those 'restrictions'?

That is a very interesting question. I can't answer yes or no. Maybe the right
answer is 'this question is not applicable to my DB', but that is an
uninteresting answer. Let me try to describe what I have implemented, and
you can decide for yourself whether I support this or not.

First, my DB is written for Microsoft .NET and supports objects
written in any managed language. It supports 2 modes of serialization:
solid and structured. I will describe only structured, because solid
saves the object as one unbreakable piece.

Structured serialization gives the programmer an API which he can use to
serialize object attributes. For this mode the class should not have any
fields; the class can have only methods. All data will be stored in the DB via
the API. It is somewhat like ViewState in ASP.NET.

Classes store attributes in the network DB. I am not writing yet another
relational DB, so the data storage is based on another conception. It doesn't
have tables, and it doesn't support relations as subsets of a Cartesian
product. If you want to compare my DB with an RDB, then you should
assume that a relation is one table row (not a whole table). In this
case a row is a subset of the Cartesian product of the 'set of all column
names' and the 'set of all values'. Yeah, it is too heretic )))

So it doesn't support all the conceptions that you asked about exactly as an
RDB does. But you can use C# and do what you want. From this point of
view you can assume it fully supports all of them.

I forgot yet another big restriction: I don't support any
declarative language, only imperative. C#, for example, is supported.

It is interesting that the possibility for one ObjectID to address many
instances allows me to make object joins or aggregates. Let's assume
that we have a collection of objects C1 which all implement the interface
I1. Then we change requirements and want to make a view from objects
which have the same IDs but support I2 : I1. Let's assume that I1 has one
property, CompanyObjectID, and we want to have an I2 with an additional
CompanyName. It is OK.
We should make a new object collection. Then we should make a new class C2,
implement I2, and add instances of C2 into the new collection with the
original IDs. Let's continue. C1 has the attribute CompanyObjectID. We can
map this attribute to C2, and each C2 will have access to the same
attribute instance of CompanyObjectID as C1. And when we change the
property CompanyID in some C2 instance, the property of the corresponding
C1 instance will change too. Then CompanyName can be implemented
(pseudo code) as public string CompanyName { get { return
global.dereference(this.CompanyId).Name; }}
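A rough Python analogue of that pseudo-code (editor's sketch; the store, class and property names are invented): a view object whose CompanyName property dereferences a shared CompanyObjectID through a global store at access time.

```python
# A derived property that delegates through an object store,
# mirroring the C#-style getter sketched above.
class Store(dict):
    def dereference(self, oid):
        return self[oid]

# One company instance registered under OID 7 in the global store.
store = Store({7: type("Company", (), {"Name": "Acme"})()})

class C2:
    def __init__(self, company_object_id):
        self.CompanyObjectID = company_object_id

    @property
    def CompanyName(self):
        # Dereference the shared OID on every access, so the view
        # always reflects the current target object.
        return store.dereference(self.CompanyObjectID).Name

view = C2(7)
print(view.CompanyName)  # Acme
```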

There is another way to do the same: we can map each Company.Name attribute
as the attribute CompanyName to each C1, and implement C1 : I2.
C1 will think that it is accessing its own attribute but will access an
attribute of another object. A small workaround is needed here: we
should override the CompanyObjectID setter and remap attributes as needed. In
the previous sample this overriding is not needed.

Unions are very simple: just add as many instances to a
collection as you want.

And so on.

I have plans to implement some kind of declarative system, but I don't
need it now. As I described, I am specializing in neural networks and
natural language processing (with ANNs), so I just don't need this
functionality now and don't want to spend my time on it.


WBR,
Dmitry

Dmitry Shuklin

Jul 3, 2006, 5:56:51 PM
Hi Cimode

> What exactly is the purpose of your revolutionary technology?

Primary purpose: modeling a neural system with up to 2,000,000,000 neurons.
1 neuron == 1 object instance.

> What kind of complexity is your technology able to handle when it comes
> to data types?

I don't fully understand this question. I support .NET data types:
structures, classes, inheritance, virtual methods, interfaces. I DON'T
support delegates and events.


> How do you derive values from domains of values in an ensemblist
> perspective (for instance odd integers from integers)?

I don't understand the question. Do you mean domain == type? If so, I support
.NET types. I support 32-bit object identifiers, which gives 2^31 (no
last bit) instances as a theoretical limit. I support Int8, Int16, Int32
and Int64. Or do you mean domain == System.AppDomain? ))))

WBR,
Dmitry

Dmitry Shuklin

Jul 3, 2006, 6:00:58 PM
Hi Cimode

> What exactly is the purpose of your revolutionary technology?

And second: I want to make the DB a general-purpose DB.

WBR,
Dmitry

Bob Badour

Jul 3, 2006, 8:00:15 PM
Dmitry Shuklin wrote:

In short, no. What you have implemented is mind-numbingly restricted and
feeble.

[longwinded no snipped]

Bill Karwin

Jul 3, 2006, 7:40:19 PM
Dmitry Shuklin wrote:
>> What exactly is the purpose of your revolutionary technology?
>
> Primary purpose - modeling neural system with up to 2000000000 neurons.
> 1 neuron == 1 object instance.

Sounds very nice, but it is not a sufficient reason to call for the
death of relational databases.

Bill K.

Dmitry Shuklin

Jul 4, 2006, 2:27:04 AM
Hi Bill,

Of course it is absolutely no reason to call for the death of the RDB just
because I have my DB implemented. ))) As you can see, earlier I referenced
this experiment of mine to show why I think that network object-oriented
databases can do everything an RDB can, and what a NOODB can do that an RDB
can't. It is just an example of non-relational features. Of course I
can't kill Oracle or Microsoft with my experimental application ))) But I
really believe that the far future belongs to network databases, not to
table-based ones.

WBR,
Dmitry

Dmitry Shuklin

Jul 4, 2006, 2:33:06 AM
Hi Bob

> In short, no. What you have implemented is mind-numbingly restricted and
> feeble.

Why? )) What can a database based on the described conception not do? Even
making an application is more comfortable, because object-relational mapping
is not needed at all.

WBR,
Dmitry

Erwin

Jul 4, 2006, 3:53:31 AM
> So, can we assume it fully supports join, project, extend, union,
> intersect, transitive closure, restrict, the existential quantifier and
> the universal quantifier? Or do you not consider the lack of any of
> those 'restrictions'?

Not to forget Multiple Assignment :-)

Cimode

Jul 4, 2006, 4:10:26 AM

Dmitry Shuklin wrote:
> Hi Cimode
>
> > What exactly is the purpose of your revolutionary technology?
>
> Primary purpose: modeling a neural system with up to 2,000,000,000 neurons.
> 1 neuron == 1 object instance.

So you are saying that a specific implementation is the end of an entire
logical model based on applied mathematics (as a reminder, some people
have worked for more than 40 years on creating the RM). Don't you think
this is hasty?

> > What kind of complexity is your technology able to handle when it comes
> > to data types?

A data type, for instance, is integer, binary, text, anything you can
think of...
Good support for a data type is, for instance, the ability to apply
specific querying operators to the data belonging to that data type.
Can you create a data type *neuron* that you can manipulate at will?
For instance, let's say a neuron has a property *wavelength*... can you
find all neurons with a specific wavelength? Superior to a specific
wavelength? What kinds of operations involving neurons can your
system handle? What kind of behavior do you aim to track in storing
neurons?

> I don't fully understand this question. I support .NET data types.
> structures, classes, inheritance, virtual methods, interfaces. I DON'T
> support delegates and events.
>
>
> > How do you derive values from domains of values in an ensemblist
> > perspective (for instance odd integers from integers)?
> I don't understand question. Do you mean domain==type? if so i support
> .NET types. i support 32 bits object identifiers. it gives 2~31 (no
> last bit) instances as theoretical limit. I supoort Int8, Int16, Int32
> and Int64. or you mean domain==System.AppDomain ? ))))

I see...
Domains are the RM counterpart of mathematical ensembles of values. For
instance, let's say you have the ensemble of values (neuron1, neuron2,
neuron3, neuron4, neuron5). Can you define a data type involving only
(neuron2, neuron4, neuron5)? How? When (at execution time? compile
time?)

> WBR,
> Dmitry

Bill Karwin

Jul 4, 2006, 4:05:16 AM
Dmitry Shuklin wrote:
> But i really belive that far future belongs to network databases, not to
> table-based.

Well, a jackhammer can probably drive nails into a piece of wood, too.
That doesn't mean it's the right tool for that job.

Good luck with your research, in any case! Innovation pushes things
forward. But not all innovation results in the technology we think it will.

Bill K.

Cimode

Jul 4, 2006, 8:11:17 AM
Implementing an elementary storage retrieval mechanism for a targeted
purpose is one thing, and it does not justify the claim that it constitutes a
logical model for data management. Implementing a general-purpose DBMS creates
a need to answer several logical questions about operations, data
integrity, correctness etc... Check the other posts for what kinds of
operations and characteristics your technology needs to support to be
sound... Good luck ;)

> WBR,
> Dmitry

Cimode

Jul 4, 2006, 8:35:09 AM
Just so that you know, *table based* is not the same as *relational*.
In most current implementations, it is actually a total antithesis of the
relational principle of independence between logical and physical
layers. Keep in mind, there are currently NO relational DBMSs
implemented, mainly SQL DBMSs (ORACLE, DB2, SQL Server...)...

Bob Badour

Jul 4, 2006, 8:51:25 AM
Dmitry Shuklin wrote:

Look, if you are completely ignorant of the last 50 years of computing,
I have no intention of trying to educate you in a usenet post.

You are focusing on structure to the exclusion of integrity and
manipulation. That's just a dumb mistake.

Bob Badour

Jul 4, 2006, 8:53:28 AM
Bill Karwin wrote:

What the hell do you think is innovative about re-inventing something
that informed people abandoned 30 years ago?

Dmitry Shuklin

Jul 4, 2006, 11:11:32 AM
Hi Cimode

> > Primary purpose - modeling neural system with up to 2000000000 neurons.
> > 1 neuron == 1 object instance.
> So you are saying that a specific implementation is the end of a entire
> logical model based on applied mathematics (as a reminder, some people
> have worked for more than 40 years onto creating RM). Don't you think
> this is hasty?

Why do you think that I am ignoring these 40-60 years? Or why do you think
that I spent only a few days on my research? )) Of course I am using all that
I can use. But the RM is not the only model which can be used.

> Good support for data type is the ability for instance to apply
> specific querying operators on the data belonging to that data type.

I am not implementing a complete and independent DB. My application is
based on the Microsoft .NET Framework type library. I don't make my own
programming language; C# is supported. My OODB is a dll which can be
used from .NET, so it inherits all the specific operations of the standard
.NET types. And all .NET functionality can be used when an application
works with the DB. But inside persistent classes, delegates and events are
not supported.

> Can you create a data type *neuron* that you can manipulate at wish?

Of course. You just need to create yet another .NET class.

> for instance lets say neuron has a property *wavelength*...can you you
> find all neurons with specific wavelength? superior to specific
> wavelength?

Yes, but I don't support indexes on properties. The current version
supports only the ObjectID index. So if you need to search among a number of
objects you should use your own index implementation. It is very easy
to use a System.Collections.Hashtable and serialize it as part of some
persistent object, for example.
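The suggested workaround amounts to this (editor's sketch, in Python rather than the C#/.NET described; the neuron/wavelength names follow the earlier question): since the engine has no property indexes, keep your own hash index from property value to ObjectIDs.

```python
# A hand-rolled secondary index: wavelength -> set of ObjectIDs.
# In the DB described above this dict would be a Hashtable serialized
# as part of some persistent object.
wavelength_index = {}

def index_neuron(oid, wavelength):
    wavelength_index.setdefault(wavelength, set()).add(oid)

index_neuron(1, 450)
index_neuron(2, 450)
index_neuron(3, 700)

# "Find all neurons with a specific wavelength" without scanning
# every object: one hash lookup.
print(sorted(wavelength_index.get(450, set())))  # [1, 2]
```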

> What capability of operations involving neurons can your
> system handle? What kind of behavior do you aim at tracking in storing
> neurons?

The system can create new object instances, dereference an ObjectID to an
object reference, and remove a reference to a persistent object from the
current context. When all references to some instance have been removed from
all contexts, that instance is removed from storage.

I also support undo/redo transactions.

Persistent objects can contain methods. One persistent object can
invoke a method of another persistent object. Objects are instantiated
in RAM on demand, so it is not necessary to load all instances from the
storage file at system startup.

> domain are RM counterpart of mathematical ensembles of values. For
> instance, let's say you have ensemble of values (neuron1, neuron2,
> neuron3, neuron4, neuron5). Can you define a data type involving only
> (neuron2, neuron4, neuron5) ? How? When(at execution time? compile
> time?)

I don't need this feature. All instances are objects derived from
System.Object. Objects can implement some interfaces. If an object
implements some interface which is needed in some context, you can use
that instance. If not, you can check this and decide what you want to
do with the object. Constraints can be implemented in property getters
and setters. It is not an RM database at all.

You should develop your class in C# and compile it. Register the assembly and
types in the DB, and then you can use your types. If you use structured
serialization it is OK to recompile the assembly from one run to another.

WBR,
Dmitry

Dmitry Shuklin

Jul 4, 2006, 11:16:01 AM

Bob Badour wrote:

> What the hell do you think is innovative about re-inventing something
> that informed people abandoned 30 years ago?

Heh, so everybody playing with XML and XPath today is abandoned ))
A tree is just a subset of a network. Tables are a subset too. Why do you
think that a network DB can't merge all the capabilities of XML (trees) and
the RM (tables)?

Dmitry Shuklin

Jul 4, 2006, 11:28:10 AM
Hi Cimode

> Implementing an elementary storage retrieval mechanism for targetted
> purpose is one thing and does not allow to claim it makes a logical
> model for data management. Implementing a general purpose DBMS creates
> a need to answer several logical questions about operations, data
> integrity, correctenedd etc...Check other posts for what kind of
> operations and characteristics your technology needs to support to be
> sound...Good luck ;)

Yes, I know that the current version is not perfect and has many limitations
which current RDBMSs don't have. But the restrictions of the current version
don't make the whole idea wrong. Maybe the most complex question is
transaction isolation; for an OODBMS with an active server it is a more
complex task than for an RDBMS.

Many OODBs can't implement object views and joins. The current version of
my DB can. The current version also already supports undo/redo
transactions. It makes sense to use this DBMS as a document file format
in CAD/CAM applications.

Dmitry Shuklin

Jul 4, 2006, 11:41:24 AM

Bob Badour wrote:
> Look, if you are completely ignorant of the last 50 years of computing,
> I have no intention of trying to educate you in a usenet post.

Even so, my db works ))) And of course I am not ignoring
all 50 years of computing. I just found a slightly different way. And this
way is not completely new or unique: graph theory, semantic network
theory, frames, neural networks, ... all of them require network
storage. And network storage is not hierarchical storage.

> You are focusing on structure to the exclusion of integrity and
> manipulation. That's just a dumb mistake.

No, I am not focusing only on structure. I am focusing on all DB aspects
as they should be visible to an OODB user. Right now I just don't have the
resources to implement a multithreaded and multi-user OODB kernel. And
even if I could implement it, it would not be an optimal decision. For
example, without a series of experiments I would have spent time implementing
the ODMG object identification conception (1 OID == 1 instance). Now I know
that this conception is not perfect. Transactions: now I support
undo/redo (not only begin-commit-rollback) ...

And so on, and so on.

WBR,
Dmitry

Cimode

Jul 4, 2006, 11:46:47 AM

Dmitry Shuklin wrote:
> Hi Cimode
> > > Primary purpose - modeling neural system with up to 2000000000 neurons.
> > > 1 neuron == 1 object instance.
> > So you are saying that a specific implementation is the end of a entire
> > logical model based on applied mathematics (as a reminder, some people
> > have worked for more than 40 years onto creating RM). Don't you think
> > this is hasty?
>
> Why do you think that I am ignoring these 40-60 years? Or why do you think
> that I spent only a few days on my research? )) Of course I am using all that
> I can use. But the RM is not the only model which can be used.
What other similar models are you referring to?

> > Good support for data type is the ability for instance to apply
> > specific querying operators on the data belonging to that data type.
>
> I am not implementing a complete and independent DB. My application is
> based on the Microsoft .NET Framework type library. I am not making my own
> programming language; C# is supported. My OODB is a DLL which can be
> used from .NET, so it inherits all specific operations from the standard
> .NET types, and all .NET functionality can be used when an application
> works with the DB. But inside persistent classes, delegates and events are
> not supported.

I see...


> > Can you create a data type *neuron* that you can manipulate at will?

> Of course. Just need to create yet another .NET class.

All right... What kind of operations are currently supported over the neuron
data type?

> > for instance let's say a neuron has a property *wavelength*... can you
> > find all neurons with a specific wavelength? superior to a specific
> > wavelength?
>
> yes, but I don't support indexes for properties. The current version
> supports only an ObjectID index, so if you need to search among a large
> count of objects you should use your own index implementation. It is very
> easy to use a System.Collections.Hashtable and serialize it as part of
> some persistent object, for example.
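Dmitry's suggestion above - maintain your own index as an ordinary collection serialized alongside a persistent object - can be sketched roughly as follows. The project itself is .NET (where the collection would be a System.Collections.Hashtable); this Java sketch, with a hypothetical `wavelength` property, only illustrates the idea:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hand-rolled property index: the OODB only indexes ObjectIDs, so the
// application keeps its own map from property value to ObjectIDs and
// persists it as part of some persistent object.
class WavelengthIndex {
    private final Map<Double, List<Long>> byWavelength = new HashMap<>();

    void add(double wavelength, long objectId) {
        byWavelength.computeIfAbsent(wavelength, k -> new ArrayList<>())
                    .add(objectId);
    }

    // O(1) lookup instead of an O(N) scan over all instances
    List<Long> find(double wavelength) {
        return byWavelength.getOrDefault(wavelength, List.of());
    }
}
```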
>
> > What capability of operations involving neurons can your
> > system handle? What kind of behavior do you aim at tracking in storing
> > neurons?

> The system can create new object instances, dereference an ObjectID to an
> object reference, and remove a reference to a persistent object from the
> current context. When all references to some instance are removed from
> all contexts, that instance is removed from storage.

By operations I mean operators that can be applied to data of the neuron
data type...
Can you, for instance, apply an equality operator to state that 2 neurons
are equal? Can you find all neurons that fit one particular description,
or 2 particular descriptions? How do you find, for instance, ALL neurons
that have a specific wavelength but not a particular configuration
(assuming wavelength and configuration are properties applicable to a
neuron)?

> also I support undo/redo transactions.

How do you support read consistency... For instance, what happens when
you begin an insert transaction over a table and, before committing, run a
select over the same table? What version of the table does your
select return?

> persistent objects can contain methods. One persistent object can
> invoke a method of another persistent object. Objects are instantiated
> in RAM on demand, so it is not necessary to load all instances from the
> storage file at system startup.
>
> > domains are the RM counterpart of mathematical ensembles of values. For
> > instance, let's say you have an ensemble of values (neuron1, neuron2,
> > neuron3, neuron4, neuron5). Can you define a data type involving only
> > (neuron2, neuron4, neuron5)? How? When (at execution time? compile
> > time?)
>
> I don't need this feature. All instances are objects derived from
> System.Object. Objects can implement some interfaces. If an object
> implements some interface which is needed in some context, you can use
> that instance. If not, you can check this and decide what you want to
> do with the object. Constraints can be implemented in property getters
> and setters. It is not an RM database at all.

Are you stating that this is always done at run time and not by
definition... Mmmm, RM does not require running anything in this kind of
situation. This kind of segregation is done at compile time, which
saves resources...

> You should develop your class in C#, compile it, register the assembly and
> types in the DB, and then you can use your types. If you use structured
> serialization it is OK to recompile the assembly from one run to another.

I do not know C# but I have compiled classes in VB.NET. Running it
is one thing but making it a DBMS is another thing.

> WBR,
> Dmitry

Dmitry Shuklin

Jul 4, 2006, 12:20:44 PM
Hi Cimode,

> What other similar models are you referring to?

graph theory, semantic networks, frames, neural networks, hierarchical
semantic networks, M-Networks.

> > Of course. Just need to create yet another .NET class.

> All right... What kind of operations are currently supported over the neuron
> data type?

Here is a very important issue. I made this DB for neural networks, but I
don't include any sample neuron implementation in the kernel. Neuron
models can be implemented in separate DLLs and attached to the DB.

For example, I will describe one of the neuron models which I am using:

- adding a link to another neuron
- removing a link to another neuron
- scanning all input neurons and computing the neuron state
- putting the current state to the output linked neurons

all of them are just methods implemented in a class. When some neuron
receives a thread (message) it can invoke some methods of related
neurons. I have brief articles in Russian about the neural network models
which I am using; briefly, they are equivalent to a finite state grammar
and can be used to parse natural language (Russian).

and there is no specific neuron data type. There exists a set of
interfaces. Some interfaces are mandatory for each neuron, some are not.
There are many neuron types in one network, but all can communicate with
each other via interfaces.
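The "no neuron data type, only interfaces" design above can be sketched roughly like this. Dmitry's project is C#/.NET; this Java sketch and all its names (Identifiable, Excitable, SumNeuron, ConstNeuron) are invented for illustration, not taken from his code:

```java
import java.util.ArrayList;
import java.util.List;

// No single "neuron" data type - only interfaces. Identifiable stands in
// for a mandatory interface; Excitable for an optional one.
interface Identifiable { long objectId(); }

interface Excitable {
    void linkInput(Excitable source);   // adding a link to another neuron
    double state();                     // current activation state
    void compute();                     // scan inputs, compute own state
}

// One concrete neuron model among many; others could live in other DLLs.
class SumNeuron implements Identifiable, Excitable {
    private final long id;
    private final List<Excitable> inputs = new ArrayList<>();
    private double state;

    SumNeuron(long id) { this.id = id; }
    public long objectId() { return id; }
    public void linkInput(Excitable source) { inputs.add(source); }
    public double state() { return state; }

    // compute: sum the states of all input neurons
    public void compute() {
        double s = 0;
        for (Excitable in : inputs) s += in.state();
        state = s;
    }
}

// A different neuron type; it interoperates only through the interfaces.
class ConstNeuron implements Identifiable, Excitable {
    private final long id;
    private final double value;
    ConstNeuron(long id, double value) { this.id = id; this.value = value; }
    public long objectId() { return id; }
    public void linkInput(Excitable source) { /* no inputs */ }
    public double state() { return value; }
    public void compute() { /* constant, nothing to recompute */ }
}
```

Different neuron classes never see each other's concrete types, which is the point being made: many neuron types in one network, communicating only via shared interfaces.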

Here are links to my old English articles. But they are not about this DB.

http://www.shuklin.com/ai/ht/en/ai04001f.aspx
http://www.shuklin.com/ai/ht/en/ai00002f.pdf
http://www.shuklin.com/ai/ht/en/ai00007f.pdf
http://www.shuklin.com/ai/ht/en/ai00009f.pdf


> By operations I mean operators that can be applied to data of the neuron
> data type...

they are completely defined by the developer as class methods

> Can you, for instance, apply an equality operator to state that 2 neurons are
> equal?

1. yes,
2. the models which I am using don't need this feature

> Can you find all neurons that fit one particular description, or 2 particular descriptions?

1. yes, O(N) in the current version
2. the models which I am using don't need this feature

> How do you find, for instance, ALL neurons that
> have a specific wavelength but not a particular configuration (assuming
> wavelength and configuration are properties applicable to a neuron)?

You should scan a collection of neurons and invoke some methods of the
neurons, then decide what you want to do with each instance.
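With no property indexes, such a query is exactly the O(N) scan Dmitry describes: a loop over the collection that invokes methods on each object. A minimal sketch, assuming hypothetical `wavelength` and `configuration` properties (Java used for illustration; the project itself is C#/.NET):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative neuron with the two hypothetical properties from the
// question above.
class Neuron {
    final double wavelength;
    final String configuration;
    Neuron(double wavelength, String configuration) {
        this.wavelength = wavelength;
        this.configuration = configuration;
    }
}

class Scan {
    // "ALL neurons with a specific wavelength but not a particular
    // configuration" becomes an explicit O(N) pass over the collection.
    static List<Neuron> query(List<Neuron> all, double wl, String excluded) {
        List<Neuron> hits = new ArrayList<>();
        for (Neuron n : all) {
            if (n.wavelength == wl && !n.configuration.equals(excluded)) {
                hits.add(n);
            }
        }
        return hits;
    }
}
```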


> How do you support read consistency...

It is a single-user OODB. Let's say that this question is open.

> For instance, what happens when
> you begin an insert transaction over a table and, before committing, run a
> select over the same table? What version of the table does your
> select return?

there are no tables as they exist in an RDB - and no inserts and no selects
as conceptions equivalent to the RDB ones.

there are collections - collections are instances of objects too.
You can create an instance, find an instance, destroy an instance.
You can add an existing instance into a collection; one instance can be
added to different collections.

if you start a transaction and add some instance to some collection, then
the collection is marked as changed by this transaction, and you will
receive a new version of the collection instance. Collections store only
pointers, no data. Objects store only methods, no data. Attributes are
sometimes scalar objects, but they are not parent objects. Each node can
have many versions. Moreover, the same attribute can belong to different
instances.
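The versioning behavior described above - a change inside a transaction yields a new version of the collection instead of mutating the committed one - can be sketched minimally like this (Java for illustration; all names are hypothetical, and the real system is C#/.NET):

```java
import java.util.ArrayList;
import java.util.List;

// A collection that stores only object pointers (IDs) and keeps one
// list per version: adding inside a transaction copies the current
// version and mutates the copy, so older versions survive for undo.
class VersionedCollection {
    private final List<List<Long>> versions = new ArrayList<>();

    VersionedCollection() {
        versions.add(new ArrayList<>());   // version 0: empty
    }

    // change inside a transaction: copy-on-write new version
    void addInTransaction(long objectId) {
        List<Long> next = new ArrayList<>(current());
        next.add(objectId);
        versions.add(next);
    }

    List<Long> current() { return versions.get(versions.size() - 1); }

    // undo: step back to the previous version (history kept until commit)
    void undo() {
        if (versions.size() > 1) versions.remove(versions.size() - 1);
    }
}
```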


> This kind of segregation is done at compile time which
> saves resources...

In the current version I support only runtime constraints - no declarative
style, only imperative.
As for the future - we will see.


> I do not know C# but I have compiled classes in VB.NET. Running it
> is one thing but making it a DBMS is another thing.

now it is the same ))

WBR,
Dmitry

Cimode

Jul 5, 2006, 4:25:04 AM

Dmitry Shuklin wrote:
> Hi Cimode
>
> > Implementing an elementary storage retrieval mechanism for a targeted
> > purpose is one thing and does not allow one to claim it constitutes a
> > logical model for data management. Implementing a general purpose DBMS
> > creates a need to answer several logical questions about operations, data
> > integrity, correctness etc... Check other posts for what kind of
> > operations and characteristics your technology needs to support to be
> > sound... Good luck ;)
>
> yes, I know that the current version is not perfect and has many limitations
> which current RDBMSs don't have. But the restrictions of the current version
> don't make the whole idea wrong. Maybe the most complex question is
> transaction isolation: for an OODBMS with an active server it is a more
> complex task than for an RDBMS
But transaction isolation is an imperative requirement for setting up a
DBMS. Are you saying that you have not (yet?) succeeded in setting
it up?

Cimode

Jul 5, 2006, 4:43:08 AM

Dmitry Shuklin wrote:
> Hi Cimode,
>
> > What other similar models are you referring to?
>
> graph theory, semantic networks, frames, neural networks, hierarchical
> semantic networks, M-Networks.
>
> > > Of course. Just need to create yet another .NET class.
> > All right... What kind of operations are currently supported over the neuron
> > data type?
>
> Here is a very important issue. I made this DB for neural networks, but I
> don't include any sample neuron implementation in the kernel. Neuron
> models can be implemented in separate DLLs and attached to the DB.
>
> For example, I will describe one of the neuron models which I am using:
>
> - adding a link to another neuron
> - removing a link to another neuron
> - scanning all input neurons and computing the neuron state
> - putting the current state to the output linked neurons
>
> all of them are just methods implemented in a class. When some neuron
> receives a thread (message) it can invoke some methods of related
> neurons. I have brief articles in Russian about the neural network models
> which I am using; briefly, they are equivalent to a finite state grammar
> and can be used to parse natural language (Russian).
I believe this is a description of the computational operations your
system can perform on a specific implementation.

> and there is no specific neuron data type. There exists a set of
> interfaces. Some interfaces are mandatory for each neuron, some are not.
> There are many neuron types in one network, but all can communicate
> with each other via interfaces.

If you don't define a data type neuron, what are the characteristics of
a *neuron*?

> Here links to my old english articles. But they are not about this DB.
>
> http://www.shuklin.com/ai/ht/en/ai04001f.aspx
> http://www.shuklin.com/ai/ht/en/ai00002f.pdf
> http://www.shuklin.com/ai/ht/en/ai00007f.pdf
> http://www.shuklin.com/ai/ht/en/ai00009f.pdf
>
>
> > By operations I mean operators that can be applied to data of the neuron
> > data type...
>
> they are completely defined by the developer as class methods

You should note that RM allows one to associate, in a one-shot declarative
manner, all the operators and constraints over values that can be applied to
a specific ensemble of values. Based on your description (interfaces),
it seems that every equivalent needs to be specified programmatically at
run time in a recurring manner. I doubt this constitutes progress...

> > > Can you, for instance, apply an equality operator to state that 2 neurons are
> > > equal?
> >
> > 1. yes,

How?


> > 2. the models which I am using don't need this feature

So you are stating that the sample data you are using for testing
determines how sound an abstract model is?

> > Can you find all neurons that fit one particular description, or 2 particular descriptions?
>
> 1. yes, O(N) in the current version
> 2. the models which I am using don't need this feature

Keep in mind that the RM abstraction level allows one to dissociate this kind of
issue from a particular context... Once you declare a data type neuron
and define all its attributes, you can imagine every search combination of
attribute conditions.

> > > How do you find, for instance, ALL neurons that
> > > have a specific wavelength but not a particular configuration (assuming
> > > wavelength and configuration are properties applicable to a neuron)?
> >
> > You should scan a collection of neurons and invoke some methods of the
> > neurons, then decide what you want to do with each instance.

What if you have 2 users doing the same thing over 3 trillion neurons -
who has priority? How is parallelism handled? Throughput? Are the IO
accesses linear, bidimensional, direct image? How about RAM?

> > How do you support read consistency...
>
> It is a single-user OODB. Let's say that this question is open.

Then it is a single application developed on a single machine, not a real
server yet. An important ability of a DBMS is to behave like a server for
requests...

> > For instance, what happens when
> > you begin an insert transaction over a table and, before committing, run a
> > select over the same table? What version of the table does your
> > select return?
>
> there are no tables as they exist in an RDB - and no inserts and no selects
> as conceptions equivalent to the RDB ones.

No inserts? No updates? How do you keep track of your data? How do you
update it?

> > there are collections - collections are instances of objects too.
> > You can create an instance, find an instance, destroy an instance.
> > You can add an existing instance into a collection; one instance can be
> > added to different collections.
> >
> > if you start a transaction and add some instance to some collection, then
> > the collection is marked as changed by this transaction, and you will
> > receive a new version of the collection instance. Collections store only
> > pointers.

How about another user coming in? What version of the data will he/she get?

> no data. objects store only methods. no data. attributes sometimes

No data? I will use "information" instead...
Are you saying the system is meant to work only once? Where do you
store past information? How do you retrieve that past information?

> are scalar objects, but they are not parent objects. Each node can have
> many versions. Moreover, the same attribute can belong to different
> instances.
>
>
> > This kind of segregation is done at compile time, which
> > saves resources...
>
> In the current version I support only runtime constraints - no declarative
> style, only imperative.
> As for the future - we will see.

It seems to me you still have a long way to go before saying relational
is dead?
I will buy you a copy when done...

Dmitry Shuklin

Jul 5, 2006, 6:35:34 AM
Hi Cimode

> But transaction isolation is an imperative requirement for setting up a
> DBMS. Are you saying that you have not (yet?) succeeded in setting
> it up?

I have already said that it is an experimental OODB. It has many
limitations; I described some in other posts. Yes, single-user mode and a
single-threaded kernel are restrictions of the current version too.

I am not saying that this DB is completed and ready to fight with Oracle
and MS )) But I am saying that this DB demonstrates the theoretical
_possibility_ for network OODBs to be more powerful than current RDBMSs.

Dmitry Shuklin

Jul 5, 2006, 7:13:24 AM
Hi Cimode

> > > What other similar models are you referring to?
> >
> > graph theory, semantic networks, frames, neural networks, hierarchical
> > semantic networks, M-Networks.
> >
> > > > Of course. Just need to create yet another .NET class.
> > > All right... What kind of operations are currently supported over the neuron
> > > data type?
> >
> > Here is a very important issue. I made this DB for neural networks, but I
> > don't include any sample neuron implementation in the kernel. Neuron
> > models can be implemented in separate DLLs and attached to the DB.
> >
> > For example, I will describe one of the neuron models which I am using:
> >
> > - adding a link to another neuron
> > - removing a link to another neuron
> > - scanning all input neurons and computing the neuron state
> > - putting the current state to the output linked neurons
> >
> > all of them are just methods implemented in a class. When some neuron
> > receives a thread (message) it can invoke some methods of related
> > neurons. I have brief articles in Russian about the neural network models
> > which I am using; briefly, they are equivalent to a finite state grammar
> > and can be used to parse natural language (Russian).
> I believe this is a description of the computational operations your
> system can perform on a specific implementation.

yes

> > and there is no specific neuron data type. There exists a set of
> > interfaces. Some interfaces are mandatory for each neuron, some are not.
> > There are many neuron types in one network, but all can communicate
> > with each other via interfaces.
> If you don't define a data type neuron, what are the characteristics of
> a *neuron*?

The characteristics are defined in interfaces. Of course some classes
implementing these interfaces must exist in some DLL, and this DLL
must be configured and attached to the OODB.

> > Here links to my old english articles. But they are not about this DB.
> >
> > http://www.shuklin.com/ai/ht/en/ai04001f.aspx
> > http://www.shuklin.com/ai/ht/en/ai00002f.pdf
> > http://www.shuklin.com/ai/ht/en/ai00007f.pdf
> > http://www.shuklin.com/ai/ht/en/ai00009f.pdf
> >
> >
> > > By operations I mean operators that can be applied to data of the neuron
> > > data type...
> >
> > they are completely defined by the developer as class methods
> You should note that RM allows one to associate, in a one-shot declarative
> manner, all the operators and constraints over values that can be applied to
> a specific ensemble of values. Based on your description (interfaces),
> it seems that every equivalent needs to be specified programmatically at
> run time in a recurring manner. I doubt this constitutes progress...

Hm, all declarative RM constraints in any case must be implemented in an
imperative language by some RDBMS, so from the implementation point of view
it is the same. I am not saying that declarativity is bad. I am saying
that declarative programming is not supported in the current version.
Unfortunately it is not supported. But it can be supported in the future.

> > > Can you, for instance, apply an equality operator to state that 2 neurons are
> > > equal?
> > 1. yes,
> How?

they must override and implement System.Object.Equals();
then you can compare two instances.
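In Java terms, the equivalent of overriding System.Object.Equals() is overriding Object.equals() (paired, as usual, with hashCode()). A minimal sketch - the `wavelength` property is hypothetical, carried over from Cimode's example, and the equality rule shown is just one possible model choice:

```java
// Value equality for neurons via an equals() override. Two neurons are
// "equal" whenever the model says so - here, same wavelength.
class Neuron {
    final long id;
    final double wavelength;

    Neuron(long id, double wavelength) {
        this.id = id;
        this.wavelength = wavelength;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Neuron)) return false;
        Neuron other = (Neuron) o;
        return Double.compare(wavelength, other.wavelength) == 0;
    }

    @Override
    public int hashCode() {            // must stay consistent with equals
        return Double.hashCode(wavelength);
    }
}
```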

> > 2. the models which I am using don't need this feature
> So you are stating that the sample data you are using for testing
> determines how sound an abstract model is?

Sorry, I don't understand this question.

> > Can you find all neurons that fit one particular description, or 2 particular descriptions?
> >
> > 1. yes, O(N) in the current version
> > 2. the models which I am using don't need this feature
> Keep in mind that the RM abstraction level allows one to dissociate this kind of
> issue from a particular context... Once you declare a data type neuron
> and define all its attributes, you can imagine every search combination of
> attribute conditions.

Hm. I don't know all the attributes even at runtime. Attributes can be
added to and removed from each instance of a neuron absolutely independently
of all the rest of the network. Each neuron is unique. So in my models I
don't need RM as a conception at all. But I understand that it is a very
useful conception, so I tried to support many of its possibilities.

We should not merge the neural model with OODB conceptions.

The neural model uses some features of the OODB and implements some
features which are not implemented in the OODB kernel. The neural network
and the OODB are different things.

The neural network is implemented as an application that uses the OODB and
stores neurons as OODB objects.

> > > How do you find, for instance, ALL neurons that
> > > have a specific wavelength but not a particular configuration (assuming
> > > wavelength and configuration are properties applicable to a neuron)?
> >
> > You should scan a collection of neurons and invoke some methods of the
> > neurons, then decide what you want to do with each instance.
> What if you have 2 users doing the same thing over 3 trillion neurons -
> who has priority? How is parallelism handled? Throughput?

I have already said this. The current version is strictly single-user.

> Are the IO
> accesses linear, bidimensional, direct image? How about RAM?

Interesting question. The OODB restricts the amount of memory used by the
graph of objects, or by a neural network with a large quantity of class
instances. The most frequently used objects are kept in RAM; the others
are moved to the physical storage area and loaded back into RAM on
demand. The rarely used objects are unloaded when other objects are
loaded into RAM. This memory restriction makes it possible to avoid the
paging file, which significantly increases the modeling performance for
networks with large quantities of class instances.
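The paging scheme described above - keep the most recently used objects in RAM, evict the rest to backing storage, reload on demand - is essentially an LRU cache. A rough sketch under that assumption (Java for illustration, using LinkedHashMap's access order; the real .NET implementation is not shown in the thread):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU object cache: a bounded in-RAM map backed by a "disk" map.
// Evicted objects go to storage; misses are demand-loaded back.
class ObjectCache {
    private final int capacity;
    private final Map<Long, String> storage = new LinkedHashMap<>(); // "disk"
    private final LinkedHashMap<Long, String> ram;
    int loads = 0;                         // counts demand loads from storage

    ObjectCache(int capacity) {
        this.capacity = capacity;
        // accessOrder=true: iteration order is least-recently-used first
        this.ram = new LinkedHashMap<Long, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, String> e) {
                if (size() > ObjectCache.this.capacity) {
                    storage.put(e.getKey(), e.getValue()); // evict to disk
                    return true;
                }
                return false;
            }
        };
    }

    void put(long id, String obj) { ram.put(id, obj); }

    String get(long id) {
        String obj = ram.get(id);
        if (obj == null) {                 // not in RAM: load on demand
            obj = storage.remove(id);
            loads++;
            if (obj != null) ram.put(id, obj);
        }
        return obj;
    }
}
```

Bounding the RAM-resident set is what lets the process stay below physical memory and avoid the OS paging file, which is the performance point Dmitry makes.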


> > > How do you support read consistency...
> >
> > It is a single-user OODB. Let's say that this question is open.
> Then it is a single application developed on a single machine, not a real
> server yet. An important ability of a DBMS is to behave like a server for
> requests...

The current version is an experimental single-user desktop database engine.


> > > For instance, what happens when
> > > you begin an insert transaction over a table and, before committing, run a
> > > select over the same table? What version of the table does your
> > > select return?
> >
> > there are no tables as they exist in an RDB - and no inserts and no selects
> > as conceptions equivalent to the RDB ones.
> No inserts? No updates? How do you keep track of your data? How do you
> update it?

There are objects. No data. Objects have methods. Methods can change
object attributes. Instead of an insert you create a NEW object
instance; then you can add this instance into a number of collections.
Instead of an update you invoke some method on some object. All
this can be done in C# or VB.NET.

> > there are collections - collections are instances of objects too.
> > You can create an instance, find an instance, destroy an instance.
> > You can add an existing instance into a collection; one instance can be
> > added to different collections.
> >
> > if you start a transaction and add some instance to some collection, then
> > the collection is marked as changed by this transaction, and you will
> > receive a new version of the collection instance. Collections store only
> > pointers.
> How about another user coming in? What version of the data will he/she get?

It is a single-user desktop DB. In the far future I have plans to
implement isolation.

> > no data. objects store only methods. no data. attributes sometimes
> No data? I will use "information" instead...

OK, "no data" is a very bad definition. I want to say that in the common
scenario, when you write a class in VB.NET, the class has non-static
fields which contain data.

In my OODB these fields are not serialized into the DB storage. You should
use the OODB API to store object attributes into the DB. This is like
ASP.NET ViewState.
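The distinction above - ordinary fields are transient, and only state written through an explicit attribute API is persisted, much like ASP.NET ViewState - might look roughly like this. All names here (AttributeBag, PersistentNeuron) are invented for illustration; the actual API is not shown in the thread:

```java
import java.util.HashMap;
import java.util.Map;

// The explicit attribute store: what the OODB storage actually sees.
class AttributeBag {
    private final Map<String, Object> attrs = new HashMap<>();
    void set(String name, Object value) { attrs.put(name, value); }
    Object get(String name) { return attrs.get(name); }
}

class PersistentNeuron {
    private final AttributeBag bag;    // persisted via the OODB API
    private double scratch;            // ordinary field: NOT persisted,
                                       // lost when the object is unloaded

    PersistentNeuron(AttributeBag bag) { this.bag = bag; }

    void setWavelength(double wl) {
        scratch = wl;                  // transient copy
        bag.set("wavelength", wl);     // the copy that survives
    }

    double wavelength() { return (Double) bag.get("wavelength"); }
}
```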


> Are you saying the system is meant to work only once? Where do you
> store past information? How do you retrieve that past information?

I support undo/redo persistent transactions. The system tracks all object
changes between transactions. You can undo a transaction, get the past
information, and then redo to get back the current version of the object.
If you need to, you can shut down the DB and then restart it: all history
still remains, and undo/redo will still work. When you commit or roll back
a transaction the history is cleared, and then undo/redo can no longer
switch object versions.
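The undo/redo mechanism described above follows the classic two-stack pattern; a minimal sketch under that assumption (Java for illustration; object versions are reduced to strings, and commit clears the history exactly as described):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Two-stack undo/redo over object versions: each change pushes the
// previous version; undo/redo walk the history; commit clears it.
class UndoRedo {
    private final Deque<String> undoStack = new ArrayDeque<>();
    private final Deque<String> redoStack = new ArrayDeque<>();
    private String current;

    UndoRedo(String initial) { current = initial; }

    void change(String next) {
        undoStack.push(current);
        redoStack.clear();             // a new change invalidates redo
        current = next;
    }

    void undo() {
        if (!undoStack.isEmpty()) {
            redoStack.push(current);
            current = undoStack.pop();
        }
    }

    void redo() {
        if (!redoStack.isEmpty()) {
            undoStack.push(current);
            current = redoStack.pop();
        }
    }

    // commit or rollback: history is cleared, undo/redo stop working
    void commit() { undoStack.clear(); redoStack.clear(); }

    String current() { return current; }
}
```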

> > > This kind of segregation is done at compile time which
> > > saves resources...
> >
> > In current version i support only runtime constraints. no declarative
> > style. only imperative.
> > In future - will see in future.
> It seems to me you still have a long way to go before saying relational
> is dead?
> I will buy you a copy when done...

If you are interested you can download the sources of the current version.
Unfortunately the English documentation is absent there.

WBR,
Dmitry

Cimode

Jul 5, 2006, 8:04:45 AM
How are they attached? What principle regulates system stability?
Doesn't this approach increase resource consumption at linkage editing
time (compile/link/run)?

> > > Here links to my old english articles. But they are not about this DB.
> > >
> > > http://www.shuklin.com/ai/ht/en/ai04001f.aspx
> > > http://www.shuklin.com/ai/ht/en/ai00002f.pdf
> > > http://www.shuklin.com/ai/ht/en/ai00007f.pdf
> > > http://www.shuklin.com/ai/ht/en/ai00009f.pdf
> > >
> > >
> > > > By operations I mean operators that can be applied to data of neuron
> > > > data type...
> > >
> > > they completly defined by developer as class methods
> > You should note that RM allows to associate in a one-shot declarative
> > manner all operators and constraints over values that can be applied to
> > a specific ensemble of value. Based on your description (interfaces),
> > it seem that all equivalent need to be specified programmatically at
> > run time in a recurring manner. I doubt this constitutes a progress...
>
> Hm, all declarative RM constraints in any case must be implemented in an
> imperative language by some RDBMS, so from the implementation point of view
> it is the same. I am not saying that declarativity is bad. I am saying
> that declarative programming is not supported in the current version.
> Unfortunately it is not supported. But it can be supported in the future.

Yes. But it is declared once, and it is stored as metadata in a
consistent framework of definitions. The approach you are suggesting
requires ongoing effort. This is the soul of what constitutes a data
definition language.

> > > > Can you, for instance, apply an equality operator to state that 2 neurons are
> > > > equal?
> > > 1. yes,
> > How?
>
> they must override and implement System.Object.Equals();
> then you can compare two instances.

OK. Keep in mind that RM allows one to do that through computation of
addresses (through the intersect operator) without involving the data
itself; it then just reads the data that is the product of the
computation.

What about other arbitrary operators that can apply to neurons? How do
you apply them?

> > > 2. the models which I am using don't need this feature
> > So you are stating that the sample data you are using for testing
> > determines how sound an abstract model is?
>
> Sorry, I don't understand this question.

What I mean is that you seem to use predetermined sample test data to
build your application. RM allows one to handle randomly defined data.
That is what makes an abstract model, as opposed to a specific
implementation.

Meaning that you need to load all objects into RAM if you want to count
them? What if you have 3 trillion of them and just 1 GB of RAM?

> > > there are collections - collections are instances of objects too.
> > > You can create an instance, find an instance, destroy an instance.
> > > You can add an existing instance into a collection; one instance can be
> > > added to different collections.
> > >
> > > if you start a transaction and add some instance to some collection, then
> > > the collection is marked as changed by this transaction, and you will
> > > receive a new version of the collection instance. Collections store only
> > > pointers.
> > How about another user coming in? What version of the data will he/she get?
>
> It is a single-user desktop DB. In the far future I have plans to
> implement isolation.
>
> > > no data. objects store only methods. no data. attributes sometimes
> > No data? I will use "information" instead...
>
> OK, "no data" is a very bad definition. I want to say that in the common
> scenario, when you write a class in VB.NET, the class has non-static
> fields which contain data.

If I understand right, every class contains data and behavior?
Right... Keep in mind that RM implements unique physical data storage:
data is stored once and only once. Your approach imposes redundancy of
data in each class. For instance, if the value 3 is stored in several
classes, it would be stored only once in an RM system.

Cimode

Jul 5, 2006, 8:14:28 AM
Don't bother with Oracle and SQL Server. I am not worried about
functional limitations but about logical and abstract limitations. Doing
better than Oracle and SQL Server has been done several times and is not
that hard, considering how far they get away from RM.

OTOH, the logical and abstract limitations you have explained tend to
prove that your experimental attempt cannot constitute a logical
abstract model as RM is.

Still, I encourage you to pursue your effort, but in a manner knowledgeable
about RM concepts, which are light years from current
implementations. I suggest you read An Introduction to Database Systems
by C.J. Date. Knowing RM better (<> SQL) will help you make a better
implementation and gain time by not making or repeating mistakes.

Dmitry Shuklin

Jul 5, 2006, 9:48:35 AM
Hi Cimode

> How are they attached?
Via a configuration DB. It is like an RDB - it contains tables, rows, ...
but it is not an RDB. It is an RDB emulation on the OODB, so tables, rows
and columns have different behavior. For example, a row can be contained in
many tables at one time, so if you change a row field via one table it is
automatically changed in all tables which contain this row.
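The shared-row behavior described above falls out naturally when tables hold pointers to row objects rather than copies. A minimal sketch of that idea (Java for illustration; the actual configuration DB is C#/.NET and its classes are not shown in the thread):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A row is a mutable object; tables store references to it, not copies,
// so one row can belong to many tables and an update through any table
// is visible through all of them.
class Row {
    private final Map<String, Object> fields = new HashMap<>();
    void set(String column, Object value) { fields.put(column, value); }
    Object get(String column) { return fields.get(column); }
}

class Table {
    final String name;
    final List<Row> rows = new ArrayList<>();   // pointers, not copies
    Table(String name) { this.name = name; }
    void add(Row r) { rows.add(r); }
}
```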


> What principle regulates system stability?

transactions.


> Doesn't this approach increase resource consumption at linkage editing
> time (compile/link/run)?

maybe, maybe not - it depends on the point of view. I worry about run
time more than about dev time.


> Yes. But it is declared once and it is stored as metadata in a
> consistent framework of definitions.

Metadata must be processed at runtime in any case, so it is a question of
optimizations. Whether my runtime constraints can be faster than RM
metadata or not - I don't worry about this question now, because I have
more interesting problems and restrictions.


> This is the soul of what constitutes a data
> definition language.

DDL is a GREAT invention. I have plans to make a declarative subsystem for
my OODB, but I can't implement all its features in one day.


> Keep in mind that RM allows one to do that through computation of
> addresses (through the intersect operator) without involving the data
> itself; it then just reads the data that is the product of the
> computation.

Are you sure that RM allows this, and not just some concrete RDBMS
implementation? I have implemented some optimization techniques too.
This means nothing: somebody can optimize an OODB or an RDB better
than I can.


> What about other arbitrary operators that can apply to neurons? How do
> you apply them?

I have a couple of neuron and neural network models implemented, all
different. The OODB allows me to implement any neuron model inside class
methods.

Maybe a more interesting topic is the structure of the data model which
stores the attributes of persistent classes.


> What I mean is that you seem to use predetermined sample test data to
> build your application. RM allows one to handle randomly defined data.
> That is what makes an abstract model, as opposed to a specific
> implementation.

My DB also allows the use of many abstract models. For example, I
implemented a simple RDB-like database to store configuration metadata.


> > There are objects. No data. Objects have methods. Methods can change
> > object attributes. Instead of an insert you create a NEW object
> > instance; then you can add this instance into a number of collections.
> > Instead of an update you invoke some method on some object. All
> > this can be done in C# or VB.NET.
> Meaning that you need to load all objects into RAM if you want to count
> them? What if you have 3 trillion of them and just 1 GB of RAM?

No, it is not necessary to load the whole network into RAM. You just need
to load the one instance which holds the count attribute; that is all.


> If I understand right, every class contains data and behavior?

Yes. All classes are .NET classes. But none of the class fields are
serialized into the storage automatically. The developer should use the
CerebrumAPI to store data into the network storage. It is a limitation,
because I don't want to invent a new language - I want to stay compatible
with the .NET Framework.


> Right... Keep in mind that RM implements unique physical data storage:
> data is stored once and only once. Your approach imposes redundancy of
> data in each class. For instance, if the value 3 is stored in several
> classes, it would be stored only once in an RM system.

Hm, are you sure that an RDBMS really does so? And are you sure that this
theoretical construct is applicable to real-world computer systems? Maybe
I don't understand you.

If, for example, I have 4 instances which have different name attributes,
all equal to 'name', and I then change the attribute of the first
class to 'name1', must all 4 classes then change name? Of course not.
(Note: you can implement such functionality in the OODB if you need it.)
Maybe the first instance should just change its pointer from the interned
string 'name' to the interned string 'name1'. That is OK for me, but what
if I need the first scenario too?

WBR,
Dmitry

Dmitry Shuklin
Jul 5, 2006, 9:52:22 AM
Hi Cimode


> OTOH, the logical and abstract limitations you have explained tend to
> prove that your experimental attempt can not constitute a logical
> abstract model as RM is.

I have never discussed here the logical or abstract limitations of the
data model on which my OODB is based. I have only discussed the
limitations of the current version (by-design limitations), which is
available for download today.

Cimode
Jul 5, 2006, 10:12:03 AM
Therefore, you state that you are at the implementation level solely.
Keep in mind that RM is not implementation-level but an abstract
logical application of mathematics. If you state that RM is dead
because you built some implementation, you assume they are of a
similar nature (which obviously they are not).

I suggest you do some reading to help you...

Dmitry Shuklin
Jul 5, 2006, 10:36:06 AM
Hi Cimode

> Keep in mind that RM is not implementation level but abstract logical
> application of mathematics.

I know this ))

> If you state that RM is dead because you
> built some implementation, you assume they are of similar nature (which
> obviously they are not).

No, I state that RDBMSs will be dead in the future, replaced by a more
powerful implementation based on the network data model.

WBR,
Dmitry Shuklin, Ph.D

Cimode
Jul 5, 2006, 10:38:54 AM

Dmitry Shuklin wrote:
> Hi Cimode
>
> > How are they attached?
> Via configuration DB. It is like a RDB, it contains tables, rows, ...
> but it is not RDB. It is RDB emulation on OODB. So tables, rows and
> columns has different behavior. For example row can be contained in
> many tables at one time. So if you change row field via one table it
> automatically changed in all tables which contains this row.
You lost me. On the one hand you told me there's no concept of a
table, but now you use table concepts... Please clarify...

> > What principle regulates system stability?
>
> transactions.

A transaction is a package for encapsulating a set of operations, not
really a principle. By principle I meant the set of concepts that
guarantees, for instance, that the system is less hardware-dependent...

> > Don't this approach increase resource consumption at linkage editing
> > time. (compile/link/run?)
>
> may be, may be not. it is relative to point of view. i afraid about run
> time more then about devtime.

Maybe/maybe not? Doesn't this question deserve closer attention if you
think your system would be more performant? In most computing cycles
the primary resource consumer is not run time (execution time) but
compile and link-edit time.

If your primary worry is devtime, that is an additional reason to
spend it carefully, building on sound logical principles. Do not base
abstract reasoning on current implementation technologies (.NET). Their
semantics and current capabilities should not guide your reasoning; the
opposite should happen.

> > Yes. But it is declared once and it is stored as metadata in a
> > consistent framework of definitions.
> Metadata must be processed at runtime in any case. So it is question of

I have explained to you that metadata in RM would be treated at compile
time only. Run time decomposes into compile/link/execute. The method
you suggest requires definitions to be executed no matter what. In RM,
definitions and value and operator constraints are implemented at
compile time only.

> optimizations. My runtime constraints can be faster then RM metadata or
> not - i don't worry now about this question because have more
> interesting problems and restrictions.

Run time is not the same thing as execution time. I could imagine that
the execution system could be efficient, but from what you have
described, compile time and linkage editing should be relatively
significant.

> > This the soul of what constitutes a data
> > definition language.
>
> DDL is GREAT invention. I have plans to make declarative subsystem for
> my OODB but i can't implement all features in one day.

I understand.

> > Keep in mind that RM allows to do that through computation of
> > adresses (through intersect operator) without involving the data
> > itself, then it just reads the data that is a product of the
> > computation.
>
> Are you sure that RM allows, not an some concrete RDBMS implementation?

Yes, that's how RM deals with this issue. Unfortunately, there is no
RDBMS existing today.

> I have implemented some optimization techniques too. This mean nothing.
> Somebody can
> optimize OODB or RDB more then I.


> > What about other arbitrary operators that can apply to neurons? How do
> > you apply them?
>
> I have a couple of neuron and neuron network models implemented. All
> different. OODB allows to me implement any neuron model inside class
> methods.
>
> May be more interesting topic is a structure of data model which stores
> attributes of persistent classes.
>
>
> > What I mean is that you seem to use predetermined sample test data to
> > build your application. RM allows to handle randomly defined data.
> > That what makes an abstract model as opposed to a specific
> > implementation.
>
> My DB also allows use many abstract models. For example, i implemented
> simple RDB like database to store configuration metadata.

I believe you implemented a SQL-table-like DB.

> > > There are objects. No data. Objects has methods. Methods can change
> > > objects attributes. Instead of insert you should create NEW object
> > > instance. Then you can add this instance into number of collections.
> > > Instead of update you should invoke some method from some objects. All
> > > this can be done on C# or VB.NET
> > Meaning that you need to load all objects in RAM if you want to count
> > them? What if you have 3 trillion of them? and just 1Gb RAM?
>
> No it is not needed load all network into RAM. You just need to load
> one instance which holds count attribute. that is all.

So you store the count value?

> > If I understand right, all class contains data and behavior?
>
> Yes. All classes is a .NET classes. But none of classes fields are
> serialized into stroage automatically. Developer should use CerebrumAPI
> to store data into network storage. It is limitation because i don't
> want invent new language. I want to be compatible with .NET Framework
>
>
> > Right...Keep in mind that RM implements unique physical data storage.
> > Data is stored once and only once. Your approach imposes redundancy of
> > data in each class. For instance, if the value 3 is stored in several
> > classes, then it would be stored only once in an RM system.
>
> Hm, are you sure that RDBMS doing so? Hm are you sure that this
> theoretical construct is applicable to real world computer systems? May
> be i don't understand you.

Again, there is no such thing as an already-implemented RDBMS. The
description I provided is what a system should be able to do to be
called relational. There is no proof or reason that it would be
impossible to build one, and some attempts are currently progressing.
Check Dataphor for more info.

Cimode
Jul 5, 2006, 10:40:09 AM
So you are stating that something not existing yet is already dead. ;)

> WBR,
> Dmitry Shuklin, Ph.D

Ed Prochak
Jul 5, 2006, 12:29:03 PM

Dmitry Shuklin wrote:
> Hi,
>
> > Give just ONE example. I sincerely doubt there is anything you can do
> > in a network model DB that cannot be done at least as well in a
> > Relational model DB.
>
> Trees )) I think You understand what I mean. Of course on the same
> abstraction level as the relational model works. You can emulate trees
> on RMD. But it will cause more abstraction levels to appear.

Joe Celko has an approach for handling trees in SQL. It is more
difficult than in a network model, since elements in a tree form an
ordered set while the Relational model deals with unordered sets. But
if that's your only flaw in the Relational model, that's a pretty weak
argument.
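
For readers unfamiliar with it, the Celko technique referred to here is the nested-set model: each node stores a (lft, rgt) interval, and a subtree is every row whose interval nests inside its parent's, so trees stay queryable in plain SQL. A minimal sketch (illustrative schema and data, shown via Python's sqlite3):

```python
import sqlite3

# Celko-style nested sets: each node stores a (lft, rgt) interval;
# a node's subtree is every row whose lft falls inside its interval.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tree (name TEXT, lft INTEGER, rgt INTEGER)")
con.executemany("INSERT INTO tree VALUES (?, ?, ?)", [
    ("root", 1, 8),
    ("a",    2, 5),   # child of root
    ("a1",   3, 4),   # child of a
    ("b",    6, 7),   # child of root
])

def subtree(con, name):
    # Fetch all descendants of `name` (including itself) in one query,
    # with no recursion and no pointer walking.
    return [r[0] for r in con.execute(
        """SELECT t.name FROM tree t, tree p
           WHERE p.name = ? AND t.lft BETWEEN p.lft AND p.rgt
           ORDER BY t.lft""", (name,))]
```

So `subtree(con, "a")` returns `["a", "a1"]` from a single set-oriented query; the cost, as Ed notes, is that inserts must renumber intervals.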

>
> In fact i am interested in emulation of artificial neural network.
> Making ANN with SQL - ha ha ha.

I've never done neural nets, but I was told once that they are
implemented with a set of weighting tables. Maybe that is an old
approach. While that may be your primary purpose, your comments about
the Relational model being outdated were not limited to that specific
application area. So it again goes back to my comment about how you
present your argument.
>
> > Sorry, but all I see on that page is a couple claims, no supporting
> > data. I will not download some unknown executable. Make a case without
> > having us run your program for you.
>
> Sorry, i don't have any artiles on English describing my OODB research
> yet (((
> And even when you download zip you can find there only C# sources. no
> documentation (((
>
> I know, i know (((
>
> What differ my DB from the rest? :
>
> - one object can have a many ObjectIDs
> - one ObjectID can address many different object instances
> - multilevel undo/redo transactions are supported
>
> What restrictions current version has?
> - only single user mode.
> - only single thread.
>
>
> WBR,
> Dmitry

Good luck with your research.
Ed

Dmitry Shuklin
Jul 5, 2006, 12:30:33 PM
Hi Cimode

> > > How are they attached?
> > Via configuration DB. It is like a RDB, it contains tables, rows, ...
> > but it is not RDB. It is RDB emulation on OODB. So tables, rows and
> > columns has different behavior. For example row can be contained in
> > many tables at one time. So if you change row field via one table it
> > automatically changed in all tables which contains this row.
> You lost me. On one side, you told me there's no concept of table but
> now you use table concepts...Please clarify...

Yeah, there are no tables as a concept, and at the same time there are
"tables".

At the lowest logical level of the DB there is a network of nodes, an
or-graph. There are no tables, rows, or objects; only nodes and links.
All links are unidirectional. Each link has a "color", or identifier.
Each node can have as many links as needed, but each node can have only
one link of a given color: there cannot be two or more links with the
same identifier directed from one node. There can be many links with
the same color directed to one node, and there can be many links with
different identifiers directed from one node to another. From a given
node you can find other nodes only if you know the identifier of the
link to those nodes. Each node knows about and owns the links directed
from it; a node does not know about links directed at it from other
nodes.

Each node can have a .NET object instance attached. A root node exists
in the database; the root is the beginning of the database. Each
persistent object can discover which node it is attached to, and the
root node.

Links are just pointers, and everything works very fast.
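
The node-and-colored-link rules above can be sketched in a few lines. This is a hypothetical Python rendering for illustration only, not the actual Cerebrum API (all names invented):

```python
class Node:
    """A node owns its outgoing links: a mapping color -> target node.
    A dict enforces the rule that a node has at most one outgoing link
    of a given color, while any number of links (of any colors) may
    point *at* it from elsewhere; the node never sees those."""
    def __init__(self, payload=None):
        self.payload = payload      # e.g. an attached object instance
        self.links = {}             # color (identifier) -> Node

    def link(self, color, target):
        self.links[color] = target  # re-linking a color replaces the old link

    def follow(self, color):
        # Navigation requires knowing the link's identifier (color).
        return self.links.get(color)

root = Node()
row = Node()
root.link("first-row", row)
row.link("name", Node(payload="Dmitry"))
```

Following an unknown color yields nothing, which is exactly the "you can only find nodes whose link identifier you know" rule.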

That's all about Cerebrum.Runtime.dll.

Cerebrum.Integrator.dll also has some meta-information implemented.
Each node (object) can at the same time be considered a collection of
related objects. If we take two nodes and call one the 'columns
collection' and the other the 'rows collection', the result is a
'table' of rows. A table is a node related to two other nodes, columns
and rows. Each row is a node too, and as a node each row has links to
some related nodes. The columns collection contains links to other
nodes; they are AttributeDescriptors. An AttributeDescriptor knows the
identifier (color) of the corresponding attribute. So when we have a
rows collection and a columns collection, we can navigate to a row's
attribute instance. That is the logical model of a 'table'. There is
also a table of tables.
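
As a toy illustration of this table emulation (invented names, not the Cerebrum.Integrator API): a row shared by two tables is a single node, so a field changed through one table is seen through every table that contains the row:

```python
# A row is a plain dict of attribute-color -> value; a "table" is a
# pair (columns, rows), where columns names the colors it exposes.
def make_table(columns, rows):
    return {"columns": columns, "rows": rows}

def read(table, row_index, column):
    # A table exposes only the attribute colors its columns list names.
    row = table["rows"][row_index]
    return row.get(column) if column in table["columns"] else None

shared_row = {"name": "n1"}          # one row node, linked into two tables
t1 = make_table(["name", "value"], [shared_row])
t2 = make_table(["name"], [shared_row])

shared_row["name"] = "n2"            # change the field through the shared node
```

After the change, both `t1` and `t2` report the new name, because there is only one underlying row node.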


> I was refering to principle as what set of concepts permit to guarantee
> that the system is for instance less hardware dependent..

I think it is absolutely hardware-independent at the logical
abstraction level. As an implementation it depends on Win32 and .NET.
The same DB storage file works fine whether compiled against .NET 1.1
or .NET 2.0, so the DB storage file format is independent of the .NET
version and of the version of the user DLLs where the methods of user
objects are implemented. That is because the low logical level is a
unidirectional graph.

> > > Don't this approach increase resource consumption at linkage editing
> > > time. (compile/link/run?)
> >
> > may be, may be not. it is relative to point of view. i afraid about run
> > time more then about devtime.
> Maybe/maybe not? Does not this question require more particular
> attention if you think that a system would be more performant.... In
> most computing cycles the primary ressource consumer is not run time
> (execution time) but more compile and link edit time.

Hmm, I think that run time and development time (not compile and link
time) are what matter.

> If your primary worry is devtime, it is an additional reason to spend
> it carefully by using sound logical principles on which to build on.

yes.

> Do not base abstract reasonning on current implementations
> technologies(DOT NET).

It is not a logical restriction; it is by design. Moreover, the
Cerebrum kernel is written in C (not C++). The .NET-to-kernel glue is
written in MC++. If I need to, I can port the kernel to another
platform.

> By their semantics and current capabilities,
> they should not guide your reasonning. The opposite should happen.

Yes, I understand.

> I have explained to you that metadata in RM would be treated at compile
> time only.
> run time is decomposed into compile/link/execute. The method you
> suggest requires definitions to be executed no matter what. In RM,
> definitions, value and operator constraints are implemented at compile
> time only.

I think that RM is a theoretical concept and can't have a compile
time, can it? I am talking about real compile time on a real PC.

>There are unfortunately no RDBMS existing today

Too bad for RM ))


> > My DB also allows use many abstract models. For example, i implemented
> > simple RDB like database to store configuration metadata.
> I believe you implemented a SQL Table like DB.

Sorry for my bad English. I meant to say a simple RDB-like database: a
database which is similar but not equal to an RDB. I talked about this
earlier when describing how a table is implemented as a sub-graph.

> > No it is not needed load all network into RAM. You just need to load
> > one instance which holds count attribute. that is all.
> So you store the count value? .

Yes. The collection stores the value inside its own instance. It is
one of the optimizations. The biggest optimization I made is O(1)
lookup when searching for an object instance by its ID.
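
Both optimizations amount to keeping a hash index and a counter alongside the collection, so neither counting nor lookup requires loading the network. A hypothetical Python sketch (not the Cerebrum API):

```python
class Collection:
    """Keeps its own element count plus an ID -> instance index, so
    counting and lookup never require scanning the whole network."""
    def __init__(self):
        self._by_id = {}   # hash index: O(1) average lookup by ID
        self.count = 0     # maintained incrementally, persisted with the node

    def add(self, oid, instance):
        if oid not in self._by_id:
            self.count += 1
        self._by_id[oid] = instance

    def find(self, oid):
        return self._by_id.get(oid)

c = Collection()
c.add(1, "neuron-a")
c.add(2, "neuron-b")
```

Reading `c.count` answers "how many?" from one instance, which is the point of Dmitry's stored-count answer above.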


> Again there's no such thing as an RDBMS already implemented.

OK, then when I say 'RDBMS', read 'table-based DBMS'. For me, as long
as a true RDBMS doesn't exist yet, it doesn't matter whether MS SQL is
truly an RDBMS or not. It is just called one.

WBR,
Dmitry

Cimode
Jul 5, 2006, 1:29:49 PM
A few additional comments, and I wish you good luck with your
implementation efforts...

True, but the nature of the abstract concept of a domain makes it
possible to segregate data by intersection at compile time...

> >There are unfortunately no RDBMS existing today
>
> Too bad for RM ))

Too bad for all of us... If RM were implemented, we would gain orders
of magnitude in performance compared to current SQL DBMSs.

>
> > > My DB also allows use many abstract models. For example, i implemented
> > > simple RDB like database to store configuration metadata.
> > I believe you implemented a SQL Table like DB.
>
> Sorry for bad english. I want to say simple RDB-like database. A
> database which similar but not equal to RDB. I talk about this earlier
> when describe how table is implemented as sub-graph
>
> > > No it is not needed load all network into RAM. You just need to load
> > > one instance which holds count attribute. that is all.
> > So you store the count value? .
>
> Yes. Collection stores value inside own instance. It is one of
> optimization. The most optimization i done - O(1) when searching object
> instance by it ID.

But do you handle NULL values? How do you count all instances that
satisfy a specific condition, if the count is statically stored at a
point in time?

Dmitry Shuklin
Jul 5, 2006, 2:05:56 PM
Hi Cimode,


> But do you treat NULL values?

There are no NULL values as a type or as anything existing at the
logical level, but the concept exists. For example, assume a collection
'table' exists with two columns, 'name' and 'value', and that this
collection contains one object instance (row). This object (node) has
one link to another node colored 'name' and has no link colored
'value'. So this row has a value for the 'name' column and NULL for
'value'.

At the implementation level such cases are handled by using
System.DBNull.Value.

This row can also have a link colored 'value2' which is not visible
via the collection 'table' but is visible via the collection 'table2'.
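
A minimal sketch of this NULL convention (Python, with invented names; the real implementation returns System.DBNull.Value rather than a Python sentinel): NULL is simply the absence of a link of the requested color, and each table's columns decide which colors it can even ask about.

```python
NULL = object()  # stand-in for System.DBNull.Value

row_links = {"name": "n1", "value2": 42}   # the row's outgoing links by color

def field(row, column):
    # A missing link of the requested color reads as NULL; nothing
    # resembling a "null node" is ever stored in the graph itself.
    return row.get(column, NULL)

table_columns  = ["name", "value"]    # 'value2' is invisible via this table
table2_columns = ["name", "value2"]   # ...but visible via this one
```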


> How do you count all instances that
> satisfy a specific condition? If it is statically stored in a point in
> time?

At the logical level, the same way an RDB does it. In the
implementation (current version), by full scan only. I have plans to
implement indexes; it can be done the same way current RDBMSs
implement them.

Objective joins and views are also supported at the logical level, but
are implemented very dirtily so far (the DB has issues with garbage
collection of circularly referenced nodes).

WBR,
Dmitry

Dmitry Shuklin
Jul 5, 2006, 3:42:30 PM
Hi Ed,

> Joe Celko has an approach for handling trees in SQL. It is more
> difficult than in a network model since elements in a tree form an
> ordered set, while the Relational model deals with unordered sets. But
> if that's your only flaw for Relational model, that's a pretty weak
> arguement.

Yeah, this is just ONE known example of a flaw. In my research, trees
don't have ordering by default; ordering should be implemented by the
developer. That is very easy, because each link from a node can be
colored with an identifier.

> I've never done Nueral nets, but I was told once it is implemented with
> a set of weighting tables. Maybe that is an old approach.

That is a normal, but not the only possible, approach to ANN modeling.
I am using a different approach: object-oriented.

> Good luck with your research.

Thank You Ed

WBR,
Dmitry

Josip Almasi
Jul 6, 2006, 6:51:31 AM
Dmitry Shuklin wrote:
>
> links are just pointers. and all works very fast.

But how about weighted graphs/synapses?
Pointers/references don't have weights...

Regards...

Dmitry Shuklin
Jul 6, 2006, 8:52:15 AM
Hi Josip,

> But how about weighted graphs/synapses?
> Pointers/references don't have weights...

You are absolutely right. Good question.

I have described the logical model of the OODB. It is not a neural
model; a neural model has to be emulated, and of course that can be
done in many ways.

If I need to emulate a whole biological neuron, I prefer this: split
the neuron into atomic pieces (synapses, axons, dendrites, ...); which
pieces depends on the model used. Then make a class for each piece,
and assemble the pieces into one neuron. So weight is an attribute of
the Synapse object, not an attribute of a link. Synapses are linked to
a neuron, and since I have the restriction that there cannot be many
links of one color from one node to another, I use inverted
identification: synapses are linked not to the soma but to a dendrite
or an axon. A dendrite or an axon is just another node, but from every
axon all links are colored by the target synapse ID. Since each
synapse must be unique, it must also have a unique OID. And since
synapses are not connected to the soma directly, I can scan only the
input synapses (dendrites) or only the output ones (axon).

There is a performance issue with this behavior, so when I need to
emulate something simpler I use another way. The OODB supports solid
serialization, so it is possible to make a structure such as

struct MyLink
{
    NativeHandle linkedNeuronID;
    float Weight;
    string SomethingUseful;
}

and have a collection of these structures as a field of a SomeNeuron
class. Then I implement the IPersistent interface and save or load the
collection when the instance is serialized.

If a neuron has about 1,000 links to other neurons I would suggest the
second method; if a neuron has 100,000 or more, the first. This is
because solid serialization must serialize all links at once, and all
of them require RAM when the neuron is instantiated. The built-in link
collections, by contrast, can load only the needed part of the
collection.
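
The tradeoff described above can be sketched as follows; the backing store and class names are invented for illustration, not Cerebrum's actual serialization API:

```python
# Fake backing store: neuron id -> list of (target_id, weight) links.
storage = {"n1": [("t%d" % i, 0.5) for i in range(100_000)]}

class EagerNeuron:
    # "Solid serialization": the whole link collection is materialized
    # in RAM the moment the neuron is instantiated.
    def __init__(self, nid):
        self.links = list(storage[nid])

class LazyNeuron:
    # Built-in link-collection style: links are fetched on demand, so
    # RAM cost tracks only the part of the collection you touch.
    def __init__(self, nid):
        self.nid = nid

    def links(self, start=0, limit=None):
        data = storage[self.nid]
        stop = len(data) if limit is None else start + limit
        for i in range(start, stop):
            yield data[i]
```

For a 1,000-link neuron the eager form is simpler and fast enough; for 100,000+ links the lazy form avoids paying for links you never read, matching the rule of thumb above.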

Cimode
Jul 6, 2006, 9:04:18 AM
Sorry (typo), I meant: how do you handle missing data?

Dmitry Shuklin
Jul 6, 2006, 9:45:26 AM
Hi Cimode,

> Sorry (Typo) I meant how do you handle missing data?

Missing where? If you mean noise in the input patterns, that is
completely independent of the OODB and is handled at the neural-model
level. I have problems with noise in patterns, but who doesn't? )))

WBR,
Dmitry

Cimode
Jul 6, 2006, 10:12:21 AM

No. I meant: how do you deal with missing data? What if, for instance,
you have 3 trillion neurons that have a property stored and 1 trillion
neurons that do not have any value stored for that property (e.g.
wavelength)? How do you handle counting them? How do you count the
number of neurons when applying a wavelength search criterion? What
kind of logic do you apply, 2VL or 3VL?

Josip Almasi
Jul 6, 2006, 10:14:07 AM
Dmitry Shuklin wrote:
> Hi Josip,
>
>>But how about weighted graphs/synapses?
>>Pointers/references don't have weights...
>
> You are absolutelly right. Good question.
>
> I have described logical model of OODB. It is not neural model. Neural
> model should be emulated. Of course it can be done by many ways.
>
> If needed to emulate whole bilogical neuron I prefer this:
> Split one neuron into atomic pieces. Synapses, axons, dendrites, ...

... sure, I usually do it somewhat like this:)

(In fact I've started writing an essay about that but didn't get much
interest, http://www.vrspace.org/docs/zen_of_self-adaptive_code-jitsu.html)

And I use soft references, so I don't need all objects in memory but
keep some MRU-cached, etc. However, when working with, say, 10,000,000
'neurons', I don't even have enough RAM for the references.
It occurred to me that such a reference 'weight' would be quite a
useful hint for deciding what to keep in memory.
In an OO model such a 'weight' could be, e.g., a simple integer
referrer count divided by class-tree depth. Or, more similar to a
synapse weight, an object access count (handle with care).
Or something; I'm improvising :)
FTR simple MRU gives me about 97% cache hit rate.
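
A minimal sketch of this caching scheme (Python OrderedDict; the 'weight' heuristic suggested above could replace the pure least-recently-used eviction choice):

```python
from collections import OrderedDict

class MRUCache:
    """Keep the most recently used objects in memory; evict the least
    recently used when capacity is exceeded. A 'weight' heuristic
    (access count, referrer count, ...) could replace the LRU choice."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key, load):
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)         # mark as most recently used
        else:
            self.misses += 1
            self.data[key] = load(key)         # fault the object in
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)  # evict least recently used
        return self.data[key]

cache = MRUCache(capacity=2)
for k in ["a", "b", "a", "c"]:        # "a" is re-used; "b" gets evicted
    cache.get(k, load=lambda k: k.upper())
```

Tracking `hits`/`misses` is how one would measure the kind of 97% hit rate reported here.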
Well, my 2c, keep up the good work.

Regards...

Dmitry Shuklin
Jul 6, 2006, 12:02:09 PM
Hi Cimode,

> No. I meant how do you deal with missing data? What for instance if
> you have 3 trillions neurons that have a property stored and 1 trillion
> neurons that do not have any value stored for that property (ex:
> wavelength). How do you handle their counting? How do you count
> number of neurons when applying wavelength search criteria? What kind
> of logic do you apply? 2VL or 3VL?

I understand. This is a completely non-neural problem, but for
business applications it is a very important question: select count(*)
or select count(field).

I think the SQL approach is right: for three rows where one value is
missing, count(*) returns 3 and count(field) returns 2. But, as I said
before, I don't support declarative queries, so this question is
completely open and should be resolved by the developer as he wishes.
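
The SQL convention being endorsed here, count(*) counts rows while count(field) skips missing values, can be pinned down in a few lines (Python sketch, None standing in for NULL):

```python
rows = [{"f": 1}, {"f": None}, {"f": 3}]   # one row's value is missing

def count_star(rows):
    # SQL count(*): every row counts, missing values or not.
    return len(rows)

def count_field(rows, field):
    # SQL count(field): NULL (here None or absent) values are excluded.
    return sum(1 for r in rows if r.get(field) is not None)
```

This is 2VL at the counting level: a missing value never contributes to count(field), but the row still exists for count(*).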

Cimode
Jul 6, 2006, 12:07:44 PM

Dmitry Shuklin wrote:
> Hi Cimode,
>
> > No. I meant how do you deal with missing data? What for instance if
> > you have 3 trillions neurons that have a property stored and 1 trillion
> > neurons that do not have any value stored for that property (ex:
> > wavelength). How do you handle their counting? How do you count
> > number of neurons when applying wavelength search criteria? What kind
> > of logic do you apply? 2VL or 3VL?
>
> I understand. This is completly non neural problem. But for business
> aplications it is very important question. select count(*) or select
> count (field)
You mean that you don't need to count neurons for a scientific
purpose? How will you record the scientific observations you make? How
are you going to do statistical analysis? For a system built for a
scientific purpose, that seems strange to me...

> I think that SQL implementation is well. count (*) returns 3 and count
> (field) should return 2. But, as I said before, I dont support
> declarative queries, so this question is completly open and should be
> resolved by developer, as he wish.

Understood.

Dmitry Shuklin
Jul 6, 2006, 12:15:13 PM
Hi Josip,

> (In fact I've started writing an essay about that but didn't get much
> interest, http://www.vrspace.org/docs/zen_of_self-adaptive_code-jitsu.html)

It is very interesting. THX!

> And I use soft references so I don't need all objects in memory but keep
> some MRU cached etc etc.

Exactly!

> However, when working with say 10000000 'neurons', I don't even have
> enough RAM for references.

Hmm, I am swapping references too. So 1 MB is enough to run a 1 GB
network, with ... modest performance )))


> FTR simple MRU gives me about 97% cache hit rate.

Yes.

What I see in my experiments is very strange to me. When I disable the
MRU cache and load the whole network into Windows virtual memory (1 GB
can be loaded), it is many times worse than when I limit the cache to
1 MB and use my own MRU-based swapping (less than a 1% cache hit rate
with 1 MB and a full scan). Windows OS swapping kills the performance
completely: it takes hours to execute the network when using Windows
virtual memory, and minutes to scan 1 GB on the hard drive with 1 MB
of RAM.

WBR,
Dmitry

Dmitry Shuklin
Jul 6, 2006, 12:34:35 PM
Hi Cimode,

> You mean that you don't need to count neurons in a scientific purpose?

Yes, I don't need to count neurons or search for them with something
like: select neurons where dendrite->current_value < 0.5 and
axon->synapse->weight < 0.5. Hmm, I don't need it, but maybe someone
does, so I tried to support such neural models as well, as long as
they don't hurt my own models.

Why? My network is like a von Neumann cellular automaton. Each neuron
knows only about a few (< 10000) nearest neurons, and each neuron can
communicate (send/receive spikes or messages) only with the neurons it
is connected to. That is much closer to a real biological neural
network, I think.

> How do you record scientfic observations you will make. How are you
> going to do statistical analysis? For a system built for scientific
> purpose, it seems strange to me...

I don't understand. The most powerful neural network I have can parse
Russian texts, and it is like a standard stack grammar (almost
equivalent to a stack grammar).

B Faux
Jul 6, 2006, 4:12:43 PM
Dmitry;

Save yourself some time (it may be too late): take a look at
www.intersystems.com. The "Cache" DB is based on the old MUMPS DBMS,
which was supposedly a 'node-based' data storage and retrieval system
first developed for medical applications (like neural networks?).

As Mr. Badour has said elsewhere in this thread (more than once), you
may be "plowing old ground." Cache already supports everything you have
listed that you do, but it also has multiple users, multiple threads,
transaction triggers, SQL query support, and lots more; all without
tables, but not compliant with strict RM rules (but what is?)

BFaux


"Dmitry Shuklin" <shu...@bk.ru> wrote in message
news:1152128550.2...@p79g2000cwp.googlegroups.com...

Ed Prochak
Jul 6, 2006, 5:10:10 PM

Dmitry Shuklin wrote:
> Hi Cimode
>
> > > > How are they attached?
> > > Via configuration DB. It is like a RDB, it contains tables, rows, ...
> > > but it is not RDB. It is RDB emulation on OODB. So tables, rows and
> > > columns has different behavior. For example row can be contained in
> > > many tables at one time. So if you change row field via one table it
> > > automatically changed in all tables which contains this row.
> > You lost me. On one side, you told me there's no concept of table but
> > now you use table concepts...Please clarify...
>
> Yea, there are no tables as concept and at the same time there are
> "tables".
>
> On the most low logic level of DB the network of nodes or or-graph is
> exists. There are no tables, rows, objects, ... only nodes and links.
> All links are single directional. Each link has a "color" or
> identifier. Each node can have as many links as needed. But each node
> can have only one link with one color. There can't be two or more links
> with the same identifier directed from one node. There can be many
> links with the same color directed to the one node. There can be many
> links with different identifier directed from one node to another. From
> some node you can found other nodes only when you know identifier for
> link to these nodes. Each node knows about and owns links dercted from
> this node. Node doesn't know about links directed from another nodes.

This seems like a classic network structure (though I'm not sure it is
a classic network model): every node is essentially independent and
apparently "freeform" (knowing the structure of the parent tells you
little or nothing about a child node).

Problems I've seen with network databases: sometimes there is no way
to get directly to a given bit of data; you have to walk the network
instead.

Links are fast for access, but updates can be a heavy operation,
changing LOTS of pointers. A consequence of this is error recovery: an
update that is only partially completed when a system crash occurs
(power still gets lost even these days) can wreck the DB. Some DBs have
functions to "rebuild the links"; this can make crash recovery very
time-consuming and error-prone.

>
> Each node can have a .NET object instance attached. In the database the
> root node is exists. the root is the beginning of the database. Each
> persistent object can discover what node is attached to and root node.
>
> links are just pointers. and all works very fast.

Maintaining pointers is the Achilles heel of the network model.

[]


> > > No it is not needed load all network into RAM. You just need to load
> > > one instance which holds count attribute. that is all.
> > So you store the count value? .
>
> Yes. Collection stores value inside own instance. It is one of
> optimization. The most optimization i done - O(1) when searching object
> instance by it ID.

The ID or the pointer value? Do you expose the internal link values to
the application? (Bad idea, IMHO.)

>
>
> > Again there's no such thing as an RDBMS already implemented.
>
> Ok then when i say 'RDBMS' - read 'table-based DBMS'. For me while it
> don't exists yet it is no matter if MS SQL truely RDBMS or not. It just
> called RDBMS.
>
> WBR,
> Dmitry

Have a good day.
Ed

Bob Badour
Jul 6, 2006, 6:19:48 PM
B Faux wrote:

> Dmitry;
>
> Save yourself some time (may be too late) - take a look at
> www.intersystems.com the "cache" DB is based on the old MUMPS DBMS which was
> supposedly a 'node-based' data storage and retrieval system first developed
> for medical applications (like neural networks?)
>
> As Mr. Badour has said elsewhere in this thread (more than once) - you may
> be "plowing old ground." Cache already supports everything you have listed
> that you do, but it also has multiple users, multiple threads, transaction
> triggers, SQL query support, and lots more; all without tables - but not
> compliant with strict RM rules (but what is?)
>
> BFaux

Are you nuts? The RM doesn't have rules for network model dbmses. That's
what the network model is for.

Dmitry Shuklin

Jul 7, 2006, 6:12:43 AM
Hi Faux,

> Cache already supports everything you have listed

I know about MUMPS. It doesn't support everything. For example, it doesn't
support an active object server with .NET on its side. I want to write methods
for objects in C# and execute them on the server side. Also, I don't like
MUMPS. I just don't like it, so I don't want to use it. In any case it is a
powerful technology; you are absolutely right. I think it is better to use
Cache than SQL for neural network models. But in that case plain C is better still )))

WBR,
Dmitry

Dmitry Shuklin

Jul 7, 2006, 6:24:40 AM
Hello Ed,


> Seems like a classic network structure (I'm not sure it is a classic
> network model). The fact that every node is essentially independent and
> apparently "freeform" (knowing the structure of the parent tells you
> little or nothing about a child node).

Yes.

> Problems I've seen with network databases:
> sometimes there is no way to get directly to a given bit of data. You
> have to walk the network instead.

It is an illusion that an RDB is better in such cases. Let's compare RDB and
NDB.
For example, we need to find some instance attribute by its id.name.

From an abstract logical point of view:

RDB: navigate to DB, navigate to Table, navigate to Row, navigate to
Field, get Value
NDB: navigate to DB, navigate to Node, navigate to Field, get Value

NDB is "faster" ))

From an implementation point of view:

RDB: navigate to DB, navigate to Index, navigate to Page, find Row,
find Value.
NDB: navigate to DB, navigate to Index, navigate to Page, find Node,
find Value.

The same.
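Dmitry's step-for-step comparison can be made concrete. In the following toy sketch (illustrative Python only; the page/slot layout and keys are invented, not any real engine's), both lookups reduce to an index probe followed by a page fetch:

```python
# Toy storage layer shared by both models: an index maps a key to a
# (page, slot) pair, and the value is fetched from that page.
pages = {0: {"slot0": ("color", "grey")}}

# RDB view: index on the table row's primary key.
rdb_index = {("homes", 42): (0, "slot0")}

# NDB view: index on the node's object ID.
ndb_index = {"node-42": (0, "slot0")}

def fetch(index, key):
    page_no, slot = index[key]           # navigate to Index
    field, value = pages[page_no][slot]  # navigate to Page, find record
    return value                         # get Value

# Both lookups go index -> page -> record: the same number of steps.
assert fetch(rdb_index, ("homes", 42)) == "grey"
assert fetch(ndb_index, "node-42") == "grey"
```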

> Links are fast for access, but updates can be a heavy operation,
> changing LOTS of pointers.

I don't see this. We can use the same optimization techniques as current
RDBMSs use.
The logical abstraction level can be independent of the implementation level.
Indexes are absent from the RM too, and so what? It is used well ))

The performance will be equal. Just the data model will differ. And some
additional sugar will exist in the ODB.

> A consequence of this is error recovery. An
> update that is only partially completed when a system crash occurs
> (power still gets lost even these days) can wreck the DB. Some DB have
> functions to "rebuild the links". This can make crash recovery very
> time consuming and error prone.

Who prohibits us from using the same transaction mechanism as current RDBs use?
We can split pages and commit changes only after all transactions are
completed. From this point of view RDB and NDB are the same.


> ID or pointer value? Do you expose the internal link values to the
> application?

I use soft pointers; they are indexed IDs. The developer must define an
instance's ID himself, so he can define equal IDs for different objects
and make object JOINs just as he can make JOINs in an RDB.
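A minimal sketch of these "soft pointers": developer-assigned IDs resolved through an index, with matching ID values enabling a join-like lookup. The IDs and fields below are hypothetical (Python used purely for illustration, not Cerebrum's actual API):

```python
# "Soft pointers": nodes refer to each other by developer-assigned IDs,
# resolved through an index, not by raw memory addresses.
nodes = {
    "city:1": {"name": "Springfield"},
    "home:1": {"city_id": "city:1", "color": "grey"},
    "home:2": {"city_id": "city:1", "color": "white"},
}

def deref(soft_ptr):
    # O(1) lookup of an instance by its ID, as claimed above
    return nodes[soft_ptr]

# An "objective JOIN": matching on the shared ID value, much as a
# foreign key match would work in an RDB.
grey_homes = [
    h for h in nodes.values()
    if h.get("color") == "grey" and deref(h["city_id"])["name"] == "Springfield"
]
assert len(grey_homes) == 1
```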


WBR,
Dmitry

Josip Almasi

Jul 7, 2006, 9:53:48 AM
Dmitry Shuklin wrote:
>
> The experimental result is very strange to me. When I disable
> MRU and load the whole network into Windows virtual memory (1 GB can be
> loaded), it is many times worse than when I limit the cache to 1 MB
> and use my own MRU-based swapping (less than 1% cache hit rate with
> 1 MB and a full scan). Windows OS swapping kills the performance
> completely. It takes hours to execute the network when using Windows OS
> virtual memory, and minutes with 1 MB of RAM to scan 1 GB on the hard drive.

Exactly. I've done it with Java and got much the same result: about 100
times slower when Windows swaps.
I don't get it... I use a simple MRU and get a 97% hit rate. If windoze
(the JVM?) used the same simple MRU for pages it should get at least some hits...
seems they use some more advanced caching techniques;)
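The recency-based cache Josip and Dmitry describe (they say MRU; evicting the *least* recently used entry is usually called LRU) fits in a few lines. An illustrative sketch with hypothetical page numbers, not either poster's actual code:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny recency-based page cache: on a miss past capacity, evict
    the least recently used page. Tracks the hit rate that the 97%
    figure above refers to."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()
        self.hits = self.misses = 0

    def get(self, page_no, load_page):
        if page_no in self.pages:
            self.hits += 1
            self.pages.move_to_end(page_no)      # now most recently used
        else:
            self.misses += 1
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)   # evict least recently used
            self.pages[page_no] = load_page(page_no)
        return self.pages[page_no]

cache = LRUCache(capacity=2)
for page in [1, 2, 1, 1, 3, 1]:          # a skewed access pattern
    cache.get(page, lambda n: "data-%d" % n)
assert (cache.hits, cache.misses) == (3, 3)
```

With a skewed access pattern like a neural network's working set, even a tiny cache like this gets a high hit rate, which is the effect both posters report.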

Regards...

Josip Almasi

Jul 7, 2006, 10:11:01 AM
Ed Prochak wrote:
>
> Problems I've seen with network databases:
> sometimes there is no way to get directly to a given bit of data. You
> have to walk the network instead.

But note that things differ in _object_ databases - you may have a
repository of all object id's.

> Links are fast for access, but updates can be a heavy operation,
> changing LOTS of pointers. A consequence of this is error recovery. An
> update that is only partially completed when a system crash occurs
> (power still gets lost even these days) can wreck the DB. Some DB have
> functions to "rebuild the links". This can make crash recovery very
> time consuming and error prone.

Well, I haven't really worked with OODBs, only with OR mappers. And I found
that if I keep polymorphism, I get a more error-resistant DB. Although I
need to update more tables per object than in the usual relational model,
these are all cheap atomic operations based on unique IDs.

You are right in general of course. But object model/db is not a general
network.

Regards...

Bob Badour

Jul 7, 2006, 10:27:06 AM
Josip Almasi wrote:

> Ed Prochak wrote:
>
> You are right in general of course. But object model/db is not a general
> network.

Yeah, it's even worse.

Dmitry Shuklin

Jul 7, 2006, 4:32:53 PM
Hi Bob,

Those are just words.

WBR,
Dmitry

Ed Prochak

Jul 7, 2006, 5:04:56 PM
Dmitry Shuklin wrote:
> Hello Ed,
>
>
> > Seems like a classic network structure (I'm not sure it is a classic
> > network model). The fact that every node is essentially independent and
> > apparently "freeform" (knowing the structure of the parent tells you
> > little or nothing about a child node).
>
> Yes.
>
> > Problems I've seen with network databases:
> > sometimes there is no way to get directly to a given bit of data. You
> > have to walk the network instead.
>
> It is illusion that RDB is better in such cases. Lets compare RDB &
> NDB.
> For example we need to find some instance attribute by its id.name.
>
> From an abstract logical point of view:
>
> RDB: navigate to DB, navigate to Table, navigate to Row, navigate to
> Field, get Value
> NDB: navigate to DB, navigate to Node, navigate to Field, get Value
>
> NDB is "faster" ))

ASSUMING you can get directly to the desired node. That's a big
assumption. More likely it is:
NDB: navigate to DB, navigate to root Node, navigate to branch Node,
navigate to target Node, navigate to Field, get Value

Or do you have one grandparent node that points to EVERY node in the
DB?

>
> From an implementation point of view:
>
> RDB: navigate to DB, navigate to Index, navigate to Page, find Row,
> find Value.
> NDB: navigate to DB, navigate to Index, navigate to Page, find Node,
> find Value
>

> The same.

There should be no indices in a network model, only pointers. This is a
minor point until you include the fact that not all nodes are directly
accessible.
So it is more like:
NDB: navigate to DB, find pointer to root node, navigate to Page,
navigate to root Node,
[find pointer to branch Node, navigate to Page, navigate to branch
Node,]
find pointer to target Node, navigate to Page, navigate to target Node,
find value.


>
> > Links are fast for access, but updates can be a heavy operation,
> > changing LOTS of pointers.
>
> I don't see this. We can use the same optimization technique as current
> RDBMS uses.

Consider a "popular" child node, that is a node that has links from
MANY other nodes pointing to it. Try deleting it. You now have to visit
every parent node and null out the pointer in each.
In a relational DB this is seldom a problem. Deleting a child row is a
totally separate operation.

Now consider a parent node. Try deleting it. How do you handle child
nodes? Do they just float around waiting for garbage collection if that
was the last link to them?
In a relational DB, either: the delete is not allowed due to child
(Foreign key) constraints
or the delete cascades down to the child rows deleting them as well.
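Ed's contrast can be sketched concretely: in the network model a delete means pointer surgery across every referrer, while relational rows reference by value, so the delete itself is local (a real RDBMS would additionally forbid or cascade it via foreign key constraints). Illustrative Python with invented node names:

```python
# Network model: parents hold direct pointers, so deleting a "popular"
# child means visiting every referrer and nulling its pointer.
net_nodes = {
    "child":   {},
    "parent1": {"link": "child"},
    "parent2": {"link": "child"},
}

def net_delete(node_id):
    del net_nodes[node_id]
    for node in net_nodes.values():        # scan every remaining node
        if node.get("link") == node_id:
            node["link"] = None            # pointer surgery per referrer

net_delete("child")
assert net_nodes["parent1"]["link"] is None
assert net_nodes["parent2"]["link"] is None

# Relational model: rows reference by value, so the delete is one local
# operation; a later query simply finds no match.
rel_children = {42: {"name": "child"}}
rel_parents = [{"child_id": 42}, {"child_id": 42}]
del rel_children[42]
dangling = [p for p in rel_parents if p["child_id"] not in rel_children]
assert len(dangling) == 2   # detectable by a query, not by pointer surgery
```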

> Logic abstraction level can be independent from implementation level.
> Indexes are absent in RM too, and what? It is used well ))

Network model DBMSs that I have seen typically don't need indices, since
the connections are all by pointers. However, an ad hoc query may require
making a JOIN where no link exists, which forces a linear search. Maybe
this is the case where you are using an index. So the network
implementation is no better, but no worse, than the relational
implementation.

>
> The performance will be equal. Just data model will differ. And some
> additional sugar in ODB will exists.

But I thought your primary claim of advantage for the network model was
performance. IMHO, a network DB outperforms a relational DB in the cases
where the application data model is well defined and thus tuned to the
data it contains. So its best-case performance is greater than the
best from an RDBMS. But for ad hoc queries or unanticipated application
changes, the network model no longer has direct links to the data. So the
worst-case performance of a network DBMS can be worse than that of an RDBMS.

>
> > A consequence of this is error recovery. An
> > update that is only partially completed when a system crash occurs
> > (power still gets lost even these days) can wreck the DB. Some DB have
> > functions to "rebuild the links". This can make crash recovery very
> > time consuming and error prone.
>
> Who prohibits us from using the same transaction mechanism as current RDBs use?
> We can split pages and commit changes only after all transactions are
> completed. From this point of view RDB and NDB are the same.

So you guarantee links are NEVER corrupted? Sorry, but I don't believe
it. I've seen this problem in other network DBs.


>
>
> > ID or pointer value? Do you expose the internal link values to the
> > application?
>
> I use soft pointers; they are indexed IDs. The developer must define an
> instance's ID himself, so he can define equal IDs for different objects
> and make object JOINs just as he can make JOINs in an RDB.
>
>

So you just lost your big advantage. For every node access there is
also an index access. Two disk reads for each node.

> WBR,
> Dmitry

All I can say is I remain unconvinced of your thesis: that relational
DBs are dead.

You created a network model DB that works extremely well for neural
net applications. Your extrapolation to the death of relational is a
big jump. I expect your DBMS would likely still fail as a corporate DB.
I'll continue to develop for relational DBMSs. That work has been paying my
bills for some years now, and I expect it to continue for a long time
yet.

good luck.
Ed

Ed Prochak

Jul 7, 2006, 5:18:58 PM

Josip Almasi wrote:
> Ed Prochak wrote:
> >
> > Problems I've seen with network databases:
> > sometimes there is no way to get directly to a given bit of data. You
> > have to walk the network instead.
>
> But note that things differ in _object_ databases - you may have a
> repository of all object id's.

Consider a simple query. Let's say the database is for real estate. You
have objects for cities and homes. How about counting how many homes
are colored grey in each city?
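Over flat rows, the ad hoc query Ed poses is a one-line GROUP BY, needing no pre-designed links. A sketch with made-up city names, with Python standing in for SQL's `SELECT city, COUNT(*) FROM homes WHERE color = 'grey' GROUP BY city`:

```python
from collections import Counter

# Flat rows, as an RDBMS sees them: references are by value, so any
# ad hoc grouping works without links having been designed in.
homes = [
    {"city": "Akron",  "color": "grey"},
    {"city": "Akron",  "color": "white"},
    {"city": "Dayton", "color": "grey"},
    {"city": "Dayton", "color": "grey"},
]
per_city = Counter(h["city"] for h in homes if h["color"] == "grey")
assert per_city == {"Akron": 1, "Dayton": 2}

# In an object/network DB, unless a city->homes link was designed in up
# front, answering this question means walking every object instead.
```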

>
> > Links are fast for access, but updates can be a heavy operation,
> > changing LOTS of pointers. A consequence of this is error recovery. An
> > update that is only partially completed when a system crash occurs
> > (power still gets lost even these days) can wreck the DB. Some DB have
> > functions to "rebuild the links". This can make crash recovery very
> > time consuming and error prone.
>
> Well I didn't really work with OODBs but with OR mappers. And I found
> that if I keep polymorphism, I get more error resistant db. Although I
> need to update more tables in object than in usual relational model,
> these are all cheap atomic operations based on unique id's.
>
> You are right in general of course. But object model/db is not a general
> network.
>
> Regards...

But Dmitry is claiming a network model. At least he hasn't objected to my
calling his DB that, and he has used the term himself.

I don't think I would have any great love for an object DB, especially
if it uses the typical garbage-collection cleanup that OO languages seem
to be so fond of.

But I note you build upon an RDBMS, leading me to think you agree that
the premise of this thread is false, even in the long term.

Where are the guys from the theory group? Have you guys nothing to say on
this matter?

Bob Badour

Jul 7, 2006, 6:00:18 PM
Ed Prochak wrote:

> Dmitry Shuklin wrote:
>
>>Hello Ed,
>>
>>
>>
>>>Seems like a classic network structure (I'm not sure it is a classic
>>>network model). The fact that every node is essentially independent and
>>>apparently "freeform" (knowing the structure of the parent tells you
>>>little or nothing about a child node).
>>
>>Yes.
>>
>>
>>>Problems I've seen with network databases:
>>>sometimes there is no way to get directly to a given bit of data. You
>>>have to walk the network instead.
>>
>>It is an illusion that an RDB is better in such cases. Let's compare RDB &
>>NDB.
>>For example, we need to find some instance attribute by its id.name.
>>
>>From an abstract logical point of view:
>>
>>RDB: navigate to DB, navigate to Table, navigate to Row, navigate to
>>Field, get Value

This idiot is a complete moron. 'Navigate to' with the RM?!? Total nonsense.


>>NDB: navigate to DB, navigate to Node, navigate to Field, get Value
>>NDB is "faster" ))

Certainly, NDB navigates. It is not in any way faster.


> ASSUMING you can get directly to the desired node. That's a big
> assumption. More likely it is:
> NDB: navigate to DB, navigate to root Node, navigate to branch Node,
> navigate to target Node, navigate to Field, get Value
>
> Or do you have one grand parent node which points to EVERY node in the
> DB?

And what is the equivalent operation for Join? Project? Union?
Intersect? Existential Quantification? Universal Quantification? Restrict?

[irrelevancies snipped]

Marshall

Jul 7, 2006, 8:57:26 PM
Ed Prochak wrote:
>
> Where's the guys from the theory group? Have you guys nothing to say on
> this matter?

What is there to say? Any claim about relational being dead
by someone who isn't even aware of what a relational DBMS is
isn't worth responding to. The guy hasn't the vaguest clue about
data management, data theory, or the current state of the art.

In fact, he doesn't seem to be aware of the state of the art 30 years ago.
Remember: those who get an F in history are doomed to repeat
it next semester.


Marshall

Bob Badour

Jul 7, 2006, 9:04:37 PM
Marshall wrote:

And here I thought I already replied to him. What am I? Chopped liver?

Marshall

Jul 7, 2006, 9:16:49 PM
Bob Badour wrote:
>
> And here I thought I already replied to him. What am I? Chopped liver?

Heh. The au courant phrase is "potted plant."


Marshall

Bob Badour

Jul 7, 2006, 10:42:47 PM

What can I say? Since I retired, I have fallen behind.

Dmitry Shuklin

Jul 8, 2006, 2:59:35 PM
Hi Ed,

> ASSUMING you can get directly to the desired node. That's a big
> assumption.

Yes, in my OODB it is possible, and it is very easy to implement. You
just need to have a direct pointer to the node.

> More likely it is:
> NDB: navigate to DB, navigate to root Node, navigate to branch Node,
> navigate to target Node, navigate to Field, get Value

The root node can store pointers to all nodes. In any case, the root can be
hidden from the application developer. Or, on the physical level, it is possible
to implement a global index that has pointers to all nodes.

> Or do you have one grand parent node which points to EVERY node in the
> DB?

Yes. It is Cerebrum.Runtime.NativeSector in my OODB.


> > The same.
>
> There shold be no indices in network model, only pointers.

WHO prohibited using indexes in an OODB? And why are indices used in an RDB?
(The RM doesn't have indices either)))


> This is a
> minor point until you include the fact that not all nodes are directly
> accessible.

All nodes can be directly accessible or not; in Cerebrum it is the developer's
choice.

> So it is more like:
> NDB: navigate to DB, find pointer to root node, navigate to Page,
> navigate to root Node,
> [find pointer to branch Node, navigate to Page, navigate to branch
> Node,]
> find pointer to target Node, navigate to Page, navigate to branch Node,
> find value.

We can store all scalar attributes in the same page as the parent node
(NTFS is implemented that way). So at the implementation level everything
depends on the implementation, not on the model.


> Consider a "popular" child node, that is a node that has links from
> MANY other nodes pointing to it. Try deleting it. You now have to visit
> every parent node and null out the pointer in each.

Yes.

> In a relational DB this is seldom a problem. Deleting a child row is a
> totally separate operation.
>
> Now consider a parent node. Trry deleting it. How do you handle child
> nodes? Do they just float around waiting for garbage collection if that
> was the last link to them?

In the current version of Cerebrum, yes. It is implemented exactly as you
described.

> In a relational DB, either: the delete is not allowed due to child
> (Foreign key) constraints
> or the delete cascades down to the child rows deleting them as well.

I don't see technical problems in implementing constraints in an OODB. I
just haven't implemented them in Cerebrum.


> Network model DBMS that I have seen typically don't need indices since
> the connections are all by pointers.

Yes; it is because even without indices it performs better than an
RDB ))) Of course, an OODB without indices is not better than an RDB in
all cases, so it should have indices too.

> However an adhoc query may require
> making a JOIN where a link does not exist forces a linear search. Maybe
> this is the case when you are using an index. So the Network
> implementation is no better, but no worse than Relational
> implementation.

Agreed. They can be implemented with the same performance. It depends
on the developer, not on the model.

But the network OODB model has native support for trees, graphs, and networks.
It also allows class inheritance and virtual methods. I think you
know how trees are implemented in current RDBMSs; a shame ))) And what about
graphs? Semantic networks?


> But I thought your primary claim of advantage for the Network model was
> performance. IMHO, network DB outperforms Relational DB for the cases
> where the application data model is well defined and thus tuned to the
> data it contains. So it's best case performance is greater than the
> best from a RDBMS. But for adhoc queries or unanticipated application
> changes, network model no longer has direct links to the data. So the
> worst case performance in a network DBMS can be worse than the RDBMS.

In the worst case it is possible to switch from pointers to indices. It
is a complex task to implement, and I haven't implemented it yet, but I
don't see why it should be impossible in general.

> So you guarantee links are NEVER corrupted? Sorry, but I don't believe
> it. I've seen this problem in other network DBs.

Of course I can't guarantee this, but an RDB can't guarantee that
primary keys will always be up to date either. I don't see any difference
between the models here.

> So you just lost your big advantage. For every node access there is
> also and index access. Two disc reads for each node.

That is only for the first access. Then the DB switches to direct pointers.
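The switch Dmitry describes, resolving through the index once and then keeping the direct reference, is essentially pointer swizzling. A sketch with hypothetical names, not Cerebrum's actual mechanism:

```python
class CountingIndex(dict):
    """ID index that counts how many times it is consulted."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.lookups = 0

    def __getitem__(self, key):
        self.lookups += 1
        return super().__getitem__(key)

class SoftRef:
    """Lazy 'soft pointer': resolved through the ID index on first use,
    then cached as a direct reference (pointer swizzling)."""
    def __init__(self, target_id, index):
        self.target_id = target_id
        self.index = index
        self._direct = None

    def deref(self):
        if self._direct is None:          # first access: index lookup
            self._direct = self.index[self.target_id]
        return self._direct               # later accesses: direct pointer

index = CountingIndex(n1={"value": 7})
ref = SoftRef("n1", index)
assert ref.deref()["value"] == 7
assert ref.deref()["value"] == 7
assert index.lookups == 1   # the index is touched only on the first access
```

This answers Ed's "two disc reads per node" objection for repeated accesses, though the first access still pays the index lookup.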

> All I can say is I remain unconvinced of your thesis: that relational
> DBs are dead.

My thesis is 'table-based DBMS are potentially dead'.

WBR,
Dmitry

Ed Prochak

Jul 10, 2006, 8:34:54 AM

Sorry, Bob. I have seen your posts and they make good points. I just
thought there would be more of an uproar from the theory side (like
several posters jumping in the thread). I know from a theory point of
view some of my own comments are nearly nonsense. 8^) So I expected to
be slammed a little as well.

At this point I'm convinced he has a potential niche market DBMS, but
nothing to suggest the Relational Model will fade away. Dmitry may be
a good neuroscientist, but not a good computer scientist.

I'll drop from the discussion at this point. Have fun.
Ed

Josip Almasi

Jul 10, 2006, 9:01:11 AM
Ed Prochak wrote:
>
> Consider a simple query. let's say the database is for real estate. You
> have objects for cities and homes. How about counting how many homes
> colored grey in each city?

But how do you think an RDBMS does the trick?
It uses hierarchies (indexes) to fetch the data, then performs an
intersection (on the indexes if possible), then a count.
An OODB should do the same, or something alike.
If the query language is what bothers you, imagine some OO SQL... e.g. HQL;)
Yeah really, check HQL:
http://www.hibernate.org/hib_docs/v3/reference/en/html/queryhql.html
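The index-intersection plan Josip describes can be sketched directly: each secondary index maps a value to a set of row IDs (posting lists), and the count is the size of an intersection. Invented data, illustrative only:

```python
# Two secondary indexes, each mapping a value to a set of row IDs.
color_index = {"grey": {1, 3, 4}, "white": {2}}
city_index = {"Akron": {1, 2}, "Dayton": {3, 4}}

# "Grey homes per city": intersect the posting lists, then count,
# without ever touching the rows themselves.
grey_per_city = {
    city: len(rows & color_index["grey"])   # set intersection on indexes
    for city, rows in city_index.items()
}
assert grey_per_city == {"Akron": 1, "Dayton": 2}
```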

> But Dmitry is claiming network Model. At least he hasn't objected to my
> calling his DB that and he has used the term himself.

Sure, these are my notes based on my OR mapping experience. I believe we
get much the same situation in... well, ON mapping:)
One can view such a network as separate trees: an inheritance tree, a member
tree, a package tree... These trees do overlap and form a network;
however, pathfinding isn't over a general directed graph, it's over a tree.

> but I note you build upon a RDBMS. leading me to think you agree that
> the premise of this thread is false, even in the long term.

Well then, let me elaborate:)
RDBMSs are good for OLTP, as they maintain referential integrity etc.
But normalization screws reporting.
So we got data warehousing DBMSs that have nothing to do with the relational
model, don't have transactions, don't bother with integrity, etc. etc., all
to make reporting better.
(BTW, your real estate example in hypercubes goes like this:
pronounce city one dimension and color a second dimension, and run a count.)

As for application state persistence, they both suck, and
an OODB will rock.
For OO apps, that is. Most apps in OO languages are built on top of the ER
model, cuz kids learn that in school:)

Now, I cannot honestly say that RDBMSs are dead, as long as two of the
richest men on earth make their money from them, and push their money
into them. It's not much of a technical reason, as you see;) But one
can't make an OODB that would work in clusters without some serious $$$.
So for the time being, RDBMSs will remain around.

WRT technical reasons... OK, I'll give you an example.
I use an RDBMS for storage, but the way I use it, I could use dBase too.
I don't need referential integrity or cascades, since the OO model takes
care of them.
As for transaction logs, checkpoints, rollback, roll-forward, isolation
levels, and other RDBMS buzzwords, it took me 5 hours to write these in
125 lines of code:
http://vrspace.cvs.sourceforge.net/vrspace/vrspace/src/main/org/vrspace/server/Transaction.java?revision=1.2&view=markup
This isn't production code; in fact I've never even tried it, as I've never
needed it.
But my point is it gets much simpler with OOP.
And it's not about the object model vs the relational model; it's the _event_
model(s) that we get with OO languages that make things easier.
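An undo-log transaction of the kind Josip links to can indeed be sketched briefly. This toy (illustrative Python, not his Transaction.java) also shows what such a sketch omits: durability and isolation, the parts Bob is worried about:

```python
class Transaction:
    """Minimal undo-log transaction over plain objects: before each
    write the old value is logged; rollback replays the log backwards.
    Missing: durability (nothing hits disk) and isolation (no locking),
    i.e. the hard parts of a real transaction manager."""
    def __init__(self):
        self.undo_log = []

    def write(self, obj, attr, value):
        self.undo_log.append((obj, attr, getattr(obj, attr)))
        setattr(obj, attr, value)

    def rollback(self):
        for obj, attr, old in reversed(self.undo_log):
            setattr(obj, attr, old)
        self.undo_log.clear()

    def commit(self):
        self.undo_log.clear()

class Account:
    def __init__(self, balance):
        self.balance = balance

a, b = Account(100), Account(0)
tx = Transaction()
tx.write(a, "balance", a.balance - 30)
tx.write(b, "balance", b.balance + 30)
tx.rollback()                      # abort before commit
assert (a.balance, b.balance) == (100, 0)
```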

Regards...

Bob Badour

Jul 10, 2006, 9:35:44 AM
Josip Almasi wrote:

Le plus ca change... I heard idiots making identical claims in 1991. At
the time, the statements exposed profound ignorance, and they still do.


> For OO apps that is. Most apps in OO languages are built on top of ER
> model, cuz kids learn that in school:)
>
> Now, I cannot honestly say that RDBMS are dead, as long as two of the
> richest men on earth make their money from them, and push their money
> into them.

But one can honestly say that OODBMS are dead. I said as much more than
10 years ago, and I have seen no evidence that anything has changed in
spite of an intervening standards effort, the tireless efforts of scores
of self-aggrandizing ignorants etc.


> It's not much of a technical reason as you see;) But one
> can't make OODB that would work in clusters without some serious $$$.
> So for the time being, RDBMS will remain along.

I can only conclude you lack intelligence, education or both. The reason
RDBMS will remain for a long time is exactly the reason why OODBMS is
going nowhere: the foundations upon which each is built. One is founded
on modern mathematics and the other is founded on nothing much in
particular.


> WRT technical reasons... OK I'll give you an example.
> I do use RDBMS for storage but the way I use it I could use dBase too.

Why do you do that? Are you braindead or something?


> I don't need referential integrity or cascades, since OO model takes
> care of it.

Yeah, sure. Right.


> As for transactions logs, checkpoints, rollback, rollforward, isolation
> levels and other RDBMS buzzwords, it took me 5 hours to write these in
> 125 lines of code:
> http://vrspace.cvs.sourceforge.net/vrspace/vrspace/src/main/org/vrspace/server/Transaction.java?revision=1.2&view=markup

[rolls eyes]


> This isn't production code, in fact I've never even tried it, as I never
> needed it.

I see. You won't need it until you need it. And then what?


> But my point is it gets much simpler with OOP.

Correction: Your ignorant assertion is that it gets much simpler with
OOP. Any informed and reasonably intelligent person will think you are a
nut just for saying it.


> And it's not about object model vs relational model; it's _event_
> model(s) that we get with OO languages that make things easier.

Is it? And what is the foundation of these event models you imagine? How
do they differ from triggered procedures?

Dmitry Shuklin

Jul 10, 2006, 11:29:11 AM
Hi Bob

> And here I thought I already replied to him. What am I? Chopped liver?

If you mean the RM operations, I already wrote about them here. For example,
JOINs are supported by a network OODB. In any case, ALL RM concepts MUST be
mapped to implementation concepts, so there is no difference in
implementation performance between RM and NM.

Josip Almasi

Jul 10, 2006, 12:56:59 PM
Bob Badour wrote:
>
> I can only conclude you lack intelligence, education or both. The reason
> RDBMS will remain for a long time is exactly the reason why OODBMS is
> going nowhere: the foundations upon which each is built. One is founded
> on modern mathematics and the other is founded on nothing much in
> particular.
>
>> WRT technical reasons... OK I'll give you an example.
>> I do use RDBMS for storage but the way I use it I could use dBase too.
>
> Why do you do that? Are you braindead or something?

I'm simply writing object-oriented applications that need persistence,
and as you seem to have noticed (since you claim that the OO model is
founded on nothing), the RDBMS, and the ER model in general, is obviously
unfit for the purpose.
It's a well-known fact; here, have a look:

http://en.wikipedia.org/wiki/Object-Relational_impedance_mismatch

Access to objects in object-oriented programs is allegedly best
performed via interfaces that together provide the only access to the
internals of an object. Similarly, essential OOP concepts for classes of
objects, inheritance and polymorphism, are not supported by database
systems.
...
In particular, relational database transactions, as the smallest unit of
work performed by databases, are much larger than any operations
performed by objects in object-oriented design.
...

>> I don't need referential integrity or cascades, since OO model takes
>> care of it.
>
> Yeah, sure. Right.

Oh, but it does: an object cannot have a reference to a deleted object
(referential integrity), simply setting a member to null prunes an entire
subtree (cascade delete), setting a member value to anything else reflects
this value to all referrer objects (cascade update), etc., etc.

>> This isn't production code, in fact I've never even tried it, as I
>> never needed it.
>
> I see. You won't need it until you need it. And then what?

I suggest you rephrase your question to emphasize the expected answer more
clearly.
Or why not just tell me; you _know_, right?

>> But my point is it gets much simpler with OOP.
>
> Correction: Your ignorant assertion is that it gets much simpler with
> OOP. Any informed and reasonably intelligent person will think you are a
> nut just for saying it.

Well, thank you for your correction; I'm glad no informed and reasonably
intelligent person has seen my writings yet.

> Is it? And what is the foundation of these event models you imagine? How
> do they differ from triggered procedures?

Errr... how about you take a course or two in OOP?
You might dig that in just a few hours...

Regards...

Gene Wirchenko

Jul 10, 2006, 1:20:17 PM
On 8 Jul 2006 11:59:35 -0700, "Dmitry Shuklin" <shu...@bk.ru> wrote:

[Ed Prochak wrote:]

>> All I can say is I remain unconvinced of your thesis: that relational
>> DBs are dead.
>
>My thesis is 'table-based DBMS are potentially dead'.

Big whoop!

All people posting to these newsgroups are potentially dead.

Sincerely,

Gene Wirchenko

Bob Badour

Jul 10, 2006, 2:09:56 PM
Josip Almasi wrote:

> Bob Badour wrote:
>
>>
>> I can only conclude you lack intelligence, education or both. The
>> reason RDBMS will remain for a long time is exactly the reason why
>> OODBMS is going nowhere: the foundations upon which each is built. One
>> is founded on modern mathematics and the other is founded on nothing
>> much in particular.
>>
>>> WRT technical reasons... OK I'll give you an example.
>>> I do use RDBMS for storage but the way I use it I could use dBase too.
>>
>> Why do you do that? Are you braindead or something?
>
>
> I'm simply writing object-oriented applications that need persistence,
> and as you seem to have noticed (since you claim that the OO model is
> founded on nothing), the RDBMS, and the ER model in general, is obviously
> unfit for the purpose.

What can I say to that? You are an idiot. You are clearly ignorant of
the most fundamental knowledge related to your professed field. You
would do well to heed Mark Twain's sage advice about keeping one's mouth
shut and letting everyone think one is an idiot instead of opening it
and removing all doubt.


> Its a well know fact, here, have a look:
>
> http://en.wikipedia.org/wiki/Object-Relational_impedance_mismatch

It's a well-known fact that OO is an extremely low-level procedural
computational model, and one can easily remove the impedance mismatch by
replacing OO with a higher-level declarative computational model. Duh.

[irrelevant nonsense snipped]


>>> I don't need referential integrity or cascades, since OO model takes
>>> care of it.
>>
>> Yeah, sure. Right.
>
> Oh, but it does: an object cannot have a reference to a deleted object
> (referential integrity), simply setting a member to null prunes an entire
> subtree (cascade delete), setting a member value to anything else reflects
> this value to all referrer objects (cascade update), etc., etc.

Uh, yeah, sure. Right.


>>> This isn't production code, in fact I've never even tried it, as I
>>> never needed it.
>>
>> I see. You won't need it until you need it. And then what?
>
> I suggest you rephrase your question to emphasize the expected answer more
> clearly.
> Or why not just tell me, you _know_, right?

You claim to have replaced a significant amount of data management code.
However, you have no proof nor even a test to suggest that you have.
Yet, when you need the recovery code, you will absolutely need it to
work 100% correctly the very first time.

You have replaced nothing, you moron. Unfortunately, your poor victims
won't realise that until it's too late to do anything. Presumably, you
hope to have left before that happens.


>>> But my point is it gets much simpler with OOP.
>>
>> Correction: Your ignorant assertion is that it gets much simpler with
>> OOP. Any informed and reasonably intelligent person will think you are
>> a nut just for saying it.
>
> Well, thank you for your correction; I'm glad no informed and reasonably
> intelligent person has seen my writings yet.

With all due respect, you lack the competence to make that assumption.


>> Is it? And what is the foundation of these event models you imagine?
>> How do they differ from triggered procedures?
>
> Errr... how about you take a course or two in OOP?

Why? I have been doing OOP for 19 years now. What the hell do you think
some self-aggrandizing ignoramus is going to teach me that I didn't
already learn (and perhaps reject as nonsense) a decade ago?

Ed Prochak
Jul 10, 2006, 2:26:29 PM
Josip Almasi wrote:
> Ed Prochak wrote:
> >
> > Consider a simple query. let's say the database is for real estate. You
> > have objects for cities and homes. How about counting how many homes
> > colored grey in each city?
>
> But what do you think how RDBMS does the trick?
> It uses hierarchies (indexes) to fetch the data, then performs
> intersection (on indexes if possible), then count.
> OODB should do the same or alike.

Either OODB does something different from, and possibly better than, RDB,
or OODB does the same as RDB and thus is logically no better.

You cannot have it both better and the same.
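For reference, the query from the example above is one declarative statement in SQL. A minimal sqlite3 sketch (the table layout and data are made up for illustration):

```python
import sqlite3

# Hypothetical schema for the real-estate example in this thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE homes (id INTEGER PRIMARY KEY, city TEXT, color TEXT)")
conn.executemany("INSERT INTO homes (city, color) VALUES (?, ?)", [
    ("Akron", "grey"), ("Akron", "white"),
    ("Dayton", "grey"), ("Dayton", "grey"),
])

# Count grey homes per city: one declarative statement, no hand-written traversal.
rows = conn.execute(
    "SELECT city, COUNT(*) FROM homes "
    "WHERE color = 'grey' GROUP BY city ORDER BY city"
).fetchall()
print(rows)  # [('Akron', 1), ('Dayton', 2)]
```

The point being that the DBMS decides how to use indexes here; the query says only what is wanted.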


> If query language is what bothers you, imagine some OO SQL... i.e. HQL;)
> Yeah really, check HQL:
> http://www.hibernate.org/hib_docs/v3/reference/en/html/queryhql.html
>
> > But Dmitry is claiming network Model. At least he hasn't objected to my
> > calling his DB that and he has used the term himself.
>
> Sure, these are my notes based on my OR mapping experience. I believe we
> get a very similar situation in... well, ON mapping:)
> One can view such a network as separate trees: inheritance tree, member
> tree, package tree... these trees do overlap and form a network;
> however, pathfinding isn't general directed graph, it's tree.


>
> > but I note you build upon a RDBMS. leading me to think you agree that
> > the premise of this thread is false, even in the long term.
>
> Well then, let me elaborate:)
> RDBMS are good for OLTP, as they maintain referential integrity etc.
> But normalization screws reporting.
> So we got data warehousing DBMS that have nothing to do with relational
> model, don't have transactions, don't bother with integrity etc etc, all
> to get reporting better.

A data warehouse is essentially static data. But building a DW does not
mean throwing out referential integrity, constraints, and other
features. Normalization does not necessarily screw reporting
performance. (At least, performance is why most people denormalize
tables; the same report comes out either way.)

And even denormalized, it is still a Relational model DB, whether it is
in first normal form or fifth normal form. Many production RDB instances
are mixtures of normalization levels.

> (BTW your real estate example in hypercubes goes like this:
> pronounce city one dimension and color second dimension and run count.)

That query syntax looks like it could work in an RDB as well, except it
needs a way to get just the grey houses, as I originally asked.

Internally how does the OODB process that? Are all house objects
somehow grouped together so it doesn't have to search every object of
the DB to find them?

>
> As for the purpose of application state persistence, they both suck, and
> OODB will rock.

Both? meaning Network model and Relational model?

> For OO apps that is. Most apps in OO languages are built on top of ER
> model, cuz kids learn that in school:)
>
> Now, I cannot honestly say that RDBMS are dead, as long as two of the
> richest men on earth make their money from them, and push their money
> into them. It's not much of a technical reason as you see;) But one
> can't make OODB that would work in clusters without some serious $$$.
> So for the time being, RDBMS will remain along.

LOTUS once ruled the spreadsheet market. Things change.
Dmitry's argument, at least initially, was:
I built this great test database using the network model;
it solves my neural network problems much faster than a relational DB;
therefore, the Relational Model is dead (see subject of thread).

My point has long been that his argument is flawed.

Now regarding your argument for the OODB model: we are missing someone
like Codd to define the fundamental principles of the Object Model and
demonstrate the advantages over the Relational Model.

When that someone shows up let me know. (I like learning new things.)

meanwhile I'll stick with Codd.

>
> WRT technical reasons... OK I'll give you an example.
> I do use RDBMS for storage but the way I use it I could use dBase too.
> I don't need referential integrity or cascades, since OO model takes
> care of it.

Then you are storing a flat file model in an RDBMS product, but you
are not using the Relational Model.

> As for transactions logs, checkpoints, rollback, rollforward, isolation
> levels and other RDBMS buzzwords, it took me 5 hours to write these in
> 125 lines of code:
> http://vrspace.cvs.sourceforge.net/vrspace/vrspace/src/main/org/vrspace/server/Transaction.java?revision=1.2&view=markup
> This isn't production code, in fact I've never even tried it, as I never
> needed it.
> But my point is it gets much more simpler with OOP.
> And it's not about object model vs relational model; it's _event_
> model(s) that we get with OO languages that make things easier.
>
> Regards...

I'll try to look that up sometime. (just not today).
I've done enough embedded programming to agree that event driven
programming makes many things easier. I'm just unsure how it would
apply to database design.

Have a good day.
Ed.

U-gene
Jul 10, 2006, 3:22:00 PM
Just a remark.

IMHO this is the usual set of fallacies existing today.

1) An RDBMS isn't a system for persisting data only. The RDM is not a
model of data persistence; it is a data model. I think relational
systems could exist which aren't DBMSs, and these systems could be
helpful (I'm talking about relational keys and relational operations here).
2) I'm sure there is no impedance. At all. The impedance exists only in
your heads, when you think about current implementations of the RDM. But
I think these implementations are just a very special case of a possible
implementation that is rich enough to be both OO and relational.

Josip Almasi
Jul 18, 2006, 6:36:04 AM
Bob Badour wrote:
>
> What can I say to that?

Why, it's quite simple: once you run out of arguments, just continue with
insults. It makes you feel important; especially when the other person
leaves, it makes you feel you win.

> You are an idiot. You are clearly ignorant of
> the most fundamental knowledge related to your professed field. You
> would do well to heed Mark Twain's sage advice about keeping one's mouth
> shut and letting everyone think one is an idiot instead of opening it
> and removing all doubt.

See, just like that:))

Well too bad every crosspost turns into flame.

Regards.

Bob Badour
Jul 18, 2006, 8:19:58 AM
Josip Almasi wrote:
> Bob Badour wrote:
>
>>
>> What can I say to that?
>
> Why, it's quite simple: once you run out of arguments, just continue with
> insults. It makes you feel important; especially when the other person
> leaves, it makes you feel you win.

There is no argument to such idiocy. Either one has intelligence and
knowledge sufficient to realise it is idiocy or one does not.
Apparently, you do not. Plonk.

Josip Almasi
Jul 18, 2006, 8:09:42 AM
Ed Prochak wrote:
>
> Either OODB does something different and possibly better, than RDB,
> Or OODB does the same as RDB and thus logically not better.
>
> You cannot have it both better and the same.

Right.
My point is, I can have it better _for the purpose_.

> A datawarehouse is essentially static data.

Well, not exactly true. While you can't add dimensions (iow change
structure) just like that, you can feed it in (near) real time.
So it's as static as rdbms:)

> But building a DW does not
> mean throwing out referential integrity, constraints, and other
> features. Normalization does not necessarily scrrew reporting
> performance. (at least performance is why most people denormalize
> tables, the same report comes out either way).
>
> And even denormalized, it is still a Relational model DB, whether it is
> first normal form, or fifth normal form. Many production RBD instances
> are mixtures of normalization levels.

Right, IRL we mix models and methodologies to achieve better results.

> Internally how does the OODB process that? Are all house objects
> somehow grouped together so it doesn't have to search every object of
> the DB to find them?

Well I don't know of OODB internal workings.
Though I guess attributes need to be grouped, yes - indexed as Dmitry said.

>>As for the purpose of application state persistence, they both suck, and
>>OODB will rock.
>
> Both? meaning Network model and Relational model?

No, I meant the relational and multidimensional models.
While the first is better for OLTP and the second for OLAP, I expect the
network model to be best for persistence (of OO apps).

> Dmitry's argument at least initially was:
> I built this great test database using the network model,
> It solves my neural network problems much faster than a relational DB
> therefore, Relational Model is dead (see subject of thread).
>
> My point has long been that his argument is flawed.

Well, right, at least the way you put it.
I agree this one argument is not nearly enough so I started giving some
of my own;)
But of course, 'are dead' is (intentionally?;)) overstated.

> Now regarding your argument for OODB model: we are missing someone
> like Codd to define the fundamental principles of the Object Model and
> demonstrate the advantages over the Relational Model.
>
> When that someone shows up let me know. (I like learning new things.)
>
> meanwhile I'll stick with Codd.

Suit yourself:)

I don't really think there's some big strictly defined mathematical
construct named 'object model', and I don't even think there's a need
for such a thing, in fact such a construct will only limit OOP, only to
make us find something more flexible;)
After all, these objects are nothing more than structures containing
pointers to data and pointers to functions. Instancing and other OOP
buzzwords only make things easier.
If you look at it as data model, it's network.
Therefore, whatever naturally describes as network fits better to object
model; whatever naturally describes as sets, fits better to relational
model.

But the relational model itself isn't enough for a real world app. That's
why we have cursors after all.
The most common example is substitutes; we have them from restaurants to
electronics stores:
BC108C is a substitute for BC108B while BC180A is a substitute for BC108C
etc etc, so we have a self-referring entity.
What does the SQL query for 'show all substitutes for BC108B' look like?

Smells like networks are more general however. Each time we draw an ER
diagram we prove it;)

> Then you are storing a flat file model in an RDBMS product, but you
> are not using the Relational Model.

Exactly.

> I've done enough embedded programming to agree that event driven
> programming makes many things easier. I'm just unsure how it would
> apply to database design.

In a sense it makes select sum(*) and count(*) etc obsolete - makes
aggregate generation easier (and much faster than triggers).
Aggregates don't need integrity - you can generate them on the fly
whenever, provided you store events as they come.
NN apps are all about aggregates. And they're not the only ones...
... well I ended using RDBMS as record manager:))
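The idea of replacing aggregate queries with event-driven aggregation can be sketched roughly like this (the class and event shape are hypothetical, not taken from any of the projects mentioned):

```python
from collections import defaultdict

class EventAggregator:
    """Keeps running aggregates up to date as events arrive,
    instead of re-running SUM/COUNT queries over stored rows."""

    def __init__(self):
        self.count = defaultdict(int)
        self.total = defaultdict(float)

    def on_event(self, key, amount):
        # O(1) in-memory update per event; no table scan, no transaction log.
        self.count[key] += 1
        self.total[key] += amount

agg = EventAggregator()
for key, amount in [("a", 10.0), ("a", 5.0), ("b", 2.5)]:
    agg.on_event(key, amount)
print(agg.count["a"], agg.total["a"])  # 2 15.0
```

Provided the raw events are stored as they come, the aggregates can always be regenerated, which is the "aggregates don't need integrity" point above.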

Regards...

Josip Almasi
Jul 18, 2006, 10:52:43 AM
Bob Badour wrote:
>
> There is no argument to such idiocy. Either one has intelligence and
> knowledge sufficient to realise it is idiocy or one does not.
> Apparently, you do not. Plonk.

Well too bad you're far too intelligent to have arguments and far too
knowledgable to need them... why don't you just call me an idiot for a
change? Once you establish I'm idiot I cannot understand your arguments
even if you tell them so you don't ever need to bother... to pretend you
have any.

David Portas
Jul 18, 2006, 4:22:43 PM
Josip Almasi wrote:
>
> No, I meant relational and multidimensional model.
> While first is better for oltp and second for olap, I expect network
> model to be best for persistence (of OO apps).

What multidimensional model? Kimball popularised some methodology and
some jargon under the Dimensional banner. Some people find such
terminology useful but it doesn't change the data model. It is still
relational or SQL. Do you think relational is something other than
multi-dimensional?

> I don't really think there's some big strictly defined mathematical
> construct named 'object model', and I don't even think there's a need
> for such a thing, in fact such a construct will only limit OOP, only to
> make us find something more flexible;)

So your preferred model is no model at all. Noted.

> What does the SQL query for 'show all substitutes for BC108B' look like?

A recursive CTE. More generally speaking, the query is just some
restriction of a transitive closure (i.e. it need not be defined
recursively).
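A minimal sketch of such a recursive CTE, run here through SQLite (which added WITH RECURSIVE well after this thread, in 3.8.3; the table layout is assumed, part numbers follow the example above):

```python
import sqlite3

# Assumed self-referring substitutes table; part numbers from the thread.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE substitute (part TEXT, sub TEXT)")
conn.executemany("INSERT INTO substitute VALUES (?, ?)",
                 [("BC108B", "BC108C"), ("BC108C", "BC180A")])

# Recursive CTE: start from BC108B, then keep joining until no new rows appear.
rows = conn.execute("""
    WITH RECURSIVE subs(part) AS (
        SELECT sub FROM substitute WHERE part = 'BC108B'
        UNION
        SELECT s.sub FROM substitute s JOIN subs ON s.part = subs.part
    )
    SELECT part FROM subs ORDER BY part
""").fetchall()
print([r[0] for r in rows])  # ['BC108C', 'BC180A']
```

The UNION makes duplicate elimination, and hence termination on cyclic data, part of the query itself.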

>
> Smells like networks are more general however. Each time we draw an ER
> diagram we prove it;)

We prove no such thing. A network can always be represented
relationally by materializing some arbitrary pointers as attributes.

--
David Portas

Nicholas King
Jul 19, 2006, 5:49:38 AM
David Portas wrote:
> Josip Almasi wrote:
<Snip database stuff>
Why are you spamming comp.ai.neuralnetworks with this offtopic material?
It's completely unrelated to neural networks and serves no purpose but
to clutter up the newsgroup.

Dmitry Shuklin
Jul 19, 2006, 9:19:27 AM
Hi Nicholas,

> Why are you spamming comp.ai.neuralnetworks with this offtopic material?
> It's completely unrelated to neural networks and serves no purpose but
> to clutter up the newsgroup.

Sorry for this. My previous post was about neural network modeling with
OODB. Message inherited crosspost.

Dmitry Shuklin
Jul 19, 2006, 2:04:02 PM
Hi David,


> A network can always be represented
> relationally by materializing some arbitrary pointers as attributes.

A table can always be represented network-style by materializing row
values as nodes linked to a parent row node, and rows linked to a parent
table node. (In the abstract model; it could be implemented much like
current RDBMSs, with the same performance.)

Also, in the network model it is easy to have a row linked to many
tables at once, or a value linked to many rows at once.
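A rough sketch of that table-to-network mapping (the node type and labels are illustrative, not the actual implementation being described):

```python
# Illustrative node type; not Dmitry's actual implementation.
class Node:
    def __init__(self, label):
        self.label = label
        self.links = []  # outgoing edges to child nodes

    def link(self, child):
        self.links.append(child)
        return child

table = Node("homes")             # the table itself is a node
row = table.link(Node("row#1"))   # each row is a node linked from its table
row.link(Node("city=Akron"))      # each value is a node linked from its row
row.link(Node("color=grey"))

# Nothing stops a value node from being linked from several rows,
# or a row node from several tables, which a flat relation cannot do directly.
print([n.label for n in row.links])  # ['city=Akron', 'color=grey']
```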

It is also very interesting that in an NDB a relation is not a table;
a relation is a row. A row is a subset of the Cartesian product of all
available values over all available attributes.

In an RDB this corresponds to the Cartesian product of all available
columns over all available domain values.
WBR,
Dmitry

Ed Prochak
Jul 19, 2006, 3:23:55 PM
Josip Almasi wrote:
> Ed Prochak wrote:
> >
> > Either OODB does something different and possibly better, than RDB,
> > Or OODB does the same as RDB and thus logically not better.
> >
> > You cannot have it both better and the same.
>
> Right.
> My point is, I can have it better _for the purpose_.

And a Network Model can fit some application purposes. The cases I've
seen are where the queries against the data are relatively static (i.e.
few if any ad hoc queries). Then the network model can outperform
relational, basically because the indices are built into the entities.
But Relational far outperforms Network when you go "off track".

So there can be applications for both models. There are just more
applications that fit the Relational Model, IMHO. (And judging from the
sales of Relational vs. Network based DBMS products, I'd say a lot of
people agree with me.)

>
> > A datawarehouse is essentially static data.
>
> Well, not exactly true. While you can't add dimensions (iow change
> structure) just like that, you can feed it in (near) real time.
> So it's as static as rdbms:)

But that is the point, you only ever just load data. It doesn't change.
The sales invoices are not open work orders where the customer is
changing the line items. Once the sale is complete then you send all
the final details to the DW and it's done. It's static data.

>
> > But building a DW does not
> > mean throwing out referential integrity, constraints, and other
> > features. Normalization does not necessarily screw reporting
> > performance. (at least performance is why most people denormalize
> > tables, the same report comes out either way).
> >
> > And even denormalized, it is still a Relational model DB, whether it is
> > first normal form, or fifth normal form. Many production RDB instances
> > are mixtures of normalization levels.
>
> Right, IRL we mix models and methodologies to achieve better results.

The different normal forms ARE NOT different data models. There are
benefits and costs to going more or less normalized. At least the NFs
guide the choices. What design guidelines exist for Network Model? How
about for Object Oriented Model?

[]


>
> > Dmitry's argument at least initially was:
> > I built this great test database using the network model,
> > It solves my neural network problems much faster than a relational DB
> > therefore, Relational Model is dead (see subject of thread).
> >
> > My point has long been that his argument is flawed.
>
> Well, right, at least the way you put it.
> I agree this one argument is not nearly enough so I started giving some
> of my own;)
> But of course, 'are dead' is (intentionally?;)) overstated.

Well, thanks for agreeing that the subject is false. Relational is not
dead, now or even long term.

>
> > Now regarding your argument for OODB model: we are missing someone
> > like Codd to define the fundamental principles of the Object Model and
> > demonstrate the advantages over the Relational Model.
> >
> > When that someone shows up let me know. (I like learning new things.)
> >
> > meanwhile I'll stick with Codd.
>
> Suit yourself:)
>
> I don't really think there's some big strictly defined mathematical
> construct named 'object model', and I don't even think there's a need
> for such a thing, in fact such a construct will only limit OOP, only to
> make us find something more flexible;)

Flexibility and solid foundation are not mutually exclusive. Large
skyscrapers designed to sway in the wind, are built on bedrock
foundations. Good software is built in analogous ways. The most
flexible databases I've seen were designed for it. And just think about
Relational DBMS. The DBMS itself is built using the Relational model.

> After all, these objects are nothing more than structures containing
> pointers to data and pointers to functions. Instancing and other OOP
> buzzwords only makes things easier.
> If you look at it as data model, it's network.
> Therefore, whatever naturally describes as network fits better to object
> model; whatever naturally describes as sets, fits better to relational
> model.

Yes. Note that I never said Relational was necessarily better for OOP.
Or Neural network programming.

>
> But relational model itself isn't enough for a real world app. That's
> why we have cursors after all.

Huh? that doesn't make sense.

> Most common example are substitutes; we have them from restaurants to
> electronic stores:
> BC108C is substitute for BC108B while BC180A is substitute for BC108C
> etc etc, so we have self-referring entity.
> How does sql query for 'show all substitutes for BC108B' look like?

Bill of materials is the classic Hierarchical Model example, and
supersession is the classic Network Model example. Both have been
implemented in Relational model databases. Dealing with supersession
data is a pain even in network model DBs.

>
> Smells like networks are more general however. Each time we draw an ER
> diagram we prove it;)

Your sense of smell may be off.

Network models can be better for data used in some applications that
don't suit the relational model. But there are other applications
whose data doesn't suit the network model. In fact, I would conjecture
that there is no set of application data that can be represented in
one model that cannot also be represented in the other. It comes down
to a question of flexibility and performance. And in general, Relational
Model implementations win that battle.


>
> > Then you are storing a flat file model in an RDBMS product, but you
> > are not using the Relational Model.
>
> Exactly.

So then why not use a flat file? You gain very little by forcing
a poor design into a DBMS (relational or otherwise).


>
> > I've done enough embedded programming to agree that event driven
> > programming makes many things easier. I'm just unsure how it would
> > apply to database design.
>
> In a sense it makes select sum(*) and count(*) etc obsolete - makes
> aggregate generation easier (and much faster than triggers).

Huh? Again. Triggers respond to events, so how is some other event
model going to be faster than triggers? I just really don't understand
this point.

> Aggregates don't need integrity - you can generate them on the fly
> whenever, provided you store events as they come.
> NN apps are all about aggregates. And they're not the only ones...

I think you are referring to a different application of aggregates than
I am. I've not done Neural net programming, so I missed this point. If
there are other example applications, how about listing two or three?
Or do you refer to just the BOM and supercession examples?

And judging from your comments about NN being all about aggregates, you
seem to imply the aggregate values need to be stored rather than
generated "on the fly". Or maybe I am misreading the context of your comment.

> ... well I ended using RDBMS as record manager:))
>
> Regards...

Well if you only need a record manager, just use a record manager, and
forget about a DBMS of any kind.

The relational model is not going away. We need a wide set of tools to
attack problems. So if a network model DB is best to solve your
application problems, then use it. If a flat file is best, use it. Or
if a relational Model DB is best use it! just because a flathead
screwdriver is more general than a phillips head screwdriver doesn't
mean you throw away your phillips head screwdriver. Use the right tool
for the job.

I'm learning some new things in this thread. So thanks all.

Ed

Josip Almasi
Jul 20, 2006, 1:17:53 PM
Ed Prochak wrote:
>
> And a Network Model can fit some application purposes. The cases I've
> seen are where the queries against the data are relatively static (i.e.
> few if any ad hoc queries). Then the network model can outperform
> relational basically because the indices are built into the entities.
> But Relational far outperforms Network when you go "off track".
>
> So there can be applications for both models. there's just more
> applications that fit the Relational Model, IMHO. (And judging from the
> sales of Relational vs Network based DBMS products, I'd say a lot of
> people agree with me.)

Well that's an argument I can't argue against...:) Unless we see
filesystems as databases (DEF database: organized collection of data)
and operating systems as database management systems (DEF DBMS:
program(s) designed to manage database).
Basically hierarchical, but symlinks turn filesystems into networks.
Just, 100% market penetration, and we take it for granted:)

> But that is the point, you only ever just load data. It doesn't change.
> The sales invoices are not open work orders where the customer is
> changing the line items. Once the sale is complete then you send all
> the final details to the DW and it's done. It's static data.

OK then.

>>Right, IRL we mix models and methodologies to achieve better results.
>
> The different normal forms ARE NOT different data models. There are
> benefits and costs to going more or less normalized. At least the NFs
> guide the choices. What design guidelines exist for Network Model? How
> about for Object Oriented Model?

But there's a number of design patterns...
http://en.wikipedia.org/wiki/Design_pattern_%28computer_science%29

> Well thanks for agreeing that the subject is false. Relational is not
> dead, now or even long term

NP I'll agree again if it means so much to you;)

> Flexibility and solid foundation are not mutually exclusive. Large
> skyscrapers designed to sway in the wind, are built on bedrock
> foundations. Good software is built in analogous ways.

I have to disagree on this one. The ultimate reason: each model is
incomplete. And I can pull that into architecture too; cathedrals are not
built on any model and no model explains how they work.

> Yes. Note that I never said Relational was necessarily better for OOP.
> Or Neural network programming.

Oh I noted a while ago you're not fundamentalist:)
Neither am I, I'm just a practical guy.

>>But relational model itself isn't enough for a real world app. That's
>>why we have cursors after all.
>
> Huh? that doesn't make sense.
>
>>Most common example are substitutes; we have them from restaurants to
>>electronic stores:
>>BC108C is substitute for BC108B while BC180A is substitute for BC108C
>>etc etc, so we have self-referring entity.
>>How does sql query for 'show all substitutes for BC108B' look like?
>
> Bill of materials is the classic Hierarchical Model example and
> supercession is the classic network model example. Both have been
> implemented in Relation model databases. Dealing with supercession data
> is a pain even in network model DBs.

Well then, maybe I'm simply used to traversing trees, so I feel no pain
when I do it:) Unless I do it in an RDBMS, that is.

>>Smells like networks are more general however. Each time we draw an ER
>>diagram we prove it;)
>
> Your sense of smell may be off.

LOL:))
In fact you're right, literally:)))
As a kid I fell on my nose and...:))))

> Network models can be better for data used in some applications that
> doesn't suit the relational model. But there are other applications
> whose data doesn't suit the network model. In fact I would conjecture
> that there is not set of application data that can be represented in
> one model that cannot also be represented in the other.

I'm not sure I understand this right. But in the meantime David and
Dmitry explained how one model can map to the other, so we can conclude this.

> It comes down
> to a question of flexibility and performance. And in general Relational
> Model implimentations win that battle.

Sure. But as I stated earlier, IMHO it's not down to the model; it's due
to the vast resources that have been spent on RDBMS research and development.

>>>Then you are storing a flat file model in an RDBMS product, but you
>>>are not using the Relational Model.
>>
>>Exactly.
>
> So then why not use a flat file. You are gaining very little by forcing
> a poor design into a DBMS (relational or other).

Oh but I do use files:)
I simply noted that 'vrspace with mysql' was the most popular topic on our
forums, and said 'heck, the customer is always right';)
I really gain nothing. But if I tell them they gain nothing they'll just
call me idiot and go elsewhere:)))

>>>I've done enough embedded programming to agree that event driven
>>>programming makes many things easier. I'm just unsure how it would
>>>apply to database design.
>>
>>In a sense it makes select sum(*) and count(*) etc obsolete - makes
>>aggregate generation easier (and much faster than triggers).
>
> Huh? again. Triggers respond to events, so how is some other event
> model going to be faster than triggers. I just really don't understand
> this point.

It's faster since RDBMS adds overhead like transaction logging.
When you do it in memory you get simple addition and nothing else.
This is too simplified of course but the difference is measured in
orders of magnitude.

>>Aggregates don't need integrity - you can generate them on the fly
>>whenever, provided you store events as they come.
>>NN apps are all about aggregates. And they're not the only ones...
>
> I think you are referring to a different application of aggregates than
> I am. I've not done Neural net programming, so I missed this point. If
> there are other example applications, how about listing two or three?
> Or do you refer to just the BOM and supercession examples?

Oh no, I really do have a long and fruitful programming practice:))
Though there's not much code I can show.
Good example of what I'm talking about is NeuroGrid,
http://www.neurogrid.net/
This impl is built on top of an RDBMS. My alternative implementation uses
an in-memory event model and is faster by... uh, I forgot, but two orders
of magnitude.
My impl is in vrspace project cvs, but it's not integrated. I used it in
another proprietary project. Anyway, it's org.vrspace.neurogrid package,
http://sf.net/projects/vrspace
http://www.vrspace.org/
VRSpace itself is mix of N-dimensional address space and network DB on
top of either FS or RDBMS.

> And judging from your comments bout NN being all about aggregates, you
> seem to imply the aggregate values need to be stored rather than
> generated "on the fly". Or I am misreading the context of your comment.

Quite the opposite, aggregates need to be generated on the fly.
I.e. one neuron has up to 10000 synapses, and 'triggers' depending on
their values. The actual function implemented by this neuron may differ
significantly.
And, neuron and/or synapse states need to be persisted, otherwise NN
just dies when you shut it down.
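A toy version of that on-the-fly aggregation (the weights, threshold, and step activation here are illustrative, not how any real NN package works):

```python
# Toy neuron: the weights are the state worth persisting; the aggregate
# (the weighted sum over synapse inputs) is recomputed on the fly.
class Neuron:
    def __init__(self, weights):
        self.weights = weights

    def fire(self, inputs, threshold=1.0):
        activation = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if activation >= threshold else 0

n = Neuron([0.5, 0.5, -0.25])
print(n.fire([1, 1, 0]))  # 1  (0.5 + 0.5 reaches the threshold)
print(n.fire([1, 0, 1]))  # 0  (0.5 - 0.25 does not)
```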

> Well if you only need a record manager, just use a record manager, and
> forget about a DBMS of any kind.
>
> The relational model is not going away. We need a wide set of tools to
> attack problems. So if a network model DB is best to solve your
> application problems, then use it. If a flat file is best, use it. Or
> if a relational Model DB is best use it! just because a flathead
> screwdriver is more general than a phillips head screwdriver doesn't
> mean you throw away your phillips head screwdriver. Use the right tool
> for the job.

Agreed completely.

Regards...

Josip Almasi
Jul 20, 2006, 1:30:30 PM
David Portas wrote:
>
> What multidimensional model? Kimball popularised some methodology and
> some jargon under the Dimensional banner. Some people find such
> terminology useful but it doesn't change the data model. It is still
> relational or SQL. Do you think relational is something other than
> multi-dimensional?

You really gave me some food for thought with this:)
Yes, you're right. The only difference is normalization.
But the software differs drastically; guess this is why I perceived
these as totally different things.

Thanks for the correction.

> So your preferred model is no model at all. Noted.

Not exactly. My preferred model is the model I choose.
Just, I don't believe in universal truths:)
Like, we have a perfect model named 'theory of everything' that is
completely useless for anything having more than 10 atoms...:)

> A recursive CTE. More generally speaking the query is just some
> restriction of a transitive closure (ie. it need not be defined
> recursively).

But it needs to be iterated regardless, right?
If so, it's not relational.
Please explain; I just don't understand how it can be represented as a set.
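One way to see it as a set: the closure is the least fixpoint of repeated self-joins; the computation iterates, but the result is a plain relation. A sketch in Python:

```python
def transitive_closure(pairs):
    """Least fixpoint of repeated self-joins; the result is a set of
    pairs, i.e. a relation, even though computing it iterates."""
    closure = set(pairs)
    while True:
        derived = {(a, d)
                   for (a, b) in closure
                   for (c, d) in closure if b == c}
        if derived <= closure:
            return closure
        closure |= derived

r = {("BC108B", "BC108C"), ("BC108C", "BC180A")}
print(sorted(transitive_closure(r)))
# [('BC108B', 'BC108C'), ('BC108B', 'BC180A'), ('BC108C', 'BC180A')]
```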

>>Smells like networks are more general however. Each time we draw an ER
>>diagram we prove it;)
>
> We prove no such thing.

Of course that's why I put that winkey there:))

Regards...

Ed Prochak
Jul 20, 2006, 2:12:30 PM

Just one point, since otherwise you and I agree to be pragmatic.
Well, actually two points.

Josip Almasi wrote:
> Ed Prochak wrote:
> >

[]


> > Flexibility and solid foundation are not mutually exclusive. Large
> > skyscrapers designed to sway in the wind, are built on bedrock
> > foundations. Good software is built in analogous ways.
>
> I have to disagree on this one. The ultimate reason - each model is
> incomplete. And I can pull that to architecture too - cathedrals are not
> built on any model and no model explains how they work.

They were built by trial and error. Nothing says trial and error will
fail to find a solution, it is just not guaranteed to find an optimal
solution. I'd hate to see a skyscraper built with gothic cathedral
technology. Trial and error solutions usually don't scale well either.


[]


> >>Smells like networks are more general however. Each time we draw an ER
> >>diagram we prove it;)
> >
> > Your sense of smell may be off.
>
> LOL:))
> In fact you're right, literally:)))
> As a kid I fell on my nose and...:))))

LOL

>
> > It comes down
> > to a question of flexibility and performance. And in general Relational
> > Model implimentations win that battle.
>
> Sure. But as I stated earlier, IMHO it's not up to model, it's due to
> vast resources that have been spent on RDBMS research and development.

This is the one point I cannot let pass unchallenged.
When the Relational model was first being implemented into a DBMS
product, the Network Model was king. There were not vast resources
forcing the Relational Model onto the programming field. It was
practical software engineers that saw the advantages. From that grew
the behemoth that is now ORACLE. (At least that is what I understand
as the main source of the "vast resources" you mention.) You are not
fighting ORACLE marketing droids in this discussion.

But maybe I misread your comment. Further detail is welcome.

Ed

David Portas

Jul 20, 2006, 5:37:00 PM
Josip Almasi wrote:
> >
> > What multidimensional model? Kimball popularised some methodology and
> > some jargon under the Dimensional banner. Some people find such
> > terminology useful but it doesn't change the data model. It is still
> > relational or SQL. Do you think relational is something other than
> > multi-dimensional?
>
> You really gave me some food for thought with this:)
> Yes, you're right. Only difference is normalization.

Not even that. The Kimball-styled "Dimensional Model" operates only at
the logical level. Despite hype to the contrary it is orthogonal to
normalization, not the antithesis of it. Normalization is concerned
only with base relations whereas Kimball's ideas can be implemented
purely through views without disturbing NF at all. RK has done a lot of
harm by not explaining the nature of his ideas better. His works contain
plenty of confusion over logical versus physical.

> > A recursive CTE. More generally speaking the query is just some
> > restriction of a transitive closure (ie. it need not be defined
> > recursively).
>
> But needs to be iterated regardless, right?
> If so, it's not relational.
> Please explain; I just don't understand how it can be represented as set.

The relational model does not exclude iteration. RM says nothing about
how any particular operation is performed inside the DBMS. What matters
is that the operation can be logically defined using only relational
operators. The transitive closure of a graph is certainly a set. A
subtree within that graph is just a subset of it. In the case of a
hierarchy it is particularly easy to define a subset just by specifying
the root of a subtree.
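To make that concrete, here's a minimal sketch (not from the thread) using Python's stdlib sqlite3, which supports WITH RECURSIVE. The node table and its contents are made up for illustration; the point is that the subtree rooted at a given node comes back as an ordinary set of rows, defined purely by the query.

```python
import sqlite3

# Hypothetical adjacency-list hierarchy: (id, parent_id, name).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)"
)
conn.executemany(
    "INSERT INTO node VALUES (?, ?, ?)",
    [(1, None, "root"), (2, 1, "a"), (3, 1, "b"), (4, 2, "a1"), (5, 4, "a1x")],
)

# Recursive CTE: the subtree rooted at node 2 is just a subset of the
# transitive closure of the parent/child relation.
subtree = conn.execute("""
    WITH RECURSIVE sub(id, name) AS (
        SELECT id, name FROM node WHERE id = 2
        UNION ALL
        SELECT n.id, n.name FROM node n JOIN sub s ON n.parent_id = s.id
    )
    SELECT name FROM sub ORDER BY id
""").fetchall()

print([row[0] for row in subtree])  # ['a', 'a1', 'a1x']
```

How the DBMS evaluates the CTE (iteratively or otherwise) is an implementation detail; the result is specified entirely in terms of relational operators over sets.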

[dropped comp.ai.neural-nets from my reply]

--
David Portas

David Cressey

Jul 21, 2006, 7:03:49 AM

"David Portas" <REMOVE_BEFORE_R...@acm.org> wrote in message
news:1153431420....@m73g2000cwd.googlegroups.com...

> Josip Almasi wrote:
> > >
> > > What multidimensional model? Kimball popularised some methodology and
> > > some jargon under the Dimensional banner. Some people find such
> > > terminology useful but it doesn't change the data model. It is still
> > > relational or SQL. Do you think relational is something other than
> > > multi-dimensional?
> >
> > You really gave me some food for thought with this:)
> > Yes, you're right. Only difference is normalization.
>
> Not even that. The Kimball-styled "Dimensional Model" operates only at
> the logical level. Despite hype to the contrary it is orthogonal to
> normalization, not the antithesis of it.

Hear, hear!

Both can be useful, when used judiciously.

It's useful to distinguish between the dimensional model itself, and star
schema design.

> Normalization is concerned
> only with base relations whereas Kimball's ideas can be implemented
> purely through views without disturbing NF at all.

Or materialized in derived tables, kept current by ETL processing.
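A toy sketch of that idea (not from the thread), again in Python's stdlib sqlite3: the customer and sale tables are invented for illustration. The base relations stay normalized; the dimensional "fact" presentation is just a view over them, so NF is undisturbed.

```python
import sqlite3

# Assumed normalized schema: sales reference customers by key.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE sale (id INTEGER PRIMARY KEY, customer_id INTEGER,
                       day TEXT, amount REAL);
    INSERT INTO customer VALUES (1, 'EU'), (2, 'US');
    INSERT INTO sale VALUES (1, 1, '2006-07-20', 10.0),
                            (2, 2, '2006-07-20', 5.0),
                            (3, 1, '2006-07-21', 7.5);

    -- Star-style presentation as a view: the denormalized shape lives
    -- only at the presentation layer, not in the base tables.
    CREATE VIEW sales_fact AS
        SELECT s.day, c.region, s.amount
        FROM sale s JOIN customer c ON s.customer_id = c.id;
""");

totals = conn.execute(
    "SELECT region, SUM(amount) FROM sales_fact GROUP BY region ORDER BY region"
).fetchall()
print(totals)  # [('EU', 17.5), ('US', 5.0)]
```

Materializing the view into a derived table refreshed by ETL, as suggested above, changes only the physical strategy, not the logical model.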

Josip Almasi

Jul 26, 2006, 9:51:22 AM
Ed Prochak wrote:
>
>>Sure. But as I stated earlier, IMHO it's not up to model, it's due to
>>vast resources that have been spent on RDBMS research and development.
>
> This is the one point I cannot let pass unchallenged.
> When the Relational model was first being implemented into a DBMS
> product, the Network Model was king. There were not vast resources
> forcing the Relational Model onto the programming field. It was
> practical software engineers that saw the advantages. from that grew
> the behemouth that is now ORACLE. (at least that is what I understand
> as the main source of "vast resources" that you mention). You are not
> fighting ORACLE marketting droids in this discussion.
>
> But maybe I misread your comment. Further detail is welcome.

OK, then let's finish, techie part is over, no reason to crosspost further.

Vast resources as above, counted in brain power rather than
brainwashing power. Endless engineer-hours spent on R&D, etc.
Plus marketing, of course.

Back in the day it wasn't Oracle but IBM who pushed the tech... IIRC all
these people (Codd, Boyce, Chamberlain... except Ellison;)) were with
IBM. [1]
BTW in those days IBM had a monopoly and had abused it, as was proven later.
And I know of NDB oldtimers still bitching about that:) Even calling
Codd an idiot, and everyone using an RDBMS too:))))
(When fanatics on both sides call me an idiot, I know I'm right;))
And IMHO the Network model wasn't as much of a king as you seem to think.
I.e. I had a chance to work on a PDP-11; its RSX OS doesn't even have
directory trees, it's a 2D matrix:) Like, you cd 0,0 instead of cd /:)
I guess the designers thought of the file system like a file closet with
256x256 drawers for files:) Well, it didn't have dir trees, but it had
versioning... and an integrated DBMS:) A record manager AFAIK.

But the bottom line is, it doesn't matter if Dmitry has a better model,
since IBM can invest 1000 times more engineers to work on their
software. Even if Dmitry manages to make a better product, IBM will
simply buy him off. As happened with Informix for their Red Brick [2].
BTW in those days I was at Informix. I didn't work on the DBMS but with
the DBMS, and learned something about their inner workings anyway.

And in the end, IBM or Oracle, Dmitry or someone else, it's all the same.
See, David doesn't beat Goliath; it's a fairy tale;)

Regards...

[1]
http://www.research.ibm.com/resources/news/20030423_edgarpassaway.shtml

[2]
http://www-306.ibm.com/software/data/informix/redbrick/
