
What about mistakes?


Rick Sanderson

Dec 31, 1997

Hi all,

I'm sure I'm gonna get riled over this, but here goes anyway....

As much as we'd like to think otherwise, programmers are not perfect. We
make mistakes. I don't mean coded bugs, but rather unforeseen design
mistakes which don't rear their heads until months after production.

Given that....

I like the doctrine that says "change is made through extension". I like
the Open-Closed Principle, and LSP, too. In a perfect world, you'd add to
systems following these principles and all would be merry!

But what about a previously unforeseen discovery, made while analysing
Version3, which has ramifications throughout the system? Class A, an
otherwise sound class with many dependencies, contains a design flaw
negating support for Version3. We can talk about Visitors, or
subclassing Class A's base, or a number of other "elegant" solutions.
But given the significant change, the elegance will produce a large
number of spin-off classes, and/or complex relationships, to enable
Class A, and its derivatives, to support Version3. This makes Version4
all the more difficult to produce later.
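
To make the trade-off concrete, here is a minimal C++ sketch (all names
invented for illustration) of the "change through extension" ideal:
Version3 behaviour arrives as a new subclass, and the existing class and
its clients stay untouched.

```cpp
#include <cassert>
#include <string>

// Hypothetical names, for illustration only.
class Exporter {                      // the abstraction clients depend upon
public:
    virtual ~Exporter() = default;
    virtual std::string run() const = 0;
};

class V2Exporter : public Exporter {  // existing code: closed for modification
public:
    std::string run() const override { return "v2 format"; }
};

class V3Exporter : public Exporter {  // Version3 support: open for extension
public:
    std::string run() const override { return "v3 format"; }
};

// Client code that never needs editing when new versions appear.
std::string publish(const Exporter& e) { return e.run(); }
```

The question, of course, is what to do when the flaw sits inside the
abstraction itself, where no new subclass can reach it.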

In such a case, would we not dig deep for the hacker in us all and change
ClassA, retest all dependencies, complete Version3, and then discard the
hacker with disgust?

It's just a question...

Happy New Year!

Rick.
rick...@cyberstreet.com

Tim Ottinger

Dec 31, 1997

Rick Sanderson wrote:
> As much as we'd like to think otherwise, programmers are not perfect. We
> make mistakes. I don't mean coded bugs, but rather unforeseen design
> mistakes which don't rear their heads until months after production.

No argument here.

> But what about a previously unforeseen discovery while analysing Version3
> which has ramifications throughout the system. Class A, an otherwise
> sound class and with many dependencies, contains a design flaw negating
> support for Version3.

One should suggest that the problem is that the system had such
a dependency structure that changes could propagate so. This
implies a very preventable design problem up front. Do you want
to talk about avoiding this first error, or coping with the
resultant (some may say inevitable) problem when the changes
occur?

> We can talk about Visitors, or subclassing Class A's base, or
> a number of other "elegant" solutions. But given the significant change,
> the elegance will produce a large number of spin-off classes, and/or complex
> relationships, to enable class A, and its derivatives, to support Version3.
> This makes Version4 all the more difficult to produce later.

It's called "code rot". It's what happens when release 1 has a poor
dependency structure that is not corrected.

> In such a case, would we not dig deep for the hacker in us all and change
> ClassA, retest all dependencies, complete Version3, and then discard the
> hacker with disgust?
>
> It's just a question...

I would argue that the opposite is true: we should reach for the
principled designer in us all, and correct the initial problem(s).
This Principled Designer in flowing cape needs to tackle the
dependency problems and correct the design, retest the dependencies,
complete Version 3, and then hang on for the long run.

There is also Cautious Principled Designer who has rather stronger
ties to the immediate bottom line. This one may choose to plan
the changes, come up with a short-cut plan to get Rel 3 out, with
guarantees that he'll be given part of a cycle to fix at least
part of the problem. There should be an active plan to remove
one or more 'known disablers' (things that resist desirable change)
per release as part of a strategy to stay in business.

If you read Robert Martin's papers on principles of OOD, or
attend any of our classes, you'll hear this described further.
He's sort of a leader in dependency management, which is one
of the most overlooked and practical parts of the OO discipline.
--
Tim
+-----------------------------------------------------------+
| Tim Ottinger: http://www.oma.com/ottinger |
| Object Mentor: http://www.oma.com 800-338-6716 |
+-----------------------------------------------------------+
| Design, Consulting, Mentoring, Training |
+-----------------------------------------------------------+
The important thing is to never stop questioning - A Einstein

Patrick Logan

Jan 1, 1998

Rick Sanderson <rick...@cyberstreet.com> wrote:

: In such a case, would we not dig deep for the hacker in us all and change
: ClassA, retest all dependencies, complete Version3, and then discard the
: hacker with disgust?

Yes, in a perfect world we would only have to *add* to our current system
to get the new functionality we desire.

But that does *not* mean _refactoring_ is *hacking*. Time constraints and
limits to knowledge force us to make best estimates of what is required
_now_ and hopefully the changes required _later_ will be manageable.

That's life. But it is *not* _hacking_. There are principles to
refactoring just as there are principles to design in the first place.
There is very little difference between designing the first release and
designing the second, except you have more information.

--
Patrick Logan (H) mailto:plo...@teleport.com
(W) mailto:patr...@gemstone.com
http://www.gemstone.com

Rick Sanderson

Jan 1, 1998

Tim Ottinger wrote in message <34AB0E21...@oma.com>...
:Rick Sanderson wrote:
:> But what about a previously unforeseen discovery while analysing Version3
:> which has ramifications throughout the system. Class A, an otherwise
:> sound class and with many dependencies, contains a design flaw negating
:> support for Version3.
:
:One should suggest that the problem is that the system had such
:a dependency structure that changes could propagate so. This
:implies a very preventable design problem up front. Do you want
:to talk about avoiding this first error, or coping with the
:resultant (some may say inevitable) problem when the changes
:occur?


We'd be fools not to seek methods to avoid design problems up front.
Hopefully, a past (recognized) mistake is never repeated, and we constantly
strive to learn techniques which help avoid other mistakes. But we are
still learning, and so mistakes are inevitable.

The goal is to achieve designs which conform to sound principles, which in
turn produce viable, stable, and upgradable applications. The question
posed is what happens when one discovers a design error which interferes
with the upgradable part.

:
:> We can talk about Visitors, or subclassing Class A's base, or
:> a number of other "elegant" solutions. But given the significant change,
:> the elegance will produce a large number of spin-off classes, and/or complex
:> relationships, to enable class A, and its derivatives, to support Version3.
:> This makes Version4 all the more difficult to produce later.
:
:It's called "code rot". It's what happens when release 1 has a poor
:dependency structure that is not corrected.

Not necessarily. Version1 and Version2 could be splendid apps... well
analyzed, well designed, and well implemented. But Version3 throws a wrench
into the works.

Elsewhere in this NG, there are ongoing discussions about how much study of
the domain is necessary prior to sitting down and designing the solution.
No matter how much study is done, there will always be unposed issues whose
relevancy to the current design can't possibly be considered.

In this case, the issues raised by Version3 were not studied during the
designs of Version1 and Version2, and as a result Version3 uncovers a
flaw. Without the specific unforeseen (and messy) requirement of
Version3, the app could well have continued, and its design could well
have held up strong.

:
:> In such a case, would we not dig deep for the hacker in us all and change
:> ClassA, retest all dependencies, complete Version3, and then discard the
:> hacker with disgust?

:
:I would argue that the opposite is true: we should reach for the
:principled designer in us all, and correct the initial problem(s).
:This Principled Designer in flowing cape needs to tackle the
:dependency problems and correct the design, retest the dependencies,
:complete Version 3, and then hang on for the long run.

Your point is well taken. But I am referring to a design error
introduced initially into an *important* class. The dependencies on that
class are fine. It is the class itself which is causing Version3 some
trouble.

:There is also Cautious Principled Designer who has rather stronger
:ties to the immediate bottom line. This one may choose to plan
:the changes, come up with a short-cut plan to get Rel 3 out, with
:guarantees that he'll be given part of a cycle to fix at least
:part of the problem. There should be an active plan to remove
:one or more 'known disablers' (things that resist desirable change)
:per release as part of a strategy to stay in business.


I suppose the original question could be rephrased:

Is it *hacking* when one alters the structure of an important class whose
only flaw was not anticipating an unforeseen domain requirement?

Rick


Tim Ottinger

Jan 1, 1998

Rick Sanderson wrote:
> The question posed is what happens when one discovers a
> design error which interferes with the upgradable part.

Okay I'll try (mostly) to stick to this.

> :It's called "code rot". It's what happens when release 1 has a poor
> :dependency structure that is not corrected.
>
> Not necessarily. Version1 and Version2 could be splendid apps... well
> analyzed, well designed, and well implemented. But Version3 throws a wrench
> into the works.

The term "well designed" and the idea of a single change rippling
all over the code are not very reconcilable. But assume that it
was sufficiently well designed to meet the first two releases'
requirements.

> Elsewhere in this NG, there are ongoing discussions about how much study of
> the domain is necessary prior to sitting down and designing the solution.
> No matter how much study is done, there will always be unposed issues whose
> relevancy to the current design can't possibly be considered.

I'd like to pursue this further. I've a little doctrine of "design
by distrust" that really eliminates most of these problems. It's not
so much about how much analysis you do, but how much you trust the
analysis and the immediate requirements.

> Your point is well taken. But I am more referring to a design error
> introduced intially into an *important* class. The dependencies on that
> class are fine. It is the class itself which is causing Version3 some
> trouble.

Your use of the words "important class" is one of the things I want
to consider in the context I promised not to dive too deeply into.
Why not take a look at the paper on the Dependency Inversion Principle
on our web page, or at some of the patterns literature, for means of
decoupling classes, dependency-wise? A chunk of design thinking
is about dealing with "dependency collectors" in your program so
that changes can be made. But I promised to go light on this for
now.
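
For readers who haven't seen that paper, here is a rough C++ sketch of
dependency inversion (invented names, not Robert's example): the
high-level class owns an abstraction, and the low-level details plug
into it, so dependencies point away from the important class.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch of the Dependency Inversion Principle.
class Notifier {                        // abstraction owned by the high level
public:
    virtual ~Notifier() = default;
    virtual void send(const std::string& msg) = 0;
};

class OrderProcessor {                  // high-level policy; no concrete deps
public:
    explicit OrderProcessor(Notifier& n) : notifier_(n) {}
    void process(const std::string& order) { notifier_.send("processed " + order); }
private:
    Notifier& notifier_;
};

class RecordingNotifier : public Notifier {   // low-level detail, swappable
public:
    void send(const std::string& msg) override { log.push_back(msg); }
    std::vector<std::string> log;
};
```

Changing the detail (mail, socket, database) never touches OrderProcessor.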

> Is it *hacking* when one alters the structure of an important class whose
> only flaw was not anticipating an unforeseen domain requirement?

Not necessarily. If it's done to realign the class to solid
software principles (including, but not limited to, dependency
management), and it's done with careful consideration of
propagation, enablers, and disablers, that's an engineering
job called refactoring. If a maximum-expediency, brute force
approach is taken, then that can be characterized as hacking.

Hackery is really more about the lack of diligence than any one
activity.

Bert Bril

Jan 2, 1998

Rick Sanderson wrote:

> I suppose the original question could be rephrased:
>
> Is it *hacking* when one alters the structure of an important class whose
> only flaw was not anticipating an unforeseen domain requirement?

I can't see why. It happens. You ask the users again and again: So, there is
always _exactly_ one XXX for each YYY? Period? Will there never be? Yes, yes,
yes. So, whereas you're designing with flexibility in mind, the effort to
support mechanisms that would allow 1-many relations is so big, you go for the
solution that cannot easily be changed from 1-1 (or many-1, whatever). And then,
a year later, you speak to the same users. Oh, yeah, now there's this new brand
of XXX that allows multiple YYY. It's great, we want it. %@#%$!

Is there _anyone_ out there who has never seen a situation like this one? So,
what do you do? If you have based an important ABC's interface on the 1-1
relation, you may be in deep trouble. And here I cannot follow your statement:
I would say a hacker would try to circumvent the fundamental problem with some
tricks (hoping for the best), whereas the software engineer with the future in
mind will want to change the interface and fix/re-build the dependents so
that the rest of the system is fully 'aware' of the 1-many relation.
Whether it's feasible to do that is of course another thing.

Maybe I understand the question incorrectly, but I'd say that _not_ altering the
structure would be a possible sign of hacking.
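
A minimal C++ sketch of the 1-1 trap, with invented names: once the
interface is changed to expose the 1-many relation outright, the
compiler drags every dependent along, which is exactly the point.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical names for Bert's XXX/YYY anecdote.
struct YYY { std::string name; };

// Before (the 1-1 assumption baked into the ABC's interface):
//   virtual YYY yyyOf(const XXX&) const = 0;   // exactly one YYY per XXX

// After: the interface admits the 1-many relation, so every dependent
// is forced to become 'aware' of it rather than work around it.
class XXX {
public:
    void add(const YYY& y) { yyys_.push_back(y); }
    const std::vector<YYY>& yyys() const { return yyys_; }
private:
    std::vector<YYY> yyys_;
};
```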


Bert


-- de Groot - Bril Earth Sciences B.V.
-- Boulevard 1945 - 24, 7511 AE Enschede, The Netherlands
-- mailto:be...@dgb.nl , http://www.dgb.nl
-- Tel: +31 534315155 , Fax: +31 534315104

Robert C. Martin

Jan 2, 1998

Rick Sanderson wrote:

> I like the doctrine that says "change is made through extension". I like
> the Open-Closed Principle, and LSP, too. In a perfect world, you'd add to
> systems following these principles and all would be merry!
>

> But what about a previously unforeseen discovery while analysing Version3
> which has ramifications throughout the system.

The OCP (See Vlissides' quote in a different thread) must be employed
by engineers who understand the domain well enough to be able to
predict the kinds of changes that are likely to occur. This is
guesswork; but hopefully the guesses are educated.

When you guess right, you win. When you guess wrong, you lose.
When you lose, you can't change through extension, and you have to
make changes to the core of the design. You will guess wrong from
time to time.

OOD does not solve the problem. All it lets us do is shift the odds
in our favor a little bit. To the extent that we see the future
clearly, we can use OOD to help us design a software structure that
will survive the changes wrought by the future with minimum impact.
--
**We are looking for good engineers, See our website
**for more information.

Robert C. Martin | Design Consulting | Training courses offered:
Object Mentor | rma...@oma.com | Object Oriented Design
14619 N Somerset Cr | Tel: (800) 338-6716 | C++
Green Oaks IL 60048 | Fax: (847) 918-1023 | http://www.oma.com

"One of the great commandments of science is:
'Mistrust arguments from authority.'" -- Carl Sagan

Robert C. Martin

Jan 2, 1998

Rick Sanderson wrote:
>
> Tim Ottinger wrote in message <34AB0E21...@oma.com>...
> :
> :It's called "code rot". It's what happens when release 1 has a poor
> :dependency structure that is not corrected.
>
> Not necessarily. Version1 and Version2 could be splendid apps... well
> analyzed, well designed, and well implemented. But Version3 throws a wrench
> into the works.

If Version1 and Version2 were really well designed, then Version3
should not be able to throw the Spaniard in the works. It can be
argued that the quality of a design is measured by how much
it is impacted by changes.

Does this mean that if Version3 caused huge problems, the
designers of Version1 and Version2 were lacking? Yes. But that
shouldn't make them feel too bad. None of us can foresee everything
that is likely to happen. So none of us can produce perfect
designs.

What we *can* do is try to predict the most likely changes that will
occur (or, better yet, the most likely *kinds* of changes) and use
the principles of OOD to protect ourselves from them.

The danger is that this can be taken to extreme, yielding software
that is badly overengineered. So we need to strike a balance,
protecting ourselves from the changes we think are likely, and
accepting that we will sometimes miss.

Robert C. Martin

Jan 2, 1998

Rick Sanderson wrote:
>
> Elsewhere in this NG, there are ongoing discussions about how much study of
> the domain is necessary prior to sitting down and designing the solution.
> No matter how much study is done, there will always be unposed issues whose
> relevancy to the current design can't possibly be considered.


Right. Study of the domain is important, but it is not the sole
driving force of the design. The reason for that is simply that
the domain will change. Thus, we need to study the 'meta-domain'.
This is an abstract domain that covers not only the problem domain,
but all related domains into which the problem domain might migrate.

Robert C. Martin

Jan 2, 1998

Bert Bril wrote:
>
[nice anecdote about 1:N mapping elided]

>
> Maybe I understand the question incorrectly, but I'd say that _not_ altering the
> structure would be a possible sign of hacking.
>

I'd agree. When you finally discover that your original design is
insufficient to contain the changes being imposed upon it,
it is time to change the design.

A hacker will circumvent the design in clever and complicated ways.
An engineer will determine how the design can be changed to tolerate
not just the current change, but the whole family of changes that
it implies.

Dave Roberts

Jan 3, 1998

On Thu, 01 Jan 1998 19:46:08 -0600, Tim Ottinger <tott...@oma.com>
wrote:

>The term "well designed" and the idea of a single change rippling
>all over the code are not very reconcilable. But assume that it
>was sufficiently well designed to meet the first two releases'
>requirements.

This is a bit too black and white for me. I get your point, but I
still think there are times when the best laid plans are changed at a
future date. If such a change impacts an abstract interface that was
previously well designed to accommodate the earlier functionality,
that interface will have to be changed, and the change may ripple far
and wide. Of course, we try to prevent this, but it does happen.
Suggesting that the design was somehow flawed because it didn't take
into account all possible forms of change is a bit harsh.

I'll give you an example from one of Robert's recent posts dissecting
Elliott's lamp example. To quote Robert Martin:

"Now, actually, my favorite model in this chain is (3). I just
don't think that the domain of Lamps needs to be protected from
bizzare combinations of switch presses. I think that we can live with
lamps that turn on with the switch goes on, and turn off when the
switch goes off. So, unless one of my users convinces me that
bizarre switch presses are likely, I will go with (3)."

Okay, that's fine. Suddenly a requirement appears in rev 3 that allows
bizarre combinations of switch presses... Now what... ripple. Now, in
this example, the ripple won't go that far, but this was a simple
example. One can imagine a more complex app with a requirement foisted
on it in revision 3 that would similarly up-end a previous assumption
or simplification such as this.

-- Dave Roberts

Dave Roberts

Jan 3, 1998

On Fri, 02 Jan 1998 12:17:02 -0600, "Robert C. Martin"
<rma...@oma.com> wrote:

>The danger is that this can be taken to extreme, yielding software
>that is badly overengineered. So we need to strike a balance,
>protecting ourselves from the changes we think are likely, and
>accepting that we will sometimes miss.

Amen!

-- Dave Roberts

Tim Ottinger

Jan 3, 1998

Dave Roberts wrote:
>
> Tim Ottinger <tott...@oma.com> wrote:
>
> >The term "well designed" and the idea of a single change rippling
> >all over the code are not very reconcilable. But assume that it
> >was sufficiently well designed to meet the first two releases'
> >requirements.
>
> This is a bit too black and white for me. I get your point, but I
> still think there are times when the best laid plans are changed at a
> future date.

There are a few ways in which the "best laid plans" tend to fall
short. Here are three guidelines:

1) Don't consider all requirements to be laws of nature.

There are invariants (like the accounting equation that I
horribly misquoted earlier: Assets = Liabilities + Capital)
that cannot change. Then there are choices that were made
by the client and might have been made differently. These
may be reversed later. You should support the rules, but
also consider that the rules will change sometimes.

2) Only depend "outwards"

Don't build middle layers that depend on the UI or persistence
layers. It limits reusability, and also prevents you from
easily changing implementation technologies. There should be
a well-designed internal structure in your code that is pure.

3) Avoid dependency collectors

Things like "god classes" and switch..case statements, type
flags, and non-segregated mediator classes can collect lots
of dependencies, until your class diagram looks like a
spiderweb: all lines pointing to the center. If you see a
spiderweb in your diagrams, you need to consider whether you
can evict the spider. The Interface Segregation Principle is
one way.

If you don't consider these things, you greatly increase the
likelihood that you'll be caught off-guard by changes in the
near future.
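
As a rough illustration of that last guideline (a sketch with invented
names, not from any paper): segregated role interfaces keep each client
ignorant of the methods it doesn't use, so no class collects every
dependency in the system.

```cpp
#include <cassert>
#include <string>

// Hypothetical sketch of the Interface Segregation Principle.
class Printable {
public:
    virtual ~Printable() = default;
    virtual std::string print() const = 0;
};

class Persistable {
public:
    virtual ~Persistable() = default;
    virtual std::string save() const = 0;
};

// One class may play several roles, but each client sees only one.
class Report : public Printable, public Persistable {
public:
    std::string print() const override { return "report text"; }
    std::string save() const override { return "report row"; }
};

std::string printClient(const Printable& p) { return p.print(); }   // knows nothing of saving
std::string storeClient(const Persistable& p) { return p.save(); }  // knows nothing of printing
```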

> Okay, that's fine. Suddenly a requirement appears in rev 3 that allows
> bizarre combinations of switch presses... Now what... ripple. Now, in
> this example, the ripple won't go that far, but this was a simple
> example. One can imagine a more complex app with a requirement foisted
> on it in revision 3 that would similarly up-end a previous assumption
> or simplification such as this.

True, if dependency structures were such that changes were
not stopped at, say, an abstract client or abstract server
interface.

Take a logically three-tiered system that also has communication
via serialization over sockets and has to update two databases
with different schemas. The most common change to these kinds of
apps is a requirement to track new, different data. This requires
change to the business model, obviously, but it should affect
only one or two classes. You have to enter the data, and so there
should be an effect on a browser, possibly a menu, and certainly
a data entry screen. This change is in a new layer, but still
should be isolated. Now look at the persistence layers. Probably
there is schema change in both databases, but only in the
parts where the business model's data is stored and nowhere else.
If a table was added, then you should be able to complete this
change by only adding code, and without editing much of anything.
In the ODBMS, it should be possible to deal only with the objects
that changed and nothing else. What about the socket stuff? If we
only send the keys, and allow the apps to look up the objects,
there is no real change at all. If we send all of the data, and
the new fields have to be sent, then we should still only edit
the builders for the classes that changed, and nothing else.

Here we had ripples across layers, but we did not have ripples
within any layer. The results are still estimable in terms
of quality and time.

No, we can't prevent all changes, but we should (by now) be
able to contain them, and predict the extent of the ripples
they can cause.

Rolf F. Katzenberger

Jan 11, 1998

Tim Ottinger wrote:

> There are a few cases where the "best laid plans" tend to fall
> short, and here are three:

[snip]


> 2) Only depend "outwards"
>
> Don't build middle layers that depend on the UI or persistence
> layers. It limits reusability, and also prevents you from
> easily changing implementation technologies. There should be
> a well-designed internal structure in your code that is pure.

IMHO, persistence layers should provide an interface that is as simple
as two methods like Save() and Load() for an object. Depending
on such a persistence layer prevents me from depending on the concrete
persistence _technology_ involved. In fact, I'm using this approach a
lot to stay independent from streams, ODBC and so on.
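
A minimal C++ sketch of the sort of interface I mean (names invented;
any resemblance to a real framework is accidental):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// The object depends only on this tiny abstraction, never on
// streams, ODBC, or a vendor API.
class Store {
public:
    virtual ~Store() = default;
    virtual void save(const std::string& key, const std::string& value) = 0;
    virtual std::string load(const std::string& key) const = 0;
};

class InMemoryStore : public Store {         // stand-in for ODBC, streams, ...
public:
    void save(const std::string& k, const std::string& v) override { data_[k] = v; }
    std::string load(const std::string& k) const override { return data_.at(k); }
private:
    std::map<std::string, std::string> data_;
};

class Account {                              // knows only the abstract Store
public:
    Account(std::string id, int balance) : id_(std::move(id)), balance_(balance) {}
    void saveTo(Store& s) const { s.save(id_, std::to_string(balance_)); }
    static Account loadFrom(const Store& s, const std::string& id) {
        return Account(id, std::stoi(s.load(id)));
    }
    int balance() const { return balance_; }
private:
    std::string id_;
    int balance_;
};
```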

I'm very interested in your opinion about what is an optimal
"persistence layer".

Regards,
Rolf

--
Rolf F. Katzenberger / Software Developer
See: http://www.geocities.com/SiliconValley/Park/9557
PGP: http://wwwkeys.pgp.net:11371/pks/lookup?op=get&search=0x3B39491F
(Fingerprint F1C0 3116 F6D4 DA33 E61D D2E4 2FB8 D6B6 3B39 491F)

Tim Ottinger

Jan 12, 1998

Rolf F. Katzenberger wrote:
>
> IMHO, persistence layers should provide an interface that is as simple
> as two single methods like Save() and Load() for an object. Depending
> on such a persistence layer prevents me from depending on the concrete
> persistence _technology_ involved. In fact, I'm using this approach a
> lot to stay independent from streams, ODBC and so on.

If an object has a Save() method for saving to an Oracle RDBMS
with a given schema, it won't port without modification to
a system using Gemstone. The Save() and Load() would have to
be further generalized so that the object had no dependence
upon the schema or database technology. Also, you always have
to have commit/rollback APIs. The idea would be that your app
would have to know that there is a database, but only to manage
transactions (and initial setup).


> I'm very interested in your opinion about what is an optimal
> "persistence layer".

A good place to start is http://www.oma.com/ottinger/BakersDozen.html
where I started laying out what I think the ground rules are.

One of my biggest concerns is that almost every persistence
framework is developed to make the user dependent upon the
vendor of the framework. They're usually invasive, requiring
the code author to scatter their calls all throughout their
code, and often all through their business objects. I think
that it's because the industry is young enough that doing
persistence *at all* is a major concern, and then efficiency
after that. I don't think that the major vendors have considered
being non-invasive as an important or even desirable strategy,
and they don't think much about dependency structures.

The trick would be to see a database as "just another system
that has to be updated when things change" instead of seeing
it as the central structure of the application. You don't
want to think of the UI as "just another way to get data
into the database", but the other way 'round. The ideal
persistence layer would be just as useful for creating messages
and sending them to peer applications as it is for persisting
data in a database, and should allow many simultaneous external
systems.

If I manage to create the ideal persistence layer, I'm going
to give it away for free. I'm working on something that is
very promising, because the dependency structure is exactly
what I want. I think that code could be generated for it,
and that the code would be non-invasive, and I think that
it could be made to work for any kinds of databases, even
multiple databases concurrently, and would be available as
a mechanism that the application could use for non-persistence
needs (like a general purpose observer).

But I've not got it completely worked out (or I'd have posted
an announcement here).

Patrick Logan

Jan 12, 1998

Tim Ottinger <tott...@oma.com> wrote:

: If an object has a Save() method for saving to an Oracle RDBMS
: with a given schema, it won't port without modification to
: a system using Gemstone.

The Enterprise Java Beans specification is beginning to address this issue
from a minimalist POV. The ODMG addresses the issue from a more elaborate
POV. It seems to me there is more momentum behind EJB than ODMG,
currently.

: One of my biggest concerns is that almost every persistence
: framework is developed to make the user dependent upon the
: vendor of the framework. They're usually invasive, requiring
: the code author to scatter their calls all throughout their
: code, and often all through their business objects.

My impression was that object databases strive to be non-invasive. Can you
cite some specific examples that fail?

Tim Ottinger

Jan 12, 1998

Patrick Logan wrote:
>
> My impression was that object databases strive to be non-invasive. Can you
> cite some specific examples that fail?

If I have to derive from a vendor-specific class in order to
make my classes (sometimes) persistent (to that database),
I consider that to be invasive. It reduces the likelihood
of my taking my class and using it where only streaming to
flat files is used. Imagine writing a program for any given
ODBMS, and then using it exactly as-is where that database
is not used. For that matter, write an app that stores the
same objects into (heck, I don't know, say) ObjectStore and
also into a relational database. How much more work or rework
is really required?

Also, you have to consider how much rework there is when the
schema changes. If a schema change (say, due to another
application that uses the database) requires rework in your
application, then the schema is somewhat invasive.

If it were non-invasive, then the key classes are not reworked
if I change storage technologies, or add additional storage
technologies. The business classes are exactly the same, but
the persistence layer changes in a way that does not propagate
to other layers.
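
To sketch what I mean (a toy example with invented names; a real mapper
would be far richer): the business class carries no persistence code at
all, and a separate mapper in the persistence layer reads its public
state. Swapping storage technologies means swapping mappers, not
editing the business class.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Pure business object: no base class, no Save(), no vendor API.
class Employee {
public:
    Employee(std::string name, int salary) : name_(std::move(name)), salary_(salary) {}
    const std::string& name() const { return name_; }
    int salary() const { return salary_; }
private:
    std::string name_;
    int salary_;
};

// Lives entirely in the persistence layer; the map stands in for
// an RDBMS table, an ODBMS, or a flat file.
class EmployeeMapper {
public:
    void save(const Employee& e) { rows_[e.name()] = e.salary(); }
    Employee load(const std::string& name) const { return Employee(name, rows_.at(name)); }
private:
    std::map<std::string, int> rows_;
};
```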

It's my understanding that you can use ODBC layers to more-or-less
transparently change between RDBMSs, and maybe the ODMG
standard will let you more-or-less quietly change between object
databases, but you're still dependent on the specific schema,
and on the specific storage technology at the very least. I
want business objects to be independent so I can use them in
many different applications that use many different technologies,
or several different ones at once (read from ODI's, write to
a flat file, to Gemstone, to two different RDBMSs). I had an
app where we had three data stores. One was an RDBMS, one
was an ODBMS (updated via hand-written programs and TCP messaging)
and the other was not a traditional database, but a cellular
switch. It was a real C++ app, used by real companies, really
deployed.

At the time we rejected almost every O/RM tool and ODBMS
until we realized that we were just going to have to do
it all ourselves. We had a fellow named Tony Ioanides who
helped to build a much less invasive model. It was nice
stuff. I'd been pressuring him to write up some papers on
it.

I haven't really looked lately, but does anyone provide the
general mechanisms that really make this possible? If so,
I'll give up my pet research project and become someone
else's evangelist!

Kohler Markus

Jan 13, 1998

Tim,
You might be interested in Aspect-Oriented Programming (AOP).
See for example http://www.parc.xerox.com/spl/projects/aop/ for more
information.

From my point of view, aspect-oriented programming tries to solve this
problem not only for databases but also for communication issues and
other similar problems.

It would be nice for example if one could easily change an application
from a CORBA based protocol to a DCOM based solution or a home grown
protocol without touching the buisness classes.

Some people think that AOP be the next paradigm shift after OOP.

Markus

--
+----------------------------------------------------------------------------+
| Markus Kohler                                   Hewlett-Packard GmbH        |
| Software Engineer                               OpenView Software Division  |
| IT/E Response Team                                                          |
+----------------------------------------------------------------------------+

Andrew Hunt

Jan 13, 1998, 3:00:00 AM1/13/98
to

On Mon, 12 Jan 1998 16:14:01 -0600, Tim Ottinger <tott...@oma.com> wrote:
>
> It's my understanding that you can use ODBC layers to more-
> or-less transparently change between RDBMS, and maybe the ODMG
> standard will let you more-or-less quietly change between object
> databases, but you're still dependent on the specific schema,

Sadly, the ODMG didn't seem to quite go far enough, at least not yet.
You can code to the base API that is supported across all
compliant ODBMS's, but only for toy applications. To get anything
done in a commercial environment with real live users you'll have
to dip into each vendor's proprietary API.

Schema definition itself is not consistent between any of the major
vendors (some require the ODMG's method of specifying a definition
separately, some glean it from the source code directly).

/\ndy


+------------------------------------------------------------------+
| Andrew Hunt | Object Oriented |
| Toolshed Technologies, Inc. | Software Design and Development |
| | in Unix environments |
| email: an...@toolshed.com | (SGI,DEC,Sun,Linux) |
+------------------------------------------------------------------+

Tim Ottinger

Jan 13, 1998, 3:00:00 AM1/13/98
to

Patrick Logan wrote:
>
> Tim Ottinger <tott...@oma.com> wrote:
>
> : If it were non-invasive, then the key classes are not reworked
> : if I change storage technologies, or add additional storage
> : technologies. The business classes are exactly the same, but
> : the persistence layer changes in a way that does not propagate
> : to other layers.
>
> As far as I know, this is achieved with...
>
> : ...ODI... [and]
> : ...Gemstone...
>
> ...and the other major OODBMSs. But I am only truly familiar with
> Gemstone. In these systems there really is no "persistence" layer at all,
> and the "schema" is simply the class definition itself.

Wow. I can change from Gemstone to an RDBMS without any changes to my
business classes at all? And no changes to a persistence layer?


> OTOH if you need to be able to include...
>
> : a flat file... [and] different RDBMSs...
>
> Then you definitely need to translate to/from a persistent representation.
> So you'd want a persistent "peer" of some kind for the "business objects".

I want my classes to know nothing about 'persistent representations'
if at all possible. If I have to write IDL instead of C++ or Java
or Python, then that's a form of invasion also.

I really can't imagine how it can be that you would be *that* non-
invasive. As far as I've known, any code written to use an ODBMS
had to avoid using template classes as members, usually had to
write IDL-like schema instead of C++, and usually had to use the
DB vendor's primitives sometimes instead of the containers that
they chose for their rep. I know that ODI was incredibly invasive,
insisting that we *not* use STL strings or containers, and having
(IIRC) special date and time classes for us to use among other
things. Have things come so far, so fast?


> If you read the Enterprise Java Beans [draft] specification, you will see
> described an architecture for accomodating all of these persistence
> mechanisms via hooks that can be used for translation to persistent
> formats if needed.

I'm really behind in learning Java stuff. I better get with
it. I'll see if I can locate and download that puppy soon.

Patrick Logan

Jan 14, 1998, 3:00:00 AM1/14/98
to

Tim Ottinger <tott...@oma.com> wrote:

: If it were non-invasive, then the key classes are not reworked
: if I change storage technologies, or add additional storage
: technologies. The business classes are exactly the same, but
: the persistence layer changes in a way that does not propagate
: to other layers.

As far as I know, this is achieved with...

: ...ODI... [and]
: ...Gemstone...

...and the other major OODBMSs. But I am only truly familiar with
Gemstone. In these systems there really is no "persistence" layer at all,
and the "schema" is simply the class definition itself.

OTOH if you need to be able to include...

: a flat file... [and] different RDBMSs...

Then you definitely need to translate to/from a persistent representation.
So you'd want a persistent "peer" of some kind for the "business objects".

If you read the Enterprise Java Beans [draft] specification, you will see
described an architecture for accommodating all of these persistence
mechanisms via hooks that can be used for translation to persistent
formats if needed.

--

Patrick Logan

Jan 15, 1998, 3:00:00 AM1/15/98
to

Tim Ottinger <tott...@oma.com> wrote:

: Wow. I can change from Gemstone to an RDBMS without any changes to my
: business classes at all? And no changes to a persistence layer?

Of course not. You do not need a "persistence layer" with Gemstone, but
you do need one with an RDBMS, as I wrote later in the message. Now it is
possible that layer could be generated and kept independent of the domain
"layer".

: I want my classes to know nothing about 'persistent representations'
: if at all possible. If I have to write IDL instead of C++ or Java
: or Python, then that's a form of invasion also.

This is what most OODBMSs aim for. Gemstone has hit the mark pretty well.
Try the Guided Tour for Java. You do not need a "persistence layer" or an
IDL or a pre-processor even. What you do need to do is send the system a
"commitTransaction()" message when you want to make your changes
permanent.

: I really can't imagine how it can be that you would be *that* non-
: invasive. As far as I've known, any code written to use an ODBMS
: had to avoid using template classes as members, usually had to
: write IDL-like schema instead of C++...

C++ is a particularly difficult language for transparent persistence.
Gemstone supports Java and Smalltalk because their object models and
semantics are better for being "non-invasive". C and C++ can integrate
with applications running in Gemstone (via the Java or Smalltalk native
interfaces).

Tim Ottinger

Jan 15, 1998, 3:00:00 AM1/15/98
to

Patrick Logan wrote:

> you do need [a persistence layer] with an RDBMS, as I wrote later in
> the message. Now it is possible that layer could be generated and kept
> independent of the domain "layer".

This is the idea, and it's better if it's separated into a part of
the system that is independent of the domain "layer" and another
cooperating part which is independent of the database.

> This is what most OODBMSs aim for. Gemstone has hit the mark pretty well.
> Try the Guided Tour for Java. You do not need a "persistence layer" or an
> IDL or a pre-processor even.

Ahhhhh... that does sound really good.

> What you do need to do is send the system a "commitTransaction()"
> message when you want to make your changes permanent.

If you can tap into this and take actions the same way when a
commitTransaction() is called (i.e. navigate through the changed
objects and take whatever actions you like) then you have in
your hand (or on your disk; whatever) exactly what I want!

> C++ is a particularly difficult language for transparent persistence.

Yeah, no self-description. Maybe the trick would be to demand self-
description of C++ objects via a library, maybe have them create a
'description' structure that gives the names, types, and values of
the state variables. Then you can parse the description and do whatever
you like. The hard part is that there isn't a central collection of
C++ objects, as in a GC system, that you might navigate to see changes.
If those two were solved, then you could almost do the same trick,
eh?

Thanks for sticking with this. I guess I'm going to have to go
visit your site. If I can afford ($$ and space) a copy to play
with for a few months (for personal reasons, no project and no
sponsor), I think I could really get to like gemstone.

You represented it very well. Tell your boss you deserve a raise.

--

Kohler Markus

Jan 16, 1998, 3:00:00 AM1/16/98
to

Tim Ottinger wrote:
>
> Patrick Logan wrote:
>
> > you do need [a persistence layer] with an RDBMS, as I wrote later in
> > the message. Now it is possible that layer could be generated and kept
> > independent of the domain "layer".
>
> This is the idea, and it's better if it's separated into a part of
> the system that is independent of the domain "layer" and another
> cooperating part which is independent of the database.
>
> > This is what most OODBMSs aim for. Gemstone has hit the mark pretty well.
> > Try the Guided Tour for Java. You do not need a "persistence layer" or an
> > IDL or a pre-processor even.
>
> Ahhhhh... that does sound really good.
>
> > What you do need to do is send the system a "commitTransaction()"
> > message when you want to make your changes permanent.
>
> If you can tap into this and take actions the same way when a
> commitTransaction() is called (i.e. navigate through the changed
> objects and take whatever actions you like) then you have in
> your hand (or on your disk; whatever) exactly what I want!
>
> > C++ is a particularly difficult language for transparent persistence.

... and for transparent distributed objects, and for self-describing
components (like Java's Beans), and, and .... ;-)

>
> Yeah, no self-description. Maybe the trick would be to demand self-
> description of C++ objects via a library, maybe have them create a
> 'description' structure that gives the names, types, and values of
> the state variables. Then you can parse the description and do whatever
> you like. The hard part is that there isn't a central collection of
> C++ objects, like a GC system, you might navigate to see changes.
> If those two were solved, then you could do almost do the same trick,
> eh?
>
> Thanks for sticking with this. I guess I'm going to have to go
> visit your site. If I can afford ($$ and space) a copy to play
> with for a few months (for personal reasons, no project and no
> sponsor), I think I could really get to like gemstone.

You can get free trial CD-ROM from Gemstone with a temporary license
valid for about 1 month.

>
> You represented it very well. Tell your boss you deserve a raise.
>

Rolf F. Katzenberger

Jan 17, 1998, 3:00:00 AM1/17/98
to

Tim Ottinger wrote:

> Rolf F. Katzenberger wrote:
> >
> > IMHO, persistence layers should provide an interface that is as simple
> > as two single methods like Save() and Load() for an object. Depending
> > on such a persistence layer prevents me from depending on the concrete
> > persistence _technology_ involved. In fact, I'm using this approach a
> > lot to stay independent from streams, ODBC and so on.
>
> If an object has a Save() method for saving to an Oracle RDBMS
> with a given schema, it won't port without modification to a
> system using Gemstone. The Save() and Load() would have to
> be further generalized so that the object had no dependence
> upon the schema or database technology. Also, you always have
> to have commit/rollback APIs. The idea would be that your app
> would have to know that there is a database, but only to manage
> transactions (and initial setup).

Let me elaborate a little on that. The basic idea I'm exploiting is a
solution based on the Memento pattern. My Memento is basically a
container of properties (does that word remind you of something? ;)),
which are encapsulations of basic data types. All properties have a
common base class. The Memento collaborates with a Storage object,
which is in turn capable of storing and loading properties of any
type. Storage is an abstract base class. I write derived classes for
each kind of database, stream, or other storage medium.

You might think of the memento object as a snapshot taken from the
"real" object in memory. The "real" object's Save() and Load() methods
are in fact methods to create a memento or to read data from a
memento, respectively. Then, saving an object looks basically like
(method names changed for clarity):
(method names changed for clarity):

MyClass myPersistentObject;
ODBCStorage storage( objectDatabase );
// ...

myPersistentObject.GetMemento().Save( storage );


Loading an object basically looks like (method names changed for
clarity):

MyClass myPersistentObject;
ODBCStorage storage( objectDatabase );
// ...

myPersistentObject.SetFromMemento(
    storage.Load( myPersistentObject.GetID() ) );


I admit it looks a little weird and is not lightning fast, but it
isolates my business objects completely from any specific persistence
technology. Currently, I am working on a satisfactory solution to the
member objects problem (at the moment, Mementos are actually a list of
linked mementos, and Storages are actually a list of storages, but
building these lists is not all that handy, as you can imagine).


> > I´m very interested in your opinion about what is an optimal
> > "persistence layer".
>
> A good place to start is http://www.oma.com/ottinger/BakersDozen.html
> where I started laying out what I think the ground rules are.

Thank you for that pointer, I'll have a look at it.


Regards,
Rolf

--
______________________________________________________________________
Rolf F. Katzenberger / Software Developer 1998-01-15

See: http://www.geocities.com/SiliconValley/Park/9557
PGP: http://wwwkeys.pgp.net:11371/pks/lookup?op=get&search=0x3B39491F
(Fingerprint F1C0 3116 F6D4 DA33 E61D D2E4 2FB8 D6B6 3B39 491F)

