Appreciate advice:
Can I say as a simplification, that one of the first benefits of OO is
that it allows for "normalization" of methods -- similar to
logical-data-modelling's (ie. LDM's) normalisation of data?
Let me explain:
In LDM, data is normalised to 3rd normal form (3NF) where there will
be no duplication of data items in the various data entities.
Similarly, for OO, when we analyse the objects and draw them into a
class diagram and use GRASP (or other similar methods) to assign
methods to relevant classes, we are sort of also "normalising" the
methods so that algorithms only appear in one place in the whole
system -- ie. there is no duplication. This automatically makes all
algorithms reusable as "subroutines".
Eg. In functional decomposition, we could have two processes/modules
to develop in a system - say, "buy book" and "catalog book". When we
do the functional decomposition, we may end up with duplicate
algorithms for ValidateISBNCheckDigit in each module and we have to
realise that there are such duplicates before we can move them out
into a common subroutine.
In the case of OO, we would have naturally put ValidateISBNCheckDigit
as a method into a suitable class (eg. "Book") the first time we did
an analysis for the first module - say, "buy book". When we do OO
analysis of the 2nd module - "Catalog Book", the
ValidateISBNCheckDigit method would again naturally go to the same
class. Creation and reuse of a "subroutine" would come about
naturally.
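To make the example concrete, here is a minimal sketch (Python for illustration; the class and method names follow the post, but the rest is hypothetical). The ISBN-10 check digit rule itself is standard: the weighted sum of the ten characters (weights 10 down to 1, with a trailing 'X' standing for 10) must be divisible by 11. Both use cases then call the one method:

```python
class Book:
    """Hypothetical domain class; ValidateISBNCheckDigit lives here once."""

    def __init__(self, isbn: str, title: str = ""):
        self.isbn = isbn.replace("-", "")
        self.title = title

    def validate_isbn_check_digit(self) -> bool:
        # ISBN-10: weighted sum (weights 10..1) must be divisible by 11;
        # the final character may be 'X', which stands for the value 10.
        if len(self.isbn) != 10:
            return False
        total = 0
        for weight, ch in zip(range(10, 0, -1), self.isbn):
            if ch == "X" and weight == 1:
                value = 10
            elif ch.isdigit():
                value = int(ch)
            else:
                return False
            total += weight * value
        return total % 11 == 0

# "Buy book" and "catalog book" both reuse the same method, no duplication:
assert Book("0-306-40615-2").validate_isbn_check_digit()
assert Book("0-9752298-0-X").validate_isbn_check_digit()
assert not Book("0-306-40615-3").validate_isbn_check_digit()
```

The point is that the method's home was decided once, by the class it belongs to, not rediscovered per module.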
Does the above sound valid as a sort of simplification to one aspect
of OO?
> I am looking into how to introduce OO to a Structured-Method Person
> (ie. IT professional who is new to OO but who has done things via
> structured or functional decomposition method).
Use lots of diplomacy. :-)
Joking aside, it really depends on the person.
Generally, I'd say avoid making it a religious issue, but show concrete
examples of how and when an OO approach is superior; show examples on when
it's equivalent; show also its weaknesses.
The pitfall with similarities like the one you employ is that often they
have to be carried all the way to "make sense" to the other person; and
even then, they make sense in a mathematical way, and that's not the best
way for everybody. If the similarity is fuzzy, you risk ending up with
someone that misses the point but thinks he's caught *you* in error, and
comes up with an "aha!".
But before I can reach that state with a person with many years of
experience with Structured Methods, I need to first have him see the
benefit of OO in ways he is already familiar with -- hence, I was
thinking of trying some analogy to subroutines and data normalisation.
After he is convinced and willing to learn OO, I hope it will then be
easier to get him to learn and apply it.
Sometimes a person new to IT work will find it easier to learn OO than
one who already knows other methodologies (eg. Structured) which the
person constantly tries to compare and contrast with the OO concept
that is new to him.
Universe <> wrote in message news:<tt5uo0tbcs8uf3b0u...@4ax.com>...
> Yes, but it is akin for many projects to prematurely doing performance
> tuning--attempted before requirements task logic has been completed.
>
> I suggest that *modelling*, or the same thing *abstracting* -
> eliminating things irrelevant to the requirements and representing or
> enhancing things relevant to project requirements - the project domain
> and requirements should be the starting point of virtually *all*
> software development.
>
> The essence of OO sw engineering is reducing complexity by modelling -
> abstracting* - major requirements relevant role responsibilities, in
> the domain and in the requirements, as a network of collaborating
> object modules.
>
> "Object-oriented [OO] programming. A program execution
> is regarded as a physical model, simulating the behavior
> of either a real or imaginary part of the world...The
> notion of a physical model should be taken literally."
> ~ Computerized physical models
> by Madsen, Moeller-Pedersen, Nygaard (co-founder of OO)
>
> Similarly the essence of Structured sw engineering is reducing
> complexity by modelling - abstracting - major requirements relevant
> data handling processes and entities, in the domain and in the
> requirements, as a network of collaborating event modules.
>
> There's more to 'em obviously, different secondary things, like "avoid
> goto" for Structured - but these are the core skeletal definitions.
>
> Dependency management (DM), has 2 critical success factors (CSF):
> ` minimal (loose) inter-module coupling
> ` maximal (high) intra-module cohesion
> DM has a cornucopia of techniques for achieving its CSF's, like:
> - localizing declaration and definition code
> - Law of Demeter (for *both* Structured and OO)
> - minimizing code duplication
> - minimizing interface size
> However applying these DM techniques should be secondary to creating
> the overall system modelling framework, and DM should serve and
> strengthen the operation of the processing model.
>
> "We will introduce concepts such as information processes and systems
> and discuss abstraction, concepts, classification and composition. The
> framework provides a means to be used when modeling phenomena and
> concepts from the real world, it is thus a fundamental basis for
> object-oriented analysis and design. It is, however, also important
> for implementation, since it provides a means for structuring and
> understanding the objects and patterns generated by a program
> execution. The approach presented in this book is that analysis,
> design and implementation are programming or modeling activities, but
> at different abstraction levels."
> ~ Object-Oriented Programming in the BETA Programming Language
> Ole Lehrmann Madsen, Birger Møller-Pedersen, Kristen Nygaard
>
> [Nygaard is one of the 2 (1st among equals, imo) co-creators of the OO
> sw paradigm.]
>
> Of course, many multi-layered software system architectures often use
> different modelling paradigms for various layers. I.e. the domain and
> requirements layers may have OO modelling, while the data layer has
> Chen Entity/Relational modelling.
>
> To repeat, I think it's best to begin with a focus on
> modelling/abstraction to reduce complexity in general, with objects
> as the key, or core, granule of system abstraction.
>
> Modelling in mainstream OO sw engineering has multiple overlapping
> functions. Modelling is the basis for understanding the domain and
> use case requirements. And fortunately, a key thing that makes OO
> modelling a big win, the OO analysis models - class, sequence, state,
> etc - can and should serve as the core skeleton of the system's high
> level - user interface, domain model, and requirements model -
> physical design. Just as Nygaard et al state in the last excerpt.
>
> This is the way OO's *seamless* design is realized. This means key
> analysis object model entities should be preserved as the skeleton for
> the system's high level physical design.
>
> Elliott
> I am looking into how to introduce OO to a Structured-Method Person
> (ie. IT professional who is new to OO but who has done things via
> structured or functional decomposition method).
Hire a mentor whose business it is to train people in OO techniques.
That is not meant facetiously. The OO approach is inherently
incompatible with the SA/SD/SP approaches of traditional procedural
development so the first thing the convert needs to do is forget
everything they ever knew about SA/SD/SP. Yet they will quite naturally
be looking for familiar patterns (after all, it is just software
development, right?) so it will be very tempting for them to map
SA/SD/SP onto OO development. This is especially tempting since the
OOPLs are 3GLs and they /look/ very procedural, so it seems as though
procedural techniques can be mapped across directly.
To overcome that procedural mapping one needs somebody who is already
used to dealing with it. IOW, it needs to be anticipated and the mentor
needs to be skilled in recognizing when it occurs, which is not always
easy to see. The sheer complexity of OO development also presents a
daunting training task that is best left to an expert. For example, ...
>
> Appreciate advice:
> Can I say as a simplification, that one of the first benefits of OO is
> that it allows for "normalization" of methods -- similar to
> logical-data-modelling's (ie. LDM's) normalisation of data?
The static views in OOA/D use the same set theory as the relational data
model and one normalizes a Class Diagram in pretty much the same way.
But a Class Diagram is not a Data Model and there are significant
differences in the semantics as well as the way it is constructed.
Learning those differences is an important step on the Path to OO
Enlightenment, so I would be cautious about citing that as a benefit in
the sense that it is something that is transferable.
I'm not sure if you mean first in order or first in importance when you
say, "first benefits". If you meant first in importance, then I would
disagree. Normalization is really a pretty peripheral tool in the OO
scheme and the benefits of OO are primarily focused on improved
maintainability and a better bridging of the gap between the customer's
problem space and the developer's computing space.
>
> Let me explain:
> In LDM, data is normalised to 3rd normal form (3NF) where there will
> be no duplication of data items in the various data entities.
>
> Similarly, for OO, when we analyse the objects and draw them into a
> class diagram and use GRASP (or other similar methods) to assign
> methods to relevant classes, we are sort of also "normalising" the
> methods so that algorithms only appear in one place in the whole
> system -- ie. there is no duplication. This automatically makes all
> algorithms reuseable as "subroutines".
I'm afraid I don't buy this. The normalization in a class diagram is
focused on eliminating redundancy. (It just applies the same RDM
guidelines to behavior properties that a class diagram or data model
apply to knowledge properties.) That makes maintenance easier because
changes only need to be made in one place. It also reduces
implementation effort at the OOP level. However, it really has very
little to do with reuse. Reuse is enabled by problem space abstraction,
encapsulation, implementation hiding, and the use of disciplined,
generic interfaces.
[As a practical matter, one rarely needs to normalize a Class Diagram.
That's because the nature of OO abstraction is to abstract intrinsic
properties of entities in the problem space. Customers aren't idiots
and they have already reduced redundancy in their environment. So when
one abstracts entities from the problem space, the customer has already
isolated functional responsibilities. IME, 3NF violations in a Class
Diagram are more often due to the developer not properly understanding
the problem space than they are due to the problem space itself being
redundant. So it is not very often that reviewers get to jump on 3NF
violations.]
*************
There is nothing wrong with me that could
not be cured by a capful of Drano.
H. S. Lahman
h...@pathfindermda.com
Pathfinder Solutions -- Put MDA to Work
http://www.pathfindermda.com
blog (under constr): http://pathfinderpeople.blogs.com/hslahman
(888)-OOA-PATH
Are you suggesting reuse via shared subroutines is not "natural"
outside of OO? (True, there are a lot of bad developers
that don't factor into functions, but there are bad developers
in ANY paradigm.)
Further, I don't see why "catalog book" would necessarily
need to validate ISBN. That is not a function of
"catalog", at least as I interpret your writing.
As an OOP skeptic, I have yet to see "killer code
proof" that OOP is better (for a domain similar to mine),
despite many pleas. In fact, OO fans never seem to agree on
why OO is allegedly better. Some see it being about
better (alleged) code structure, and others much more
about an internal way of thinking about the world.
Inconsistency in the OO world is a huge wall to OO newbies
(and oldbies).
My advice to the person being retrained is: "Go with the
flow. OOP may or may not be better or logical, but it is
where the paychecks are. Wait until you get your foot in the
door, and then ignore OO if you don't find it useful. Otherwise,
buck up and play the game."
>
> Does the above sound valid as a sort of simplification to one aspect
> of OO?
> I am looking into how to introduce OO to a Structured-Method Person
> (ie. IT professional who is new to OO but who has done things via
> structured or functional decomposition method).
>
> Appreciate advice:
> Can I say as a simplification, that one of the first benefits of OO is
> that it allows for "normalization" of methods -- similar to
> logical-data-modelling's (ie. LDM's) normalisation of data?
That's pretty good. You can take this in a few ways as well:
1) Every method in OO is a subroutine. You can write better code if you
have lots of little subroutines rather than a few big ones (he knows this).
But subroutines are a PITA. OO removes the PITA part of it. So you can have
*lots* of *little* methods (subroutines) without the pain.
2) OO allows you to pass "data" as subroutine parameters, but this can get
carried away...ultimately every data item in the entire system becomes a
parameter. Just as other languages allow you to "place" data in structured
form or in "fixed" places so you don't have to pass everything around, OO
does this with objects. Objects "hold" data. So the methods can come to the
data rather than the data coming to the methods.
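Point 2 can be sketched in a few lines (Python for illustration; all names here are hypothetical). The same computation is written once with data travelling as parameters, and once with the data "living at home" in an object:

```python
# Structured style: the data travels to the subroutine as parameters.
def total_price(prices, tax_rate):
    return sum(prices) * (1 + tax_rate)

# OO style: the object holds the data; the method lives with it.
class Order:
    def __init__(self, prices, tax_rate):
        self.prices = list(prices)   # data held by the object
        self.tax_rate = tax_rate

    def total_price(self):           # no parameters: the data is already here
        return sum(self.prices) * (1 + self.tax_rate)

# Same result either way; only where the data lives differs.
assert total_price([10.0, 20.0], 0.1) == Order([10.0, 20.0], 0.1).total_price()
```
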
3) So have objects which hold data and we place our subroutines in methods
in these objects. So we pass some data around in method parameters and
other data "lives at home" within the object that has the method. This is
both the benefit of OO and its hell. The hell is that learning how to
gather/align your data and your subroutines into the appropriate "common"
objects is extremely difficult. It's what good OO is all about. Done poorly
(and you will do it poorly for the first year or so :-), you will have
worse spaghetti than if you didn't use objects at all.
4) Finally you can talk about inheritance, where some different types of
objects have commonalities and thus we can shift that commonality into
either a superclass or delegate it to a helper class. I'd also emphasize
that inheritance is always overused at first and will cause as many (more)
problems as it alleviates.
(IMHO, an experienced programmer should understand all this intuitively,
i.e., qualitatively.)
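The two options in point 4 might be sketched like this (a minimal illustration; the classes are hypothetical). The same commonality, upper-casing a title, is shifted once into a superclass and once into a delegated helper:

```python
# Option A: commonality pulled up into a superclass.
class Publication:
    def __init__(self, title):
        self.title = title

    def display_name(self):
        return self.title.upper()

class Book(Publication):
    pass

class Magazine(Publication):
    pass

# Option B: commonality delegated to a helper class the object holds.
class TitleFormatter:
    def display_name(self, title):
        return title.upper()

class Journal:
    def __init__(self, title, formatter):
        self.title = title
        self.formatter = formatter

    def display_name(self):
        return self.formatter.display_name(self.title)

assert Book("oopsla").display_name() == "OOPSLA"
assert Journal("oopsla", TitleFormatter()).display_name() == "OOPSLA"
```

Delegation (Option B) avoids committing to one hierarchy, which is one reason it is often preferred once inheritance starts being overused.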
This is not necessarily true. If there is no duplication, often there
is little reason to make subroutines out of some size mantra alone.
If you want to make things into digestible units, then put comments
above stuff. Example:
//---- Wake Up ----
silenceAlarm();
coffee = brew(water, grounds);
logEvent("awake");
//---- Shower ----
if (waterTemp() > minTemp) {
    runShower(tenMinutes);
    logEvent("showered");
}
//---- Shave ----
cleanRazor();
shave();
logEvent("shaved");
//---- Get Dressed ----
....
Such code is highly readable to me, and provides K.I.S.S.
> But subroutines are a PITA. OO removes the PITA part of it. So you can have
> *lots* of *little* methods (subroutines) without the pain.
>
> 2) OO allows you to pass "data" as subroutine parameters, but this can get
> carried away...ultimately every data item in the entire system becomes a
> parameter. Just as other languages allow you to "place" data in structured
> form or in "fixed" places so you don't have to pass everything around, OO
> does this with objects. Objects "hold" data. So the methods can come to the
> data rather than the data coming to the methods.
Use map arrays, or better yet, tables. With tables you can
move to a database later if you need to, with little or
no extra coding, and get all the benefits a database offers
such as:
# Persistence
# Query languages or query ability (see DatabaseVerbs)
# metadata repository
# State management
# Multi-user contention management and concurrency (locks,
transactions, rollbacks, etc.)
# Backup and replication of data
# Access security
# Data computation/processing (such as aggregation and
cross-referencing)
# Data rule enforcement or validation
# Data export and import utilities
# Multi-language and multi-application data sharing
# Data change and access logging
The OO approach will often require major code overhauls to
get each of these features.
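For what the poster seems to mean by "map arrays, or better yet, tables", here is one minimal sketch (Python for illustration; the rows and the `select` helper are invented for the example). Records are plain rows, and behavior is a generic query over them rather than a method on a class:

```python
# Table-driven style: data as plain rows, behavior as generic queries.
books = [
    {"isbn": "0306406152", "title": "Numerical Recipes", "in_stock": True},
    {"isbn": "0131103628", "title": "The C Programming Language", "in_stock": False},
]

def select(table, predicate):
    """A tiny stand-in for a SQL-like WHERE clause."""
    return [row for row in table if predicate(row)]

in_stock = select(books, lambda row: row["in_stock"])
assert [row["title"] for row in in_stock] == ["Numerical Recipes"]
```

The claimed migration benefit is that rows like these map directly onto database tables, whereas object graphs would need restructuring first.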
>
> 3) So have objects which hold data and we place our subroutines in methods
> in these objects. So we pass some data around in method parameters and
> other data "lives at home" within the object that has the method. This is
> both the benefit of OO and its hell. The hell is that learning how to
> gather/align your data and your subroutines into the appropriate "common"
> objects is extremely difficult. Its what good OO is all about. Done poorly
> (and you will do it poorly for the first year or so :-), you will have
> worse spagetti than if you didn't use objects at all.
The problem is that there is often not a one-to-one relationship
between operations and nouns and/or data-structures. Even if
there is a strong relationship today, it may disappear tomorrow,
but OOP code makes it hard to uncouple this relationship.
I see little value to tight association. OO over-exaggerates the
value of such coupling. It is mantra, not logic, that OO
uses to justify it. A 1:1 relationship is phoney in the
real world and thus phoney in code most of the time.
>
> 4) Finally you can talk about inheritance, where some different types of
> objects have commonalities and thus we can shift that commonality into
> either a superclass or delegate it to a helper class. I'd also emphasize
> that inheritance is always overused at first and will cause as many (more)
> problems as it alleviates
The problem is that over the long run "variations" are often not
tree-shaped. *Set theory is superior* to tree-based subtyping
for this kind of thing, but OOP does not have set-theory built in.
Trees don't reflect the change pattern of real-world things
nearly as well as sets. OO textbooks exaggerate tree-ness
in the real world. Time to dump trees for serious real-world
modeling.
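The sets-versus-trees contrast being argued here can be shown in a few lines (a sketch; the products and traits are invented). Each item carries a *set* of traits, and a "variation" is just a set query, so no single hierarchy has to be chosen up front:

```python
# Set-based classification: items carry sets of traits instead of a
# position in one inheritance tree.
products = {
    "ebook":     {"digital", "readable"},
    "audiobook": {"digital", "audible"},
    "hardcover": {"physical", "readable"},
}

def having(trait):
    """All products carrying the given trait -- a set query, not a subtype."""
    return {name for name, traits in products.items() if trait in traits}

# The same items group differently under different queries; a tree would
# have to pick one of these groupings as "the" hierarchy.
assert having("digital") == {"ebook", "audiobook"}
assert having("readable") == {"ebook", "hardcover"}
```
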
>
> (IMHO, an experienced programmer should understand all this intuitively,
> i.e., qualitatively.)
...until one realizes that the real world does not reflect OO
mantra. Intuitively the world is flat. But when you look around
and study the bigger picture, you realize that flatness is a
falsehood and you grow up. Face it people, trees
are for babies, sets are for grown-ups. And 1:1 noun-to-verbs?
I don't know who that false doctrine is for.
Keep in mind that *relevancy is relative*. There often is no
single "right" global viewpoint. Different departments might
want different viewpoints of the same info. Relational modeling is
better able to deal with this than OOP in my opinion because
one does not hard-wire a single viewpoint around domain
nouns as they often do in OO modeling. A viewpoint in
relational usage tends to be local to needs, not global.
This better reflects the modern, complex real world.
>
> "Object-oriented [OO] programming. A program execution
> is regarded as a physical model, simulating the behavior
> of either a real or imaginary part of the world...The
> notion of a physical model should be taken literally."
> ~ Computerized physical models
> by Madsen, Moeller-Pedersen, Nygaard (co-founder of OO)
Nygaard was mostly a physical modeller by trade. The problem
is that computers can *transcend physical reality*.
For example modern book indexing systems
can index things by multiple viewpoints (categories)
that the Dewey Decimal hierarchical placement system could not handle.
We cannot order physical books by multiple viewpoints, but
we can with virtual books. OO philosophy is often
stuck in the 60's in this regard. It has stuck to the
physical world model instead of transcending it
using what computers *can* do if given the chance. It can be said
OO is the Plato view of the world while relational is the
Einstein view of it.
>I am looking into how to introduce OO to a Structured-Method Person
>(ie. IT professional who is new to OO but who has done things via
>structured or functional decomposition method).
[snip]
>Does the above sound valid as a sort of simplification to one aspect
>of OO?
Not really. It sounds too academic.
The benefit of OO is simply that it allows you far more control over
the coupling between modules. Using the tools of OO you can decouple
modules far more than using the structured methods.
Many people talk about modeling the real world. I agree that it is
important to draw names of classes and methods from the domain. I
agree that a domain model is a good thing to have. However, users of
the structured methods have never had a problem creating domain models
or using names from the domain.
The unique benefit of OO is in managing dependencies between modules.
Abstraction and dynamic polymorphism allow a degree of decoupling that
no structured method could ever allow.
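The decoupling via abstraction and dynamic polymorphism described here might be sketched as follows (Python for illustration; the payment classes are hypothetical). The high-level module depends only on an abstraction, so concrete modules vary without it changing:

```python
from abc import ABC, abstractmethod

# The high-level module depends only on this abstraction...
class PaymentMethod(ABC):
    @abstractmethod
    def charge(self, amount):
        ...

def buy_book(price, payment: PaymentMethod):
    # buy_book never names a concrete payment module.
    return payment.charge(price)

# ...so concrete modules can be added or swapped without touching buy_book.
class CreditCard(PaymentMethod):
    def charge(self, amount):
        return f"charged {amount} to card"

class Invoice(PaymentMethod):
    def charge(self, amount):
        return f"invoiced {amount}"

assert buy_book(25, CreditCard()) == "charged 25 to card"
assert buy_book(25, Invoice()) == "invoiced 25"
```

In a structured decomposition, `buy_book` would call a specific payment routine by name, so adding a payment type means editing the caller; here the dependency points at the interface instead.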
-----
Robert C. Martin (Uncle Bob) | email: uncl...@objectmentor.com
Object Mentor Inc. | blog: www.butunclebob.com
The Agile Transition Experts | web: www.objectmentor.com
800-338-6716
"The aim of science is not to open the door to infinite wisdom,
but to set a limit to infinite error."
-- Bertolt Brecht, Life of Galileo
>Hire a mentor whose business it is to train people in OO techniques.
Agreed!
> The unique benefit of OO is in managing dependencies between modules.
> Abstraction and dynamic polymorphism allow a degree of decoupling that
> no structured method could ever allow.
One way to look at it would be to say that the benefit is in being able
to *remove* yourself from the real world by abstracting from it.