Re: [UML Forum] Data-Centric vs. Object-Oriented

H. S. Lahman

Mar 31, 2013, 12:27:38 PM
to umlf...@googlegroups.com
Responding to George...

> this isn't really a UML question but it has to do with modeling
> methodology. The vendors for OMG DDS DCPS implementations promote
> "data-centric" architecture as meeting the needs of a loosely coupled
> distributed system better than OO architectures.

LOL. In the Early Days of OO development, one of the popular criticisms
of the OO paradigm was that it is too data centric.

>
> A key contrast between DCPS and OO architecture, such as CORBA, is
> that DCPS does not have explicit facilities for object-to-object
> signalling. Instead, object properties are published on a "bus" and
> all activity is driven by data changes on that bus. The claim is that
> this approach forces low coupling because objects only ever react to
> data in their environment. Objects never send "I'm done" signals. A
> change in their properties implicitly means the object "is done" with
> something. By convention, objects should only publish cohesive
> property sets. This sounds great, but, is it true that behavior
> responsibilities really always result in changes to knowledge attributes?

No. Consider an object or subsystem that has attributes for Volume and
Density and has a responsibility for providing Mass. The only way you
can avoid a 3NF violation on the data is by computing Mass when needed.
Computing Mass changes the state of the system, but to prove the state
of the system was changed correctly (or even changed at all!), there
must be a message issued by some behavior with a new Mass value.
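For illustration (my sketch, not from the thread; the class and attribute names are hypothetical), a derived attribute computed on demand avoids storing the redundant value. Note that no stored attribute changes when Mass is requested, which is exactly why a pure data-change bus would see nothing:

```python
# Hypothetical sketch: Mass derived from Volume and Density on demand.
# Storing mass alongside the other two would violate 3NF (mass is
# functionally dependent on volume and density), so it is computed
# when needed and never stored.
class Tank:
    def __init__(self, volume, density):
        self.volume = volume      # stored attribute
        self.density = density    # stored attribute

    @property
    def mass(self):
        # Derived on each access; no attribute write occurs here, so a
        # data-change bus would never observe a "mass changed" event.
        return self.volume * self.density

t = Tank(volume=2.0, density=1000.0)
assert t.mass == 2000.0
t.density = 500.0                 # no stale stored mass to update
assert t.mass == 1000.0
```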

<Resident Curmudgeon Hot Button>
FWIW, I think comparing DDS to an elephant like CORBA as representative
of OO architecture is specious. CORBA is, very charitably, object-based,
not object oriented. In addition, it caters to a form of
interoperability that is terrible OO development practice. Things like
RPCs and remote object access break encapsulation and bleed cohesion, by
definition. IMO, OO developers who use that sort of stuff should have
their thumbs broken. All distributed messaging should be via simple,
by-value data transfer messages between subsystem/system interfaces.
That ensures minimum coupling and guarantees vastly better performance
than pigs like CORBA.

DDS is just doing what good OO developers have been doing since the
'80s. In any distributed context the "I'm done" message carries all the
relevant data, by value, and the sender has no clue whatsoever about who
receives it, much less what receiver objects will process the message
and what they will do with it. That decoupling is the whole point of
"I'm Done" messages. However, DDS has complicated that by introducing
the publish/subscribe model, which adds coupling because to subscribe
the subscriber needs to know what the publisher provides! How can these guys
claim they have minimized coupling when they do that?

I also have a problem with the overhead of publish/subscribe. I can see
that for IT data warehousing middleware, but I don't see it as a general
distributed model because of that overhead (though it is a lot better
than silliness like remote object access).

And don't get me started on IT's love affair with ASCII for
cross-platform portability. Do you know if DDS handles that sensibly or
with ASCII?
</Resident Curmudgeon Hot Button>


--
Life is the only flaw in an otherwise perfect nonexistence
-- Schopenhauer

Imagine how much more difficult physics would be if electrons had feelings
-- Richard Feynman

Rene Descartes went into a bar. The bartender asked if he would like a drink. Descartes said, "I think not," and disappeared.

H. S. Lahman
H.la...@verizon.net
software blog: http://pathfinderpeople.blogs.com/hslahman/index.html
software book: Model Based Development, Addison-Wesley, 2011
geology book: The Evolution and Utilization of Marine Resources, MIT Press, 1972

Thomas Mercer-Hursh, Ph.D.

Apr 1, 2013, 12:40:12 PM
to umlf...@googlegroups.com
On 3/31/2013 11:27 AM, H. S. Lahman wrote:
> However, DDS has complicated that
> by introducing the publish/subscribe model, which adds coupling
> because to subscribe the subscriber needs to know what the publisher
> provides! How can these guys claim they have minimized coupling when
> they do that?

Obviously, pub/sub is sometimes done that way, but it doesn't have to
be. At its best, one publishes and subscribes to a queue, which holds
messages about a particular topic. There is no knowledge of whether
anyone is listening or of what the source might be.
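A minimal topic-queue sketch of that idea (hypothetical names, not any particular DDS or ESB API): the publisher holds no reference to its subscribers, and a subscriber never learns the source of a message.

```python
from collections import defaultdict

# Hypothetical broker: publishers post to a topic; the broker fans the
# message out to whoever subscribed. Publisher and subscribers know only
# the topic name, never each other.
class Broker:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, payload):
        # The publisher never sees this list of receivers.
        for cb in self._subs[topic]:
            cb(payload)

broker = Broker()
received = []
broker.subscribe("shipment.delivered", received.append)
broker.publish("shipment.delivered", {"order_id": 42})
assert received == [{"order_id": 42}]
```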

H. S. Lahman

Apr 1, 2013, 1:59:53 PM
to umlf...@googlegroups.com
Responding to George...

I think the idea is that any time an object changes its properties, associated objects will automatically get notification. The observed object need not have actions such as the following snippet in a Shipment state model:

this.Delivered = true;
<Ref>Order order = FIND this->A1;
GENERATE Order:orderDelivered() TO order;

Instead, A1 would imply that the order object will get a change notification. The action language would only update a flag:

this.Delivered = true;

The change to Shipment:Delivered would cause the order object to get a ShipmentChanged signal by virtue of the A1 association.

I feel like the DDS way makes the inherent coupling less explicit. I want inherent coupling to be explicit and non-essential coupling to be removed. Order and Shipment are coupled, and changing the way that is declared doesn't change the fact.

I agree. There is still an "I'm Done" message -- it's the notification (I'm Done [updating this.Delivered]). Basically DDS is just providing an infrastructure for automating those I'm Done messages.

The problem is that DDS needs to know that A1 is being navigated. It does that by providing the publish/subscribe infrastructure elsewhere, behind the scenes. However, that has to be set up somehow; the developer must somehow indicate that the notification for this.Delivered being updated must go to Order. You could do that in an AAL via explicit syntax like

DELIVERED: this.Delivered = true;
<Ref>Order order = FIND this->A1;
NOTIFY order DELIVERED

Now the syntax provides all the same information that would be needed for setting up a DDS subscription and you are not invoking a setter or behavior of Order. So you have all the same infrastructure for delivering the notification without the publish/subscribe mechanism. Now take it one step further...

DELIVERED: this.Delivered = true;
<Ref>Order order = FIND this->A1;
GENERATE Order:Notify(DELIVERED) TO order;

But now we have essentially come full circle to your example. My basic point here is that the only difference between a conventional OO I'm Done event message and a DDS notification is where the message target is defined. Otherwise they are completely equivalent.
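That equivalence can be sketched in a few lines (my hypothetical Python stand-ins for Shipment, Order, and the A1 association; not AAL semantics): both routings deliver the same notification to Order, and only where the target is defined differs.

```python
# Hypothetical sketch of the two equivalent routings.
class Order:
    def __init__(self):
        self.events = []

    def notify(self, event):
        self.events.append(event)

class Shipment:
    def __init__(self, order):
        self.a1 = order            # the A1 association
        self._delivered = False

    # Style 1: conventional OO "I'm Done" -- the action explicitly
    # generates the event to the object on the other end of A1.
    def deliver_explicit(self):
        self._delivered = True
        self.a1.notify("DELIVERED")

    # Style 2: DDS-ish -- the attribute write itself triggers the
    # notification; the routing is hidden in the infrastructure.
    @property
    def delivered(self):
        return self._delivered

    @delivered.setter
    def delivered(self, value):
        self._delivered = value
        self.a1.notify("DELIVERED")

o1, o2 = Order(), Order()
Shipment(o1).deliver_explicit()
Shipment(o2).delivered = True
assert o1.events == o2.events == ["DELIVERED"]
```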

However, I prefer the OO approach for three reasons. One is your preference: the routing of the notification is explicit. I think that routing is a critical element of the solution (i.e., flow of control) and I want to see it at the same level of abstraction as object actions, attributes,  and relationship navigation. The second reason is that this will have a performance advantage over publish/subscribe because there is no indirection through table look-ups, etc.

The third reason is that I think the data centric idea underlying  DDS is flawed. The premise is that actions are only triggered when data is modified.  That's fine for very simple situations where all the data needed by an action is modified in the same place at the same time. But when the relevant data is updated at different times and in different places, things start getting really nasty for data integrity. You can deal with that in the infrastructure, but it gets complicated, which means more overhead. It also gets complicated when specifying the subscriptions. In the OO I'm Done approach you define when an action should execute by placing the event generation in the right action. You do that the same way everywhere with the same level of complexity.

The alternative, of course, is to design the application differently so that actions don't need to operate on data that is updated at different times and in different places. I don't know enough about designing around DDS to say for sure, but I am very suspicious that does not work well in large applications, especially for maintainability. Our computers are Turing machines where sequence is critically important. We can raise the level of abstraction to functions and subsystems, but we still have sequences of operations at every level, and it is useful to organize those sequences functionally, which is orthogonal to the way data is organized. Functional Programming takes this to an extreme, and such programs are notoriously difficult to maintain in large systems. IMO, the other extreme, pure data processing around DDS, would probably be just as bad, because customers define requirements in terms of functionality, not data updates. I believe the OO paradigm represents a good compromise between data processing and functional organization. (Note that in OOA/D, a major benefit of the paradigm is that one can trade off data vs. behavior quite easily when solving problems.)

I think one example of why the data-triggering approach may be a problem is demonstrated in designing state machines. One can use DbC to rigorously determine where to place event generation. The precondition for executing a state action has to match the postcondition for some other action. Therefore you place the event generation in the action whose postcondition matches. The tricky part is that the postcondition includes both algorithmic sequence and data integrity (i.e., that all the data the action needs is timely). I don't see an equivalent technique for guaranteeing that behaviors get executed at the right time in a pure data-triggering approach without extremely complex subscription rules in the worst case. That's because the subscription rules are specified statically, not dynamically.


I would like to put DDS DCPS in the category of transport technology subsystem where it would not affect the application subsystem model. It would only affect how I translate the model into an executable system. The GENERATE statement in the AAL doesn't mean I cannot use DCPS mechanisms to make the system behave according to the model. However, when I use explicit message sending in the application subsystem model, my colleagues call a foul.

I think it is more complicated than that for the reasons above. The programs would have to be designed differently to avoid data integrity problems that would turn every application into a morass of concurrent processing issues.

H. S. Lahman

Apr 1, 2013, 3:04:59 PM
to umlf...@googlegroups.com
Responding to Wright...


 However, DDS has complicated that by introducing
the publish/subscribe model, which adds coupling because to subscribe
the subscriber needs to know what the publisher provides! How can these guys
claim they have minimized coupling when they do that?

If not with the provider, with whom?

The short answer for OO I'm Done messages is Nobody. (More precisely, the developer.)

The best way to see that is to look at how object state machines work because they enforce good encapsulation. When a state action sends an event, the action's behavior does not know who the receiver is, why it is going to that receiver, or what the receiver will do, if anything, in response. As far as the sender is concerned, all the event does is announce that it has done something. Similarly, the receiver's behavior does not know where the event came from or why it was sent. Thus the sender's and receiver's behaviors are functionally completely decoupled as far as what they know about each other. The reason is that the rules of finite state automata stipulate that a state action cannot know what the previous state was or what the next state will be.

<aside>
The AALs often confuse this because the relationship navigation to a specific object is included in the sender action's code. However, that code is really part of the event generation. It is effectively what the DDS publish/subscribe infrastructure does in routing the message. One difference is that the sender only navigates a relationship to whatever is on the other end of the relationship without knowing anything about it. That is quite different than publish/subscribe where the subscriber must hard-wire what data it needs in the subscription. (The publisher also hard-wires what data it publishes, but that is trivial.)

The real issue with coupling is what the sender's behavior must know about the receiver or what the receiver's behavior must know about the sender. But with OO event generation, neither behavior knows anything about the other object. One way that is demonstrated is that a good technique in OOA/D is to encode all the object behaviors without any event generation at all. If one can do that, then one can be quite sure one has no dependencies on external behaviors in those actions. Then, in an entirely separate pass, one can use DbC to determine where the events that trigger each transition should be generated and insert the event generations in the correct actions. Corollary: one should always be able to place all the event generation at the end of the action (possibly with an IF for conditional generation).
</aside>
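The two-pass technique in the aside can be sketched as follows (a hypothetical Python illustration of my own, not any AAL): the behavior is encoded first with no event generation at all, and the DbC-determined generation is then appended at the end of the action.

```python
# Hypothetical sketch of the two-pass technique.
def make_action(behavior, generated_events):
    """Pass 2: wrap a pure behavior with the DbC-determined event
    generation, appended at the end of the action."""
    def action(obj, queue):
        behavior(obj)                     # pure behavior, no events
        for event in generated_events:    # appended generation
            queue.append(event)
    return action

# Pass 1: the behavior knows nothing about receivers or events.
def mark_delivered(shipment):
    shipment["delivered"] = True

queue = []
deliver = make_action(mark_delivered, ["Order:orderDelivered"])
shipment = {"delivered": False}
deliver(shipment, queue)
assert shipment["delivered"] is True
assert queue == ["Order:orderDelivered"]
```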

Having said all that, my original statement was somewhat misleading. In fact, OO objects do have a form of subscription. It is a fundamental notion in OOA/D that behaviors go and get the data that they need rather than having it passed to them. Thus OO events almost never have data packets. (The only times an OO message should have any data is for snapshots where the data needs to be absolutely correlated in time, like samples from a sensor array, or when the message is going to another subsystem or system.) Thus when a state action responds to an event, it uses synchronous services to access any attributes in other objects. Those synchronous services are essentially a subscription mechanism without the overhead.
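A sketch of an event with no data packet (hypothetical names of mine): the responding action goes and gets the attributes it needs via relationship navigation, a synchronous read, at the moment it executes.

```python
# Hypothetical sketch: the event carries no payload; the receiver pulls
# the data it needs through a relationship when it responds.
class Sensor:
    def __init__(self):
        self.reading = 0.0

class Monitor:
    def __init__(self, sensor):
        self.r1 = sensor           # relationship to the data's owner
        self.latest = None

    def on_sample_ready(self):     # event has no data packet
        # Synchronous read via relationship navigation at response time.
        self.latest = self.r1.reading

s = Sensor()
m = Monitor(s)
s.reading = 7.5
m.on_sample_ready()
assert m.latest == 7.5
```

If a requirements change means the monitor needs different data, only `on_sample_ready` is touched; nothing outside the action changes.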

However, I would argue that the coupling is quite different. The responding action already knows what data it needs, so it really doesn't know anything more about its correspondents. The only coupling it has is knowing where the data it needs is, which is the same as for behavior messages; the developer specifies what relationships to navigate. The most important difference, from a maintenance perspective, is that the navigation is self-contained. If the responding action needs different data due to a requirements change, nothing outside the action needs to be touched. In the formal publish/subscribe mechanism, the code that defines the subscription has to be modified, and it is somewhere other than the responding action. That alone says that publish/subscribe has unnecessary coupling.

H. S. Lahman

Apr 1, 2013, 3:25:47 PM
to umlf...@googlegroups.com
Responding to Mercer-Hursh...

Sure. But my problem (other than a lot of overhead!) is on the
subscriber side. The subscriber has to explicitly define what data it
needs. It also has to provide enough information so that the
registration process knows which publisher to match (e.g., which one of
247 publishers with a 'color' attribute does the subscriber really want.)

However, as I indicated in my response to Wright, that isn't really the
issue because object synchronous services to access data do the same
thing. One issue is George's, that it is hidden. The other is that the
subscription specification is indirected through the publish/subscribe
Manager (or whatever they call it). I think those two issues combine to
be a nasty maintainability problem because of the coupling through the
third party Manager.

Thomas Mercer-Hursh, Ph.D.

Apr 1, 2013, 4:46:40 PM
to umlf...@googlegroups.com
On 4/1/2013 2:25 PM, H. S. Lahman wrote:
> Sure. But my problem (other than a lot of overhead!) is on the
> subscriber side. The subscriber has to explicitly define what data it
> needs. It also has to provide enough information so that the
> registration process knows which publisher to match (e.g., which one
> of 247 publishers with a 'color' attribute does the subscriber really
> want.)

Well, obviously there has to be some agreement about message identity
and content or it would be difficult to communicate, but I think one can
get pretty close to the publisher announcing that something has happened
with the relevant data and subscribers can look at the list of possible
messages and decide which ones they are interested in. That can be
quite loose.

> The other is
> that the subscription specification is indirected through the
> publish/subscribe Manager (or whatever they call it). I think those
> two issues combine to be a nasty maintainability problem because of
> the coupling through the third party Manager.

I don't see the coupling. Instead, I see a good separation of
responsibility. The manager knows nothing about message content or even
about the identity beyond that it currently has a message with a
particular identity in the system. What it does know about is the
topics and queues, i.e., the plumbing responsible for receiving and
passing along messages to interested parties ... not to mention added
value services like failover.


H. S. Lahman

Apr 2, 2013, 12:09:32 PM
to umlf...@googlegroups.com
Responding to Mercer-Hursh...

> Sure. But my problem (other than a lot of overhead!) is on the
> subscriber side. The subscriber has to explicitly define what data it
> needs. It also has to provide enough information so that the
> registration process knows which publisher to match (e.g., which one
> of 247 publishers with a 'color' attribute does the subscriber really
> want.)

> Well, obviously there has to be some agreement about message identity and content or it would be difficult to communicate, but I think one can get pretty close to the publisher announcing that something has happened with the relevant data and subscribers can look at the list of possible messages and decide which ones they are interested in. That can be quite loose.

Sure, you can optimize the efficiency, but there is still indirection overhead for each message and there is the registration and de-registration overhead compared to direct OO relationship navigation. But my primary issue was with the assertion that somehow coupling was reduced compared to OOA/D...



> The other is
> that the subscription specification is indirected through the
> publish/subscribe Manager (or whatever they call it). I think those
> two issues combine to be a nasty maintainability problem because of
> the coupling through the third party Manager.

> I don't see the coupling. Instead, I see a good separation of responsibility. The manager knows nothing about message content or even about the identity beyond that it currently has a message with a particular identity in the system. What it does know about is the topics and queues, i.e., the plumbing responsible for receiving and passing along messages to interested parties ... not to mention added value services like failover.

Au contraire! That manager has to know a lot about content. The problem is the registration. It is a matter of knowing who publishes what data and who subscribes to what data. When you insert a third party (other than the message sender and receiver) to analyze and match subscriber and publisher, that third party now has an intimate knowledge of what both sender and receiver know. That breaks encapsulation, regardless of whether it is hidden or not. You can optimize the actual message passing by doing a look-up and passing the data packet as-is. But to define that look-up the manager has to have intimate knowledge of the objects.

There is also the issue of sequence. If there are five actions that are interested in a particular data value being changed, it is likely that there are algorithmic issues that limit which one of them cares in a particular solution context. Therefore, the context information also has to be included in the subscription, which is far more information than is involved in OO relationship navigation. (Unless you want to completely change the way you design software so that exactly the same set of actions always responds to any given data change, which I don't think is going to work any better than Functional Programming insofar as maintainability is concerned. One of the things the OO paradigm has going for it is that the developer can play off data and behavior against each other to provide more maintainable solutions.)

I see inserting a third party between objects as being a bad idea when the third party must know about them intimately. (The Observer pattern is sufficiently useful in some problem space contexts to outweigh the maintainability issues, but that is a case-by-case issue, not a general design approach.) That breaks peer-to-peer interaction and KISS, which are both very important to maintainable software. If there is to be coupling, it wants to be among a minimum number of entities.

But I further assert that when an object behavior navigates a relationship to get the data it needs, that coupling is certainly no worse than publisher/subscriber and it is probably a lot less because the behavior only needs to encode the identity of the source and where to find it. It is the developer who determines which source actually had the data by defining static relationships. Better yet, everything about that navigation is completely contained within the behavior. If requirements change and the behavior needs different data, then nothing needs to be touched except the behavior itself. That is almost the definition of minimal coupling.

Note that we've been here before concerning the way the GoF patterns are implemented. When the GoF delegate out behaviors from the Context into other objects (e.g., the Strategy pattern), their example implementations actually do it incorrectly because the Client object still accesses the Context object and the Context object redirects to the delegated Strategy object. That's a no-no in OOA/D because it breaks peer-to-peer interaction and effectively doubles whatever coupling there is. Once the behavior has been delegated to the Strategy object, the Client should navigate relationships through the Context object and direct its messages directly to the Strategy object. The obvious way such poor coupling is manifested is that if requirements change so that the message data packet needs to provide different information, then all three objects require surgery. But if the Client talked directly to Strategy, only the client and Strategy objects would require surgery.

Rafael Chaves

Apr 2, 2013, 3:02:00 PM
to umlf...@googlegroups.com

On Tue, Apr 2, 2013 at 9:09 AM, H. S. Lahman <h.la...@verizon.net> wrote:
I don't see the coupling.  Instead, I see a good separation of responsibility.  The manager knows nothing about message content or even about the identity beyond that it currently has a message with a particular identity in the system.   What it does know about is the topics and queues, i.e., the plumbing responsible for receiving and passing along messages to interested parties ... not to mention added value services like failover.

Au contraire! That manager has to know a lot about content. The problem is the registration. It is a matter of knowing who publishes what data and who subscribes to what data. When you insert a third party (other than the message sender and receiver) to analyze and match subscriber and publisher, that third party now has an intimate knowledge of what both sender and receiver know. That breaks encapsulation, regardless of whether it is hidden or not. You can optimize the actual message passing by doing a look-up and passing the data packet as-is. But to define that look-up the manager has to have intimate knowledge of the objects.

One mechanism used to address this issue, keeping publishers/subscribers decoupled, is dynamic discovery via domain-specific service metadata. OSGi, UDDI and Jini are concrete examples that come to mind. That allows, for instance, a program to send a document to the color printer John Doe has access to that is closest to his office, without knowledge of which printers are actually available.
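A toy sketch of that kind of attribute-based discovery (the registry contents and names here are invented, not from OSGi/UDDI/Jini): the client asks by metadata, never by identity.

```python
# Hypothetical service registry: printers advertise their metadata, and
# the client selects by attributes, never by a hard-wired identity.
registry = [
    {"name": "prn-3", "color": True,  "distance_to_office": 40},
    {"name": "prn-7", "color": True,  "distance_to_office": 12},
    {"name": "prn-9", "color": False, "distance_to_office": 5},
]

def find_printer(color):
    # Filter by capability, then pick the closest match.
    candidates = [p for p in registry if p["color"] == color]
    return min(candidates, key=lambda p: p["distance_to_office"])

# The caller knows only the attributes it needs, not which printers exist.
assert find_printer(color=True)["name"] == "prn-7"
```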

Cheers,

Rafael

H. S. Lahman

Apr 3, 2013, 11:09:21 AM
to umlf...@googlegroups.com
Responding to Wright...


The best way to see that is to look at how object state machines work because they enforce good encapsulation. When a state action sends an event, the action's behavior does not know who the receiver is, why it is going to that receiver, or what the receiver will do, if anything, in response. As far as the sender is concerned, all the event does is announce that it has done something. Similarly, the receiver's behavior does not know where the event came from or why it was sent. Thus the sender's and receiver's behaviors are functionally completely decoupled as far as what they know about each other. The reason is because the rules of finite state automata stipulate that a state action cannot know what the previous state was or what the next state will be.

I think we may be arguing slightly at cross-purposes, because I've been working with a primitive tool that expects publish/subscribe to be used in what are really inter-domain bridges but which (because of tool limitations) need to be written by hand; the publisher is typically an I/O domain, the subscriber an application domain, and the subscriber's response to the "target acquired at <position> with <velocity>" message from the Radar is to post the appropriate events to its own domain objects.

Yes, I would agree. Such bridges are something else entirely. IIRC, my original post identified data warehousing as one example where it might be useful. I still have a big problem with holding up CORBA as an example of good OO practice in those situations to argue that publish/subscribe has less coupling. B-)

H. S. Lahman

Apr 3, 2013, 12:04:42 PM
to umlf...@googlegroups.com
Responding to Chaves...

I don't see the coupling.  Instead, I see a good separation of responsibility.  The manager knows nothing about message content or even about the identity beyond that it currently has a message with a particular identity in the system.   What it does know about is the topics and queues, i.e., the plumbing responsible for receiving and passing along messages to interested parties ... not to mention added value services like failover.

Au contraire! That manager has to know a lot about content. The problem is the registration. It is a matter of knowing who publishes what data and who subscribes to what data. When you insert a third party (other than the message sender and receiver) to analyze and match subscriber and publisher, that third party now has an intimate knowledge of what both sender and receiver know. That breaks encapsulation, regardless of whether it is hidden or not. You can optimize the actual message passing by doing a look-up and passing the data packet as-is. But to define that look-up the manager has to have intimate knowledge of the objects.

One mechanism used to address this issue keeping publishers/subscribers decoupled is dynamic discovery via domain-specific service metadata. OSGi, UDDI and Jini are concrete examples that come to mind. That allows, for instance, a program to send a document to the color printer John Doe has access to that is closest to his office without having knowledge of what printers are actually available.

I agree, that sort of generic backplane processing is a good example of where publisher/subscriber has merit. But, as Wright pointed out, that is really a bridging issue for interprocess communications, not OOA/D for designing applications, which was the OP's original context. (We then got sidetracked when CORBA was introduced as an example of good OOA/D practice vis-à-vis coupling, which it is definitely not.)

I would point out that somebody has to figure out which printer is closest to John Doe. That's effectively a buffer application that still needs to know who the document generators are, where the printers are, and where the people are. IOW, it has its own suite of unique business requirements related to interprocess communication that are independent of document generators, printers, or people. The publish/subscribe then applies to that intermediary application's design, not document generators or printers. The document generators, people, and printers simply interact with that application as another process with no knowledge of publish/subscribe. Thus it is that application that implements publish/subscribe to meet additional requirements (e.g., optimizing distance to John Doe)  independently of the interactions between document generators and printers. Therefore coupling between document generators and printers is irrelevant.

Which comes back to the point that I tried to make: from an OO standpoint inserting buffers between clients and services is generally not a good idea because the added complexity breeds maintainability issues. OTOH, there are situations where it is useful, such as managing document generators and printers as they come and go, where the additional requirements come into play or the benefits outweigh the maintainability issues. That is especially true when one can provide rigorous standards to contain and limit the maintainability issues. But it is still a special situation relative to good OO practice.

Thomas Mercer-Hursh, Ph.D.

Apr 3, 2013, 3:44:01 PM
to umlf...@googlegroups.com
On 4/2/2013 11:09 AM, H. S. Lahman wrote:
> Sure, you can optimize the efficiency, but there is still indirection
> overhead for each message and there is the registration and
> de-registration overhead compared to direct OO relationship
> navigation.

Yes, there is some overhead, but, as you have said yourself, there is
some overhead with OO. The value one receives for that overhead is
maintainability and flexibility.

> Au contraire! That manager has to know a lot about content. The
> problem is the registration. It is a matter of knowing who publishes
> what data and who subscribes to what data. When you insert a third
> party (other than the message sender and receiver) to analyze and
> match subscriber and publisher, that third party now has an intimate
> knowledge of what both sender and receiver know. That breaks
> encapsulation, regardless of whether it is hidden or not. You can
> optimize the actual message passing by doing a look-up and passing
> the data packet as-is. But to define that look-up the manager has to
> have intimate knowledge of the objects.

This is certainly not true of any pub/sub system I've worked with. The
system knows about queues and topics. One agent publishes a message to
a queue and another agent is subscribed to that queue. The communication
system doesn't know anything about the contents of the message, only the
identification and the queue and how to reliably deliver the message.
It knows nothing about how the message is produced or consumed.
Frankly, I can't see how anything could be divided into more than one
component and have the pieces be less coupled than this.

> There is also the issue of sequence. If there are five actions that
> are interested in a particular data value being changed, it is likely
> that there are algorithmic issues that limit which one of them cares
> in a particular solution context. Therefore, the context information
> also has to be included in the subscription, which is far more
> information that is involved in OO relationship navigation. (Unless
> you want to completely change the way you design software so that
> exactly the same set of actions /always /respond to any given data
> change, which I don't think is going to work any better than
> Functional Programming insofar as maintainability is concerned. One
> of the things the OO paradigm has going for it is that the developer
> can play off data and behavior against each other to provide more
> maintainable solutions.)

Again, this is not true in any way of the ESB systems I know. The
pub/sub mechanism itself is unaware of content. Yes, with modern ESBs,
it is possible to put a part of the solution on the bus itself. I.e.,
instead of A publishing a message which is received by B and then B
deciding whether it needs to go to C or D (and doing no other work than
making that decision), one can put a component on the bus which will
make the decision, but that is simply making the message distribution
more efficient. It doesn't mean that the system is examining the
contents of every message.
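The A/B/C/D routing case above can be sketched in a few lines of Python (names hypothetical): the routing component subscribed on the bus inspects only the single field needed for the decision and republishes the message untouched.

```python
delivered = {}

def publish(channel, message):
    """Stand-in for putting a message on a bus channel."""
    delivered.setdefault(channel, []).append(message)

def route(message):
    # Only the routing key is examined; the rest of the message passes
    # through without being read, mirroring Content-Based Routing that
    # looks at just the content needed for the decision.
    channel = "to_c" if message.get("priority") == "high" else "to_d"
    publish(channel, message)

route({"priority": "high", "body": "expedite"})
route({"priority": "low", "body": "standard"})
```

This is the sense in which putting the decision on the bus is an efficiency move rather than the system "examining the contents of every message": one named field drives the choice, nothing more.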

> I see inserting a third party between objects as being a bad idea
> when the third party must know about them intimately.

But, it doesn't, which rather changes the conclusion, no?

> But I further assert that when an object behavior navigates a
> relationship to get the data it needs, that coupling is certainly no
> worse than publisher/subscriber and it is probably a lot less because
> the behavior only needs to encode the identity of the source and
> where to find it. It is the developer who determines which source
> actually had the data by defining static relationships. Better yet,
> everything about that navigation is completely contained within the
> behavior. If requirements change and the behavior needs different
> data, then nothing needs to be touched except the behavior itself.
> That is almost the definition of minimal coupling.

If one starts with a component that receives an order and then passes
that order to a component which processes the order for shipping and
communicates the order over the bus, then one can add another component
which notifies purchasing so that it is aware of demand levels and
another component that notifies management of sales volume, and another
component that builds transportation plans for delivering the order,
etc., etc. without touching the original relationship or having either of
the original components be aware that the additions have happened.
THAT, to me, is loose coupling.

> The obvious way such poor coupling is manifested is that if
> requirements change so that the message data packet needs to provide
> different information, then all three objects require surgery. But if
> the Client talked directly to Strategy, only the client and Strategy
> objects would require surgery.

And in the order example above, if a new component requires some new
data from the component which first receives the order, one modifies the
source component to add data to the message and the new component then
uses it, but all other components simply ignore the additional data.
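A toy Python sketch of that maintenance scenario (field names hypothetical): the order message gains one element for the new consumer, and the original consumer's code is untouched because it reads only the fields it always read.

```python
def shipping_consumer(msg):
    # Original consumer: reads only the fields it has always read, so
    # it is unaffected by any elements added later.
    return ("ship", msg["order_id"], msg["quantity"])

def purchasing_consumer(msg):
    # New consumer: uses the element that was added for its benefit.
    return ("demand", msg["demand_bucket"])

# The extended message: "demand_bucket" is the newly added data.
order = {"order_id": 7, "quantity": 3, "demand_bucket": "Q2"}
```

Under this scheme only the producer and the new consumer change; every other subscriber keeps working against the fields it already knew about.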

Thomas (who wishes he had tag lines as good as yours!)

H. S. Lahman

Apr 4, 2013, 11:55:15 AM4/4/13
to umlf...@googlegroups.com
Responding to Mercer-Hursh...

>
>> Sure, you can optimize the efficiency, but there is still indirection
> > overhead for each message and there is the registration and
> > de-registration overhead compared to direct OO relationship
> > navigation.
>
> Yes, there is some overhead, but, as you have said yourself, there is
> some overhead with OO. The value one receives for that overhead is
> maintainability and flexibility.

Right, but the direct OO communication has less overhead.

>
>> Au contraire! That manager has to know a lot about content. The
> > problem is the registration. It is a matter of knowing who publishes
> > what data and who subscribes to what data. When you insert a third
> > party (other than the message sender and receiver) to analyze and
> > match subscriber and publisher, that third party now has an intimate
> > knowledge of what both sender and receiver know. That breaks
> > encapsulation, regardless of whether it is hidden or not. You can
> > optimize the actual message passing by doing a look-up and passing
> > the data packet as-is. But to define that look-up the manager has to
> > have intimate knowledge of the objects.
>
> This is certainly not true of any pub/sub system I've worked with. The
> system knows about queues and topics. One agent publishes a message
> to a queue and another agent is subscribed to that queue. The
> communication system doesn't know anything about the contents of the
> message, only the identification and the queue and how to reliably
> deliver the message. It knows nothing about how the message is
> produced or consumed. Frankly, I can't see how anything can be
> divided into more than one component and have them less coupled.

Let's try this a different way. A topic is just a shortcut. It says, "I
publish this pile of very particular data called X (topic) and I put it
in a very specific place (queue)." The subscriber still needs to know
what that pile of data actually is in order to subscribe to the proper
queue. It is the developer who happens to know what that pile is when
the subscription is hard coded in terms of the shortcut names.
That's exactly what happens in an OO context when the developer hard
codes relationship navigation to the right object for attribute access.
Thus the source object is analogous to the queue, the relationship
navigation specification is analogous to the subscription, and the data
access is analogous to decoding the topic. The only semantic difference
is that in the OO case the access is peer-to-peer while in the
publish/subscribe case there is an intermediary.

More important, one has exactly the same problem during maintenance when
the pile of data associated with the topic changes. Now you potentially
have to perform surgery (e.g., add new queues and topics or navigate to
different objects) on the registration mechanism because some existing
subscribers may still want the old pile of X data. You have to perform
that surgery on the intermediary as well as the publishers and
subscribers. That's because there is coupling not just between publisher
and subscriber, but between publisher and intermediary and between
intermediary and subscriber. The coupling is basically the same,
regardless of the identity and transfer mechanisms. But the amount of
coupling and the number of parties coupled is increased in the
publish/subscribe infrastructure.

Generally you do not want that extra coupling. However, it becomes
viable when you have additional requirements, such as subscribers and
publishers coming and going dynamically or the need to optimize distance
in the document/printer/person example. Now the intermediary is
justified because it is resolving those new requirements that neither
publisher nor subscriber cares about. In that case, you treat the
intermediary just like any other external process, subsystem,
application, or whatever and you reduce the peer-to-peer coupling with it.

Thomas Mercer-Hursh, Ph.D.

Apr 4, 2013, 1:04:16 PM4/4/13
to umlf...@googlegroups.com, H. S. Lahman
On 4/4/2013 10:55 AM, H. S. Lahman wrote:
> Right, but the direct OO communication has less overhead.

Which is sensible in those places where tight coupling is appropriate,
but when loose coupling has an advantage, the small overhead is more
than justified by the advantages.

> Let's try this a different way. A topic is just a shortcut. It says,
> "I publish this pile of very particular data called X (topic) and I
> put it in a very specific place (queue)."

Well, no. A queue is a channel that delivers a message to a single end
point and a topic is a channel that delivers to multiple endpoints. The
topic is typically a category of related messages, like a TV channel
with a particular theme. The publisher publishes a message to that
channel because all messages about a certain subject are published
there. The subscriber listens to that channel because it is interested
in messages on that subject. The publisher and subscriber choose the
channel, not the MOM. The MOM only manages the channels. Only with
Content-Based Routing, found in ESBs, does the MOM even look at the
content and then only of specified messages and only the content needed
for the routing decision. As I described before, this is an alternative
to using an external routing service to make the decision.
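The queue/topic distinction drawn here can be shown in a small Python sketch (hypothetical, not any real MOM API): a queue hands each message to exactly one endpoint, while a topic fans each message out to every subscriber.

```python
from collections import deque

class Queue:
    """Point-to-point channel: each message reaches exactly one endpoint."""
    def __init__(self):
        self._consumers = deque()
    def subscribe(self, callback):
        self._consumers.append(callback)
    def publish(self, message):
        consumer = self._consumers[0]
        self._consumers.rotate(-1)  # round-robin among competing consumers
        consumer(message)

class Topic:
    """Publish/subscribe channel: each message fans out to all subscribers."""
    def __init__(self):
        self._consumers = []
    def subscribe(self, callback):
        self._consumers.append(callback)
    def publish(self, message):
        for consumer in self._consumers:
            consumer(message)

q, t = Queue(), Topic()
a, b = [], []
q.subscribe(a.append); q.subscribe(b.append)
q.publish("m1"); q.publish("m2")   # one consumer each
c, d = [], []
t.subscribe(c.append); t.subscribe(d.append)
t.publish("m3")                    # every subscriber receives it
```

In neither case does the channel look inside `message`; the publisher and subscriber choose the channel, and the channel only manages delivery.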

> The only semantic difference is that in the OO case the access is
> peer-to-peer while in the publish/subscribe case there is an
> intermediary.

Actually, they are entirely different.

> More important, one has exactly the same problem during maintenance
> when the pile of data associated with the topic changes. Now you
> potentially have to perform surgery (e.g., add new queues and topics
> or navigate to different objects) with the registration mechanism
> because some existing subscribers may still want the old pile of X
> data.

Most ESB messages are XML (which I know you despise, but let's not get
distracted by that) so if a new or revised consumer needs a new piece of
data that is not contained in an existing message, one typically just
modifies the supplier to add that data and that specific consumer to use
it. All other consumers simply ignore the additional data. Only when
you change the structure of the message, e.g., send quantity and unit
price instead of total price, does one have to modify all consumers.

> You have to perform that surgery on the intermediary as well as
> the publishers and subscribers.

No change is required to the MOM except when creating new channels for
new types of information.


H. S. Lahman

Apr 5, 2013, 12:23:27 PM4/5/13
to umlf...@googlegroups.com
Responding to George...

Great debate!

Mercer-Hursh and I have been going at it for years offline. If it was hardcopy, vast forests would have been felled.


I want to suggest that a concrete example of a loosely coupled system is a system that must be extended w/o re-deployment of the original components. I'm using "loosely coupled" to help make requirements distinct from the design concept of "loose coupling." A loosely coupled system just means that it is a reality that the system is made up of a not-very-easily-controlled set of elements. They are not well coupled to each other and that is just an attribute of the problem space. Given a problem space like that, one in which we don't have the option to better integrate the pieces, what are the design patterns that help us? Pub/sub is one. What if the problem space elements are better integrated and not loosely coupled? Should we still always use pub/sub? Doubt it.

Regarding the design concept of "loose coupling," we want to make coupling as tight as required but no tighter. Loose coupling is very commonly stated as a goal as if the looser the better. Not so.

I think the notion of coupling is much more important at the 3GL level than at the 4GL level where UML lives. The problem at the 3GL level is physical coupling that is the direct result of the use of 3GL type systems (and, sadly, some AALs). (John Lakos, in Large Scale C++ Software Design, was the first person I know of to address the problem formally.) The type systems force the compilers to know more about objects than they really should, which is mostly manifested in the compile-the-world problem. But physical coupling also results in major problems for maintainability, which is why refactoring is an integral part of the OOP-based Agile development processes.

I don't see it as much of a problem at the OOA/D level because there are no type systems (other than attribute ADTs). I think you get loose coupling pretty much for free in OOA/D because of things like the abstraction of relationship navigation, isolation through interfaces, subsystem bridges, peer-to-peer collaboration, and a bunch of other things. IOW, I think the level of abstraction and the OOA/D methodologies combine to make it pretty much a non-issue. The only reason I got on this topic at all was that I was bothered by the assertion of publish/subscribe advocates that it had intrinsically less coupling and, therefore, was superior, which I don't buy at all. (If one were to use publish/subscribe at the object level, as you did in your AAL example, I think one would have to design software quite differently. I am skeptical that would work out well, but I can't demonstrate that offhand.)

H. S. Lahman

Apr 5, 2013, 12:00:26 PM4/5/13
to tho...@cintegrity.com, umlf...@googlegroups.com
Responding to Mercer-Hursh...


Right, but the direct OO  communication has less overhead.

Which is sensible in those places where tight coupling is appropriate, but when loose coupling has an advantage, the small overhead is more than justified by the advantages.

But OO coupling is not tight! It is as loose as you can get and still be able to interact. An OO behavior makes an act of faith that the object on the other end of the relationship path -- whoever it might be -- has the data it needs. That act of faith is validated when the developer defines the navigation path to the right object that he knows has the right data.

In fact, the kind of bridge models employed in OOA/D have substantially less coupling than publish/subscribe infrastructures for interprocess communication, like we have been talking about, because the subscribing object within a subsystem has absolutely no knowledge of who will respond, even at the subsystem level. (Though the developer does when connecting the dots.)



> Let's try this a different way. A topic is just a shortcut. It says,
> "I publish this pile of very particular data called X (topic) and I
> put it in a very specific place (queue)."

Well, no.  A queue is a channel that delivers a message to a single end point and a topic is a channel that delivers to multiple endpoints.  The topic is typically a category of related messages, like a TV channel with a particular theme.   The publisher publishes a message to that channel because all messages about a certain subject are published there.  The subscriber listens to that channel because it is interested in messages on that subject.   The publisher and subscriber choose the channel, not the MOM.  The MOM only manages the channels.   Only with Content-Based Routing, found in ESBs, does the MOM even look at the content and then only of specified messages and only the content needed for the routing decision.  As I described before, this is an alternative to using an external routing service to make the decision.

As I have said a couple of times, the coupling is not in the transmission; it is in the subscription. No matter what the terminology is for a particular implementation, somebody (usually the developer) has to know what I quoted. You cannot create the necessary lookup tables for routing without that information.



The only semantic difference is  that in the OO case the access is
> peer-to-peer while in the publish/subscribe case there is an
> intermediary.

Actually, they are entirely different.

The implementations are different, but the semantics is the same.



More important, one has exactly  the same problem during maintenance
> when the pile of data associated with the topic changes. Now you
> potentially have to perform surgery (e.g., add new queues and topics
> or navigate to different objects) with the registration mechanism
> because some existing subscribers may still want the old pile of X
> data.

Most ESB messages are XML (which I know you despise, but let's not get distracted by that) so if a new or revised consumer needs a new piece of data that is not contained in an existing message, one typically just modifies the supplier to add that data and that specific consumer to use it.

I think you need to step back and look at the big picture semantics rather than focusing on the implementation details.

That last clause of the last sentence is the problem. I have consumer A and consumer B of a single pile of published data. The requirements change and consumer A needs to substitute a new data element in that pile for another. However, that requirements change does not affect consumer B. There is no way that you can accommodate that without both changing the subscription specification for consumer A and adding a publishing specification for the source and, somehow, providing a mapping between them.

The same thing would be true if the subscriber talked directly to the publisher in an OO fashion. The difference lies in the intermediary for publish/subscribe. Now the subscriber has to tell the intermediary that the subscription is changed and the publisher has to tell the intermediary there is a new pile of data being published. More to the point, the construction of the lookup tables in the intermediary have to change. Now there is coupling between three objects rather than two and the internals of three objects need to change rather than two.

In both the OO case and the publish/subscribe case, the developer has to know what all the symbols mean and provide an implementation mechanism for connecting them all up. The developer can come up with all sorts of cute tricks so that superficially nobody seems to know anything. But all the developer is doing is providing some sort of static structure that is a surrogate for the information in my quote. The semantics do not change and the information necessary for routing does not change.

You might also try thinking about it this way. There are two different sources of coupling here that we need to separate. The publisher has to construct a data packet to put on the queue that is formatted in a predefined way. Similarly, when the subscriber pulls that data packet off the queue, it expects it to be formatted in a particular way to decode it. The encode/decode of that data packet is an essential part of the publish/subscribe contract and it represents coupling between subscriber and publisher. The intermediary in publish/subscribe does not need to understand that format to transmit the message, as you point out. However, the coupling between publisher and subscriber is exactly the same as in the OO peer-to-peer case: encoding/decoding an event data packet. One way or another, the developer is hard-coding that encode/decode somewhere in both cases so that it is consistent.

The second part of the coupling is the routing. That's very straightforward for the peer-to-peer OO case: the developer abstractly defines the relationship navigation. The subscriber needs to know nothing, other than that whoever is on the other end of the path has the data it needs. The situation is similar in publish/subscribe. The developer provides a mapping between the data packet the publisher provides and the one the subscriber needs, which can be as simple as a name for the data packet, queue, topic, whatever. That mapping mechanism enables the initialization of the routing lookup tables in the intermediary. However, it is exactly equivalent, in principle, to what the OO developer does when defining relationship navigation. That mapping is still identity coupling.

Once again, the difference is that the identity coupling is directly between subscriber and source in the OO case, but it is between subscriber/intermediary and then intermediary/publisher in the publish/subscribe case. In addition, the internals of the intermediary (i.e., lookup tables) also implement that identity mapping.

Bottom line: no matter how the developer hides the mapping in symbols, naming conventions, and mechanisms, the encode/decode mapping of explicit information still exists between subscriber and source, and the identity mapping that links subscriber and source still exists explicitly in any scheme to communicate data. Corollary: identity mapping through an intermediary is more complex and, consequently, involves more coupling -- regardless of the implementation mechanism.

Thomas Mercer-Hursh, Ph.D.

Apr 5, 2013, 2:59:16 PM4/5/13
to umlf...@googlegroups.com, H. S. Lahman
On 4/5/2013 11:23 AM, H. S. Lahman wrote:
> (If one were to use publish/subscribe at the object level, as you did
> in your AAL example, I think one would have to design software quite
> differently. I am skeptical that would work out well, but I can't
> demonstrate that offhand.)

Just to be clear, I am a fan of pub/sub systems such as I have been
describing primarily for subsystem communication. There is also a role
within subsystems for a more directly connected pub/sub, but it is a
limited and specific role and certainly I would never advocate pub/sub
for routine object communication.

Thomas Mercer-Hursh, Ph.D.

Apr 5, 2013, 3:20:05 PM4/5/13
to umlf...@googlegroups.com
On 4/5/2013 11:00 AM, H. S. Lahman wrote:
> In fact, the kind of bridge models employed in OOA/D have
> substantially less coupling than publish/subscribe infrastructures
> for interprocess communication, like we have been talking about,
> because the subscribing object within a subsystem has absolutely no
> knowledge of who will respond, even at the subsystem level. (Though
> the developer does when connecting the dots.)

The kind of bridges I am familiar with from our communication are very
comparable to the MoM and ESB systems, not in method of operation, but
in external functionality.

> As I have said a couple of times, the coupling is not in the
> transmission; it is in the subscription. No matter what the
> terminology is for a particular implementation, somebody (usually the
> developer) has to know what I quoted. You cannot create the necessary
> lookup tables for routing without that information.

There are no lookup tables.

> That last clause of the last sentence is the problem. I have consumer
> A and consumer B of a single pile of published data. The requirements
> change and consumer A needs to substitute a new data element in that
> pile for another. However, that requirements change does not affect
> consumer B. There is no way that you can accommodate that without
> both changing the subscription specification for consumer A and
> adding a publishing specification for the source and, somehow,
> providing a mapping between them.

Well, no. Much of the time one simply adds an additional data element
to the message. A uses it and B ignores it. One only needs to touch
all the consumers in cases where one is changing the existing message,
e.g., changing the units of an element or breaking down a field in a
different way. E.g., our old example about a system that uses a 5
digit zip in the message and a new consumer comes along and needs 9
digits. By adding an additional element for the extra 4, all existing
consumers keep paying attention to only 5 and only the new consumer
notices the new element.
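The zip-code case can be sketched with a few lines of Python and `xml.etree.ElementTree` (element names hypothetical): the message gains a plus-4 element, the legacy consumer keeps reading only the element it always read, and only the new consumer looks at the addition.

```python
import xml.etree.ElementTree as ET

# Extended message: <plus4> was added for the new consumer's benefit.
message = "<address><zip5>60601</zip5><plus4>1234</plus4></address>"

def legacy_consumer(xml_text):
    # Reads only the element it has always read; <plus4> is invisible
    # to it, so this code needs no change when the element is added.
    return ET.fromstring(xml_text).findtext("zip5")

def new_consumer(xml_text):
    # The new consumer reads both elements.
    root = ET.fromstring(xml_text)
    return root.findtext("zip5") + "-" + root.findtext("plus4")
```

Because the consumers address elements by name rather than by position or fixed layout, adding an element disturbs nobody who does not ask for it.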

> The same thing would be true if the subscriber talked directly to the
> publisher in an OO fashion. The difference lies in the intermediary
> for publish/subscribe. Now the subscriber has to tell the
> intermediary that the subscription is changed and the publisher has
> to tell the intermediary there is a new pile of data being published.
> More to the point, the construction of the lookup tables in the
> intermediary have to change. Now there is coupling between three
> objects rather than two and the internals of three objects need to
> change rather than two.

Except there are no lookup tables and, except for Content-Based Routing
which I have discussed previously, the intermediary doesn't look at the
contents at all and does not have to change in any way for something
like this.

> In both the OO case and the publish/subscribe case, the developer has
> to know what all the symbols mean and provide an implementation
> mechanism for connecting them all up.

One of the big differences here ... and one I know you hate ... is that
the OO link is typically going to be composed of specific, typed
arguments so any change in argument count or type is going to require
touching all consumers while the MoM or ESB link is typically XML, a
single blob, so one can freely add new elements to it and not disturb
any consumer who does not need that element.

> However, the
> coupling between publisher and subscriber is exactly the same as the
> OO peer-to-peer case in encoding/decoding an event data packet. One
> way or another, the developer is hard-coding that encode/decode
> somewhere in both cases so that it is consistent.

Except that with an XML message, the recipient can continue to read the
elements it needs and ignore the elements it doesn't need.

> That mapping
> mechanism enables the initialization of the routing lookup tables in
> the intermediary. However, it is exactly equivalent, in principle, to
> what the OO developer does when defining relationship navigation.
> That mapping is still identity coupling.

The two are quite different. For direct OO relationship connections,
one needs to navigate connections with matching signatures. For a MOM
link, the only connection is the name. It is quite feasible to publish
a message on a channel which has nothing in it that the subscriber
expects. That won't work very well, of course, when the message
arrives, but the MOM is perfectly happy to deliver it.


H. S. Lahman

Apr 6, 2013, 12:04:48 PM4/6/13
to umlf...@googlegroups.com
Responding to Mercer-Hursh...


As I have said a couple of  times, the coupling is not in the
> transmission; it is in the subscription. No matter what the
> terminology is for a particular implementation, somebody (usually the
> developer) has to know what I quoted. You cannot create the necessary
> lookup tables for routing without that information.

There are no lookup tables.

In any publish/subscribe infrastructure, the subscriber needs to be notified when a message of interest is placed on a queue. Otherwise, the subscriber is just polling a queue and it is not publish/subscribe. The most efficient way to do that is via a lookup table to address the notifications. (Not to mention that when the publisher adds a message, the intermediary needs to find the right queue address, based on message and publisher identity, to push the message.)
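A minimal Python sketch of such a table (all names hypothetical) shows what it does and does not contain: it maps a channel name to subscriber endpoints so the intermediary can address notifications, but it holds identities only, never message content.

```python
# Channel-name -> subscriber-endpoint lookup table.
routing_table = {
    "orders": ["shipping_endpoint", "purchasing_endpoint"],
}

deliveries = []

def deliver(endpoint, payload):
    """Stand-in for pushing a notification to one endpoint."""
    deliveries.append((endpoint, payload))

def push(channel, payload):
    # The table is consulted to address the notifications; the payload
    # itself is forwarded without being examined.
    for endpoint in routing_table[channel]:
        deliver(endpoint, payload)

push("orders", b"opaque-blob")
```

Note the table is keyed by channel, not by message content, so maintaining it means registering and de-registering endpoints rather than editing anything about message formats.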



> That last clause of the last sentence is the problem. I have consumer
> A and consumer B of a single pile of published data. The requirements
> change and consumer A needs to substitute a new data element in that
> pile for another. However, that requirements change does not affect
> consumer B. There is no way that you can accommodate that without
> both changing the subscription specification for consumer A and
> adding a publishing specification for the source and, somehow,
> providing a mapping between them.

Well, no.  Much of the time one simply adds an additional data element to the message. 

Right. Then they can sit around and bemoan the fact that their applications are difficult to maintain because there is a sea of global data floating around and nobody knows where, or if, it is actually being used. Developers who do things like that should have their thumbs broken. That sort of thing was a no-no in the '70s!



The same thing would be true if  the subscriber talked directly to the
> publisher in an OO fashion. The difference lies in the intermediary
> for publish/subscribe. Now the subscriber has to tell the
> intermediary that the subscription is changed and the publisher has
> to tell the intermediary there is a new pile of data being published.
> More to the point, the construction of the lookup tables in the
> intermediary have to change. Now there is coupling between three
> objects rather than two and the internals of three objects need to
> change rather than two.

Except there are no lookup tables and, except for Content-Based Routing which I have discussed previously, the intermediary doesn't look at the contents at all and does not have to change in any way for something like this.

LOL. Really? The subscriber gets connected to the right queue by its fairy godmother? There has to be an explicit identity mapping within the intermediary implementation to deal with the inherent *:* relationship between subscribers and publishers. Dealing with that *:* relationship is what publish/subscribe infrastructure does.



In both the OO case and the  publish/subscribe case, the developer has
> to know what all the symbols mean and provide an implementation
> mechanism for connecting them all up.

One of the big difference here ... and one I know you hate ... is that the OO link is typically going to be composed of specific, typed arguments so any change in argument count or type is going to require touching all consumers while the MoM or ESB link is typically XML, a single blob, so one can freely add new elements to it and not disturb any consumer who does not need that element.

Yes, XML is probably the most overused performance pig in IT today, but it is an implementation technology. We're talking about identity mapping between publisher and subscriber here at the design level. Both have to have exactly the same understanding of the expected semantic data content and exactly the same understanding of what formatting conventions will be used at the implementation level (e.g., whether the publisher is silly enough to use XML).

But even that is irrelevant. The OO link in OOA/D is just {message ID, <data packet>}. At the OOA/D level only the semantics of the data packet is defined, not its implementation. More importantly, the OO paradigm enforces the very good practice of only accessing exactly what you need, per my response above about adding data that the message receiver doesn't need. Interfaces exist to enforce problem semantics and reduce coupling, not to make life easier for lazy developers.



That mapping
> mechanism enables the initialization of the routing lookup tables in
> the intermediary. However, it is exactly equivalent, in principle, to
> what the OO developer does when defining relationship navigation.
> That mapping is still identity coupling.

The two are quite different. For direct OO relationship connections, one needs to navigate connections with matching signatures.

That's 3GL thinking around type systems. In principle, in an AAL you just have

myRef = this -> A6 -> A2
total = total + myRef.value   // implied synchronous service getter

The first line addresses the source without even a name (myRef is arbitrary). The second line decodes the data packet the same way you would in publish/subscribe. Everything you are talking about is low-level 3GL implementation in a particular computing environment. Here, though, none of that type-system coupling is present and we see real minimum coupling. Hopefully, however the AAL is implemented, it will have the minimum possible coupling for the given environment.

<aside>
This is why I said to George that I think coupling is largely irrelevant at the OOA/D level. You already have a minimum coupling specification of the semantics in OOA/D. You depend on the OOP developer to do the right thing when implementing it. Shameless plug: And if you are a translationist, you don't care if there is lots of coupling at the 3GL level because you don't maintain the 3GL code; it gets regenerated automatically when the model semantics change. B-)
</aside>

Thomas Mercer-Hursh, Ph.D.

Apr 7, 2013, 11:52:48 AM4/7/13
to umlf...@googlegroups.com, H. S. Lahman
On 4/6/2013 11:04 AM, H. S. Lahman wrote:
> In any publish/subscribe infrastructure, the subscriber needs to be
> notified when a message of interest is placed on a queue. Otherwise,
> the subscriber is just polling a queue and it is not
> publish/subscribe. The most efficient way to do that is via a lookup
> table to address the notifications. (Not to mention that when the
> publisher adds a message, the intermediary needs to find the right
> queue address, based on message and publisher identity, to push the
> message.)

My point was that the connection is made entirely based on the channel,
not anything to do with the message. In individual object pub/sub, one
subscribes to an event on an object, but in MOM pub/sub, one publishes
and subscribes to a channel and the subscriber receives all traffic on
that channel.
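The distinction can be sketched in a few lines (hypothetical Python, not any real MOM's API): the broker routes purely on channel identity and never inspects message contents, and every subscriber on a channel receives all of its traffic.

```python
from collections import defaultdict

class Broker:
    """Minimal MOM-style sketch: delivery is by channel identity only;
    the broker never looks inside a message."""
    def __init__(self):
        self.channels = defaultdict(list)   # channel name -> subscriber callbacks

    def subscribe(self, channel_name, callback):
        # Registration maps the subscriber to the channel's identity.
        self.channels[channel_name].append(callback)

    def publish(self, channel_name, message):
        # Every subscriber on the channel gets all traffic, unfiltered.
        for callback in self.channels[channel_name]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("clerk.status", received.append)
broker.publish("clerk.status", {"clerk_id": 7, "busy": True})
```

The publisher and subscriber never hold references to each other; the only shared identity is the channel name.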

> Right. Then they can sit around and bemoan the fact that their
> applications are difficult to maintain because there is a sea of
> global data floating around and nobody knows where and if it is
> actually being used. Developers who do things like that should have
> their thumbs broken. That sort of thing was a no-no in the '70s!

Obviously, one needs to do these things with discretion. While one
clearly doesn't want one global message containing all information that
anyone might want to use, one also doesn't want 79 different messages,
each with a slight variation of the contents. With some consideration,
there are likely to be natural messages corresponding to natural units
in the real world and most consumers are going to potentially need the
same information.

Point being that it is a straw man to claim that pub/sub is going to
require rework of all consumers when the message changes. If the
structure of the message really does change because of some conceptual
change, then of course the consumers need to change correspondingly ...
that is going to be true no matter how the information is passed. If
one is merely adding one bit of information for one consumer, then the
other consumers do *not* need to change. If, however, the nature of
the addition means that it is a genuinely different message, then one is
going to create a new message, not reuse the one that is already there.

> LOL. Really? The subscriber gets connected to the right queue by its
> fairy godmother? There has to be an explicit identity mapping
> within the intermediary implementation to deal with the inherent *:*
> relationship between subscribers and publishers. Dealing with that
> *:* relationship is what publish/subscribe infrastructure does.

This is a design issue, not the MOM software looking at message
contents. At design time one has a catalog of messages and channels
and chooses the ones that one is interested in. At run time the
infrastructure merely delivers published messages to the connections
which have subscribed to the appropriate channel.

> Both have to have exactly the same understanding of the
> expected semantic data content and exactly the same understanding of
> what formatting conventions will be used at the implementation level
> (e.g., whether the publisher is silly enough to use XML).

Yes and no. This is where implementation matters. If one passes
individual arguments in a message signature, then clearly everyone needs
the same signature. If one passes an XML blob or an object, then each
consumer merely needs to know how to extract the information it needs
and doesn't care if there might be other information in there.
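A toy illustration of that point (hypothetical Python with invented field names): each consumer extracts only the keys it needs, so other information in the message is invisible to it and adding a field for one consumer does not force the others to change.

```python
# One reusable message; field names are made up for illustration.
message = {"order_id": 42, "total": 99.5, "customer": "ACME", "priority": "high"}

def billing_consumer(msg):
    # Needs only the order id and total; ignores everything else.
    return (msg["order_id"], msg["total"])

def routing_consumer(msg):
    # Needs only priority; unaware that "total" even exists.
    return msg["priority"]

billing = billing_consumer(message)   # (42, 99.5)
route = routing_consumer(message)     # "high"
```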

> Everything /you /are talking about is low
> level 3GL implementation in a particular computing environment. Here,
> though, none of that type system coupling is present and we see real
> minimum coupling. Hopefully, however the AAL is implemented will be
> the minimum possible coupling for the given environment.

No argument that, at the model level, one can't see any of this. It is
a question of how one translates the model into an implementation.


Scott Finnie

unread,
Apr 7, 2013, 5:25:19 PM4/7/13
to umlf...@googlegroups.com, Dan George, H. S. Lahman, tho...@cintegrity.com
Dan, that's a wonderfully clear, precise and comprehensive description of the situation.  Couldn't agree more.

- Scott.

On 07/04/2013 18:19, Dan George wrote:
<aside author="H.S. Lahman" >

This is why I said to George that I think coupling is largely irrelevant at the OOA/D level. You already have a minimum coupling specification of the semantics in OOA/D. You depend on the OOP developer to do the right thing when implementing it. Shameless plug: And if you are a translationist, you don't care if there is lots of coupling at the 3GL level because you don't maintain the 3GL code; it gets regenerated automatically when the model semantics change. B-)
</aside>

AND

"No argument that, at the model level, one can't see any of this.  It is a question of how one translates the model into an implementation." - Mercer-Hursh

That was the guidance I was looking for. "Data-Centric" architecture is just an architecture. It does not change the functional requirements. When functional requirements call for interaction between objects, we might apply different designs but the design choice does not change the functional requirement.

Methodology will have an overall practical impact. The elaboration advocates will strive to make the code match the model. They do this by necessity because they don't have a means, such as formal translations, to maintain traceability. They pay for it with some amount of obfuscation as the analysis model morphs to reflect the form of the design. If I were using elaboration, I'd probably make that trade-off. Since I'm using translation, I can keep the analysis model clear and concise and, because it is clear and concise, I can validate it and maintain it. (Eric Evans' Domain-Driven Design methodology is a really clear example of the elaboration approach.)

I can now proceed with developing my translations with the confidence that the EUML/MBSE rules in the OOA/D model remain valid even if the architecture is "Data-Centric."

Thank you,
Dan
--
--
You received this message because you are subscribed to the Google
Groups "UML Forum" group.
Public website: www.umlforum.com
To post to this group, send email to umlf...@googlegroups.com
To unsubscribe from this group, send email to
umlforum+u...@googlegroups.com
For more options, visit this group at
http://groups.google.com/group/umlforum?hl=en
 
---
You received this message because you are subscribed to the Google Groups "UML Forum" group.
To unsubscribe from this group and stop receiving emails from it, send an email to umlforum+u...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
 
 

H. S. Lahman

unread,
Apr 8, 2013, 11:26:55 AM4/8/13
to umlf...@googlegroups.com
Responding to George,

I can now proceed with developing my translations with the confidence that the EUML/MBSE rules in the OOA/D model remain valid even if the architecture is "Data-Centric."

At the risk of raining on your parade, I don't think it is that simple in general. I agree that is true for what we would consider external bridges. However, I don't think that is true for design within subsystems at the object level.

With the technologies we have today, I don't think it is possible to be completely independent of an architecture at the 4GL level. One example is that, as translationists, we always use object state machines and an asynchronous model to deal with behavior interactions. When we do that, we have certain expectations about how the architecture is implemented (e.g., two events between the same state machines will be consumed in the same order that they were generated; we can't design interacting state machines unambiguously in all cases without that architectural assumption).
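The ordering expectation can be sketched as follows (illustrative Python, not any particular architecture's implementation): a single FIFO event queue trivially satisfies the pairwise guarantee the models depend on, namely that two events between the same state machines are consumed in the order they were generated.

```python
from collections import deque

class EventQueue:
    """Architectural guarantee assumed by the models: events are consumed
    in generation order, so ordering between any pair of state machines
    is preserved."""
    def __init__(self):
        self.queue = deque()

    def generate(self, target, event):
        self.queue.append((target, event))

    def dispatch_all(self):
        consumed = []
        while self.queue:
            target, event = self.queue.popleft()
            consumed.append((target, event))   # target.consume(event) in a real VM
        return consumed

q = EventQueue()
q.generate("B", "E1")
q.generate("B", "E2")
order = q.dispatch_all()   # E1 reaches B before E2, as generated
```

Real architectures use per-instance or priority queues, but whatever the mechanism, the translation must preserve this pairwise ordering or the state models become ambiguous.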

But in the data-centric case, I think it goes much further than that. The analogy I would make is translating to a Functional Programming language, like Haskell. You might be able to do it, but the transformation engine would be a nightmare to design because the paradigms are very different (if not actually incompatible). That's because there is no persistent data. (A weak analogy would be that the overall design looks like a huge DFD except there are no data stores.) I think the same thing is true of a data-centric architecture. In a data-centric architecture, algorithmic sequence is essentially meaningless because every action is triggered by a data update. I think that has some important implications for how one would daisy-chain actions together to solve a problem. I think that to ensure correctness for data integrity and referential integrity, one would have to think very differently about the design. (To drive a weak analogy into the ground, the overall design would look like a DFD where all the arrows were to/from data stores with none between processes.)
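For concreteness, here is a minimal sketch (hypothetical Python) of the data-centric triggering style being described: there is no algorithmic sequence and no explicit message, and every downstream action fires off a data write.

```python
class Attribute:
    """Data-centric sketch: no object sends 'I'm done'; actions daisy-chain
    purely through attribute updates."""
    def __init__(self, value=None):
        self.value = value
        self.watchers = []

    def set(self, value):
        self.value = value
        for watcher in self.watchers:
            watcher(value)   # downstream actions fire on the write itself

volume = Attribute()
log = []
# The downstream action is coupled only to the data, not to whoever wrote it.
volume.watchers.append(lambda v: log.append(f"recompute mass for volume {v}"))
volume.set(3.0)
```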

H. S. Lahman

unread,
Apr 8, 2013, 12:02:09 PM4/8/13
to umlf...@googlegroups.com
Responding to Mercer-Hursh...


> In any publish/subscribe infrastructure, the subscriber needs to be
> notified when a message of interest is placed on a queue. Otherwise,
> the subscriber is just polling a queue and it is not
> publish/subscribe. The most efficient way to do that is via a lookup
> table to address the notifications. (Not to mention that when the
> publisher adds a message, the intermediary needs to find the right
> queue address, based on message and publisher identity, to push the
> message.)

My point was that the connection is made entirely based on the channel, not anything to do with the message.   In individual object pub/sub, one subscribes to an event on an object, but in MOM pub/sub, one publishes and subscribes to a channel and the subscriber receives all traffic on that channel.

That's an implementation detail. Somebody, somewhere has to make sure the publisher and subscriber are both talking to the right channel, topic, queue, or whatever. The channel, or whatever, has identity and both need to map that identity properly through the intermediary during registration.



> Right. Then they can sit around and bemoan the fact that their
> applications are difficult to maintain because there is a sea of
> global data floating around and nobody knows where and if it is
> actually being used. Developers who do things like that should have
> their thumbs broken. That sort of thing was a no-no in the '70s!

Obviously, one needs to do these things with discretion.   While one clearly doesn't want one global message containing all information that anyone might want to use, one also doesn't want 79 different messages, each with a slight variation of the contents.

I couldn't disagree more. We have decades of experience to indicate that passing data that the receiver never needs is a Bad Idea. Don't Do That! (It is also usually a bad idea to provide data that the receiver only sometimes needs, but there are some, relatively rare, situations where that is necessary. You can almost always get around that because the condition for using the data is almost always external to the object semantics.)

Note that doing so breaks encapsulation in an egregious way. The receiver now knows that the sender has that piece of information, even when the receiver has no need of it. That's unnecessary coupling, baby! One manifestation of that is: what happens when no one needs that data anymore and it should be removed from the source? How do you know without looking at every possible receiver?




> LOL. Really? The subscriber gets connected to the right queue by its
> fairy godmother? There has to be an explicit identity mapping
> within the intermediary implementation to deal with the inherent *:*
> relationship between subscribers and publishers. Dealing with that
> *:* relationship is what publish/subscribe infrastructure does.

This is a design issue, not the MOM software looking at message contents.   At design time one has a catalog of messages and channels and chooses the ones that one is interested in. 

That's exactly what I am talking about when I keep saying the identity mapping is an issue for registering the subscription rather than the transmission.



> Both have to have exactly the same understanding of the
> expected semantic data content and exactly the same understanding of
> what formatting conventions will be used at the implementation level
> (e.g., whether the publisher is silly enough to use XML).

Yes and no.  This is where implementation matters.   If one passes individual arguments in a message signature, then clearly everyone needs the same signature.   If one passes an XML blob or an object, then each consumer merely needs to know how to extract the information it needs and doesn't care if there might be other information in there.

This coupling was about the encode/decode of the data packet. It doesn't matter how the data packet was transmitted; publisher and subscriber must map the content exactly the same way and that is coupling.

One reason the OO paradigm uses by-value data packets in simple messages is to force encode/decode of the data packet. That way, sender and receiver only share the definition of the data packet. IOW, neither knows anything about type signatures, storage, computation, or anything else in the other entity.
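A minimal sketch of that idea (hypothetical Python; JSON chosen arbitrarily as the encoding): sender and receiver share only the definition of the data packet, and nothing about each other's type signatures, storage, or computations.

```python
import json

# The shared contract is only this: the packet's field names and meanings.
PACKET_FIELDS = ("mass", "units")

def encode(mass_value):
    # Sender copies current values into the packet; by-value, no references
    # back into the sender's internals.
    return json.dumps({"mass": mass_value, "units": "kg"})

def decode(packet):
    # Receiver decodes against the same shared packet definition.
    data = json.loads(packet)
    return tuple(data[f] for f in PACKET_FIELDS)

roundtrip = decode(encode(12.5))   # (12.5, "kg")
```

The coupling is real but minimal: if the packet definition changes, both sides change; nothing else about either side is visible to the other.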



> Everything /you/ are talking about is low
> level 3GL implementation in a particular computing environment. Here,
> though, none of that type system coupling is present and we see real
> minimum coupling. Hopefully, however the AAL is implemented will be
> the minimum possible coupling for the given environment.

No argument that, at the model level, one can't see any of this.  It is a question of how one translates the model into an implementation.

Then what are we debating? At the OOA/D level where things are designed, that sort of implementation isn't relevant. That is, the original discussion started with an assertion (presumably from some publish/subscribe vendor) that designing to publish/subscribe had an inherent advantage over OO design because of better coupling. If the coupling is only in the implementation, rather than the design, then that is not true.

Dan George

unread,
Apr 8, 2013, 1:48:14 PM4/8/13
to umlf...@googlegroups.com
Darn... these are good points. I wishfully wanted to limit DDS to providing attributes and calling it good. I will be compromising the goals of the architecture if there is any form of message passing. Rationalizing the DDS events as "essentially the same thing" as OOA events is not working. I had been mostly concerned with parameter-less events. The rationalization kind of works there if you wish really, really hard. Admitting that OOA inherently relies on events with optional by-value parameters could put me in checkmate. A system whose objects work with no coupling to the internal state of other objects (however indirect) and no ability to receive messages with values will look much different from one where those facilities are available.



Dan George

unread,
Apr 8, 2013, 1:31:46 PM4/8/13
to umlf...@googlegroups.com
I'm getting good at marching in the rain. I live in Northwestern Oregon :).

I recently got a rude awakening wrt synchronous access to properties. I naively supposed that the compiler writers had a solution for distributed objects. Nope. So, yes, I'm gaining appreciation for compromise. Still, it is good to have a basis from which to compromise.

Any time the properties change, DDS gives me a mechanism to implement the signal in the OO model as a change event on the topic. I have an "I'm done" but the "with what?" part isn't there. The OO model clearly shows "with what" in the context of the state machine. The DC architecture removes the state machine context. The subscriber has to infer the "with what," based only on the attribute values. More accurately, the receiver must not care why the values assumed their present state; the receiver only reacts to the values themselves. The DC assumption is that any coupling between one object, A, and another object B's internal state is unnecessary; the system will work even if A knows only the values of B's attributes. Hmmm...


Thomas Mercer-Hursh, Ph.D.

unread,
Apr 8, 2013, 5:45:18 PM4/8/13
to umlf...@googlegroups.com, H. S. Lahman
On 4/8/2013 11:02 AM, H. S. Lahman wrote:
> That's an implementation detail. Somebody, somewhere has to make sure
> the publisher and subscriber are both talking to the right channel,
> topic, queue, or whatever. The channel, or whatever, has identity and
> both need to map that identity properly through the intermediary
> during registration.

I think we have clarified the difference between model and
implementation. My point was that in the implementation, the MOM does
not do routing on the basis of message contents, except for the special
case of Content Based Routing, an ESB service that takes the place of an
external component.

> I couldn't disagree more. We have decades of experience to indicate
> that passing data that the receiver never needs is a Bad Idea. Don't
> Do That! (It is also usually a bad idea to provide data that the
> receiver only sometimes needs, but there are some, relatively rare,
> situations where that is necessary. You can almost always get around
> that because the condition for using the data is almost always
> external to the object semantics.)
>
> Note that doing so breaks encapsulation in an egregious way. The
> receiver now knows that the sender has that piece of information,
> even when the receiver has no need of it. That's unnecessary
> coupling, baby! One manifestation of that is: what happens when /no
> one/ needs that data anymore and it should be removed from the
> source? How do you know without looking at every possible receiver?

You have a point, and those are certainly standards I would adhere to in
direct object interactions, but I think there is a trade-off in
messaging between subsystems such as I have been discussing. There is
a choice here between making a message specific to producer and
consumer, which provides the cleanliness and traceability you advocate,
but which results in no message reuse and effective direct coupling of
those two components. If one accepts that not all consumers will
necessarily need all data elements in a message, then one gets message
reuse and loose coupling.

> Then what are we debating?

Good question!

> At the OOA/D level where things are
> designed, that sort of implementation isn't relevant. That is, the
> original discussion started with an assertion (presumably from some
> publish/subscribe vendor) that designing to publish/subscribe had an
> inherent advantage over OO design because of better coupling. If the
> coupling is only in the implementation, rather than the design, then
> that is not true.

I think my point is that in the implementation, there is a role for
pub/sub in communications between subsystems because of the flexibility
one gains from loose coupling. From our off-line discussions it is
clear you are unlikely to agree because that approach is full of horrors
like XML.

H. S. Lahman

unread,
Apr 9, 2013, 12:25:11 PM4/9/13
to umlf...@googlegroups.com
Responding to Mercer-Hursh...

>
>> I couldn't disagree more. We have decades of experience to indicate
> > that passing data that the receiver never needs is a Bad Idea. Don't
> > Do That! (It is also usually a bad idea to provide data that the
> > receiver only sometimes needs, but there are some, relatively rare,
> > situations where that is necessary. You can almost always get around
> > that because the condition for using the data is almost always
> > external to the object semantics.)
> >
> > Note that doing so breaks encapsulation in an egregious way. The
> > receiver now knows that the sender has that piece of information,
> > even when the receiver has no need of it. That's unnecessary
> > coupling, baby! One manifestation of that is: what happens when /no
> > one/ needs that data anymore and it should be removed from the
> > source? How do you know without looking at every possible receiver?
>
> You have a point and it is certainly standards I would adhere to in
> direct object interactions, but I think there is a trade-off in
> messaging between subsystems such as I have been discussing. There is
> a choice here between making a message specific to producer and
> consumer, which provides the cleanliness and traceability you
> advocate, but which results in no message reuse and effective direct
> coupling of those two components. If one accepts that not all
> consumers will necessarily need all data elements in a message, then
> one gets message reuse and loose coupling.

Possibly. B-) One way of viewing a subsystem, though, is that it is an
object on steroids and all the same methodological ideas apply to
interactions between them. That's why we have formalisms like the Bridge
Model.

>> At the OOA/D level where things are
> > designed, that sort of implementation isn't relevant. That is, the
> > original discussion started with an assertion (presumably from some
> > publish/subscribe vendor) that designing to publish/subscribe had an
> > inherent advantage over OO design because of better coupling. If the
> > coupling is only in the implementation, rather than the design, then
> > that is not true.
>
> I think my point is that in the implementation, there is a role for
> pub/sub in communications between subsystems because of the
> flexibility one gains from loose coupling. From our off-line
> discussions it is clear you are unlikely to agree because that
> approach is full of horrors like XML.

When you get into things like DDS, it becomes a default infrastructure.
I don't think it is a good idea to use it like that. I think the use of
publish/subscribe should be determined by the presence of one or both of
two specific problem space requirements. One is that publishers and/or
subscribers can come and go. That's the classic networking issue of who is
available. The second is the situation where there is a *:*
relationship between potential subscribers and publishers where only one
pair is linked at a time. This is the classic channel/port allocation
situation that is also common in networking. Since those requirements
are commonplace in distributed processing, it is not terribly surprising
that publish/subscribe is an attractive implementation architecture for
interprocess communication. But I don't see using it if publisher and
subscriber are always around and always mate with one another.

H. S. Lahman

unread,
Apr 9, 2013, 12:03:38 PM4/9/13
to umlf...@googlegroups.com
Responding to George...

Clearly I have too much time on my hands lately...

> I'm getting good at marching in the rain. I live in Northwestern
> Oregon :).

In my misspent youth I was a field geologist. Geologists are the only
people who regard rain as good working weather -- because it keeps the
deer flies off.

>
> I recently got a rude awakening wrt synchronous access to
> properties. I naively supposed that the compiler writers had a
> solution for distributed objects. Nope. So, yes, I'm gaining
> appreciation for compromise. Still, it is good to have a basis from
> which to compromise.
>
> Any time the properties change, DDS gives me a mechanism to implement
> the signal in the OO model as a change event on the topic. I have an
> "I'm done" but the "with what?" part isn't there. The OO model clearly
> shows "with what" in the context of the state machine. The DC
> architecture removes the state machine context. The subscriber has to
> infer the "with what," based only on the attribute values. More
> accurately, the receiver must not care why the values assumed their
> present state; the receiver only reacts to the values themselves. The
> DC assumption is that any coupling between one object, A, and another
> object B's internal state is unnecessary; the system will work
> even if A knows only the values of B's attributes. Hmmm...

OK. But isn't that the assumption one always makes with distributed
(external) bridges (i.e., the receiver can't know why it is getting the
message; only what it should do with it)?

I see two cases for distributed contexts, whether subsystems or
individual objects. One is the behavioral aspect where one entity did
something that the other needs to know about. That's the classic,
asynchronous I'm Done message. In a non distributed situation that
message would <almost always> have no data packet because the receiver
goes and gets whatever data it needs within subsystem scope. In the
distributed situation, though, the receiver cannot conveniently go get
the data it needs, so the relevant state data has to be in the message
data packet. (By "conveniently" I mean there are substantial data
integrity issues around potential distributed processing delays.
Including the relevant state data with the message at least ensures the
data was consistent with the reason the message was sent.)
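A sketch of such a distributed I'm Done message (hypothetical Python; the Volume/Density/Mass example is borrowed from earlier in the thread): the relevant state rides along by value in the data packet, so the receiver never reaches back across the bridge and the data stays consistent with the reason the message was sent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImDone:
    """Distributed 'I'm done' event: the relevant state data travels in
    the data packet, avoiding distributed-delay integrity problems that
    arise if the receiver fetches the data after the fact."""
    sender_id: str
    snapshot: dict   # state values as of the moment the event was generated

def on_done(event):
    # Receiver computes from the carried snapshot, which is guaranteed
    # consistent with why the event was sent.
    return event.snapshot["volume"] * event.snapshot["density"]

event = ImDone("A", {"volume": 2.0, "density": 5.0})
mass = on_done(event)   # mass computed from the carried snapshot
```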

The second case is essentially synchronization when both entities have a
view of the same data. In that case, a change in the "owner's" data
value must be communicated immediately to ensure data consistency. The
DDS sort of publish/subscribe is ideal for this -- with a big
reservation. In the OOA/D data access is assumed to be synchronous,
which ensures the synchronization. The problem is that implementation in
the distributed case is not synchronous (generally speaking). You need
Mellor's legerdemain of a synchronous wormhole that somehow produces the
same results as if it were synchronous. For example, the communication
infrastructure might ensure that no messages sent to the receiver after
the synchronization message will be consumed before the synchronization
message. I don't know enough about DDS to know whether it guarantees
something like that.

Thomas Mercer-Hursh, Ph.D.

unread,
Apr 9, 2013, 5:06:19 PM4/9/13
to umlf...@googlegroups.com, H. S. Lahman
On 4/9/2013 11:25 AM, H. S. Lahman wrote:
> Possibly. B-)

I think that is about as close as we are going to get! :)

> One way of viewing a subsystem, though, is that it is
> an object on steroids and all the same methodological ideas apply to
> interactions between them. That's why we have formalisms like the
> Bridge Model.

Whereas, I would be inclined to say that they were very similar, but not
identical, which is why one utilizes bridges and MOMs and the like in
connecting them, when one wouldn't use such things for individual objects.

> I think the use of publish/subscribe should be determined by the
> presence of one or both of two specific problem space requirements.

As a general principle, I agree, but with the qualification that these
requirements tend to apply to many subsystem boundary communications
(not all).

> One is that publishers and/or subscribers can come and go. That's the
> classic networking issue of who is available. The second is the
> situation where there is a *:* relationship between potential
> subscribers and publishers where only one pair is linked at a time.
> This is the classic channel/port allocation situation that is also
> common in networking. Since those requirements are commonplace in
> distributed processing, it is not terribly surprising that
> publish/subscribe is an attractive implementation architecture for
> interprocess communication. But I don't see using it if publisher and
> subscriber are always around and always mate with one another.

The second seems not quite on to me since there is not really a one to
one link made in a MOM. It is much more like a message is broadcast to
a channel and whoever is listening to the channel gets the message
without any direct connection being made. I.e., the sequence is
publisher to channel, then channel to one or more subscribers, with no
connection directly between publisher and subscriber.


Dan George

unread,
Apr 9, 2013, 5:56:29 PM4/9/13
to umlf...@googlegroups.com
"OK. But isn't that the assumption one always makes with distributed (external) bridges (i.e., the receiver can't know why it is getting the message; only what it should do with it)?" - H.S. Lahman

The receiver isn't getting a message at all but somehow knows that the observed object's attributes changed value. Could be polling or whatever. There is no indication of what the change means. I've understood the I'm Done message to relate to the behavioral contract. The I'm Done has some meaning in the context of that contract. For example, a clerk might not have a "busy" attribute but it might be required to signal when it is done being free or busy. Those two I'm Done signals would have different ids. In the DC system, I have to make a "busy" attribute.
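The contrast can be sketched like this (hypothetical Python, invented names): in the OO style the meaning travels in the signal's identity, while in the DC style there is no signal identity, so the meaning must be encoded as a published attribute that subscribers inspect after a change notification.

```python
# OO style: two distinct signals need no shared attribute at all;
# their identities carry the meaning:
#   clerk.send(DoneBeingBusy())  /  clerk.send(DoneBeingFree())
#
# DC style: the "busy" attribute exists only to carry that meaning.
class ClerkTopic:
    def __init__(self):
        self.busy = False          # attribute added to encode the contract
        self.subscribers = []

    def set_busy(self, busy):
        self.busy = busy
        for notify in self.subscribers:
            notify(self)           # says only "something changed"

topic = ClerkTopic()
seen = []
topic.subscribers.append(lambda t: seen.append(t.busy))
topic.set_busy(True)               # subscriber infers "done being free"
topic.set_busy(False)              # subscriber infers "done being busy"
```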

DDS isn't a message passing mechanism. It distributes object attributes. The attributes of object A are available in object B's execution environment within a specified time after changing. By delaying signals to B by that time, B can use the attributes as if A were local. A changes its attributes and sends "I'm done." The EUML VM holds the "I'm done," stopping time, to let DDS do its distributing. The EUML VM then dispatches the "I'm done" event and object B uses object A's attribute values.

Obviously, this requires a global clock and relatively high quality communications. It works even in a real-time system as long as the domain latency tolerance is commensurate with the communications bandwidth.
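A sketch of the delay scheme described above (hypothetical Python; the propagation bound is an assumed figure, and real DDS exposes QoS policies such as latency budgets rather than this API): each signal is held for the worst-case distribution time so the receiver's view of the attributes is guaranteed current when the event is consumed.

```python
import heapq

class DelayedDispatcher:
    """Hold each event for the assumed DDS propagation bound so attribute
    updates have arrived before the event is consumed. (Illustrative
    sketch only; not a real DDS or EUML VM interface.)"""
    PROPAGATION_BOUND = 0.010   # assumed worst-case distribution time, seconds

    def __init__(self):
        self.pending = []       # (release_time, event) min-heap

    def send(self, event, now):
        heapq.heappush(self.pending, (now + self.PROPAGATION_BOUND, event))

    def dispatchable(self, now):
        # Release only events whose hold time has elapsed (global clock assumed).
        ready = []
        while self.pending and self.pending[0][0] <= now:
            ready.append(heapq.heappop(self.pending)[1])
        return ready

d = DelayedDispatcher()
d.send("I'm done", now=0.0)
early = d.dispatchable(now=0.005)   # still inside the bound: held back
late = d.dispatchable(now=0.011)    # bound elapsed: safe to dispatch
```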

Still, if I'm not allowed to use an "I'm Done" signal in the architecture, the restriction is going to bleed into the application domain and result in a less clear requirements specification. Sigh.



H. S. Lahman

unread,
Apr 10, 2013, 12:02:42 PM4/10/13
to umlf...@googlegroups.com
Responding to Mercer-Hursh...


> I think the use of publish/subscribe should be determined by the
> presence of one or both of two specific problem space requirements.

As a general principle, I agree, but with the qualification that these requirements tend to apply to many subsystem boundary communications (not all).

Fine. I'm just saying that it's not a universal solution. You don't want to methodologically say, "Oh, I've got distributed communications, therefore I must use publish/subscribe."



> One is that publishers and/or subscribers can come and go. That's the
> classic networking issue of who is available. The second is the
> situation where there is a *:* relationship between potential
> subscribers and publishers where only one pair is linked at a time.
> This is the classic channel/port allocation situation that is also
> common in networking. Since those requirements are commonplace in
> distributed processing, it is not terribly surprising that
> publish/subscribe is an attractive implementation architecture for
> interprocess communication. But I don't see using it if publisher and
> subscriber are always around and always mate with one another.

The second seems not quite on to me since there is not really a one-to-one link made in a MOM. It is much more like a message is broadcast to a channel and whoever is listening to the channel gets the message without any direct connection being made. I.e., the sequence is publisher to channel, then channel to one or more subscribers, with no connection directly between publisher and subscriber.

That's another low level implementation issue. It is still touted as publish/subscribe and it will work just fine for *:* relationships with, at most, minor implementation adjustments.
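The publisher-to-channel-to-subscribers sequence described above can be sketched in a few lines. This is a minimal illustrative broker, not the API of any actual MOM or DDS product; the `Channel` class and topic name are invented for the example. The point it demonstrates is that publisher and subscribers share only a topic name, never a direct reference to each other.

```python
from collections import defaultdict

class Channel:
    """Minimal topic-based broker: publishers and subscribers know only
    the topic name, never each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Broadcast to whoever is currently listening; zero listeners is fine.
        for callback in self._subscribers[topic]:
            callback(message)

# Usage: two subscribers on one topic; the publisher is unaware of both.
bus = Channel()
received = []
bus.subscribe("clerk.status", lambda msg: received.append(("logger", msg)))
bus.subscribe("clerk.status", lambda msg: received.append(("monitor", msg)))
bus.publish("clerk.status", {"busy": False})
```

Note that adding or removing a subscriber requires no change to the publisher, which is exactly the "come and go" requirement discussed above.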

H. S. Lahman

unread,
Apr 10, 2013, 12:40:08 PM
to umlf...@googlegroups.com
Responding to George...

"OK. But isn't that the assumption one always makes with distributed (external) bridges (i.e., the receiver can't know why it is getting the message; only what it should do with it)?" - H.S. Lahman

The receiver isn't getting a message at all but somehow knows that the observed object's attributes changed value. Could be polling or whatever. There is no indication of what the change means. I've understood the I'm Done message to relate to the behavioral contract. The I'm Done has some meaning in the context of that contract. For example, a clerk might not have a "busy" attribute but it might be required to signal when it is done being free or busy. Those two I'm Done signals would have different ids. In the DC system, I have to make a "busy" attribute.

I just see that as basic trading off of behavior vs. data in the design. The Clerk can interact directly by sending an asynchronous I'm Done event that triggers a transition somewhere else or it can set an attribute that somebody else looks at synchronously.

I then think once you opt for the design strategy of solving the problem with data that publish/subscribe becomes relevant. Does someone need to know immediately that the data has changed? Then you need to publish.

But I think that brings up an interesting point for design at the subsystem level. If someone within the subsystem needed to know when the data changed, that would tend to push one towards the dynamic solution strategy rather than the static solution strategy. IOW, the complexity of publish/subscribe would tend to drive one to the more direct dynamic strategy -- I'm Done changing the state and you need to know that, so I'll tell you directly.
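The behavior-vs-data trade-off for the Clerk can be made concrete with a small sketch. This is an illustrative toy, not EUML or DDS code; the `Clerk` class and its members are invented for the example. The dynamic strategy delivers an "I'm Done" event directly; the static strategy exposes a `busy` attribute for someone else to read.

```python
class Clerk:
    """Two ways for a Clerk to announce it has finished, per the trade-off above."""
    def __init__(self, on_done=None):
        self.busy = True          # static strategy: state exposed as an attribute
        self._on_done = on_done   # dynamic strategy: a direct "I'm Done" target

    def finish(self):
        self.busy = False               # data solution: others read this attribute
        if self._on_done is not None:
            self._on_done(self)         # behavior solution: tell them directly

# Dynamic strategy: the interested party is told immediately.
notified = []
clerk = Clerk(on_done=lambda c: notified.append("I'm Done"))
clerk.finish()

# Static strategy: the interested party reads the attribute when it cares to.
poller_view = clerk.busy
```

The data-centric rules discussed in the thread would forbid the `on_done` path, leaving only the attribute for observers to poll or subscribe to.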


DDS isn't a message passing mechanism. It distributes object attributes. The attributes of object A are available in object B's execution environment within a specified time after changing. By delaying signals to B by that time, B can use the attributes as if A were local. A changes its attributes and sends "I'm done." The EUML VM holds the "I'm done," stopping time to let DDS do its distributing. The EUML VM then dispatches the "I'm done" event and object B uses object A's attribute values.

Obviously, this requires a global clock and relatively high quality communications. It works even in a real-time system as long as the domain latency tolerance is commensurate with the communications bandwidth.

A quibble, but I think you can do it without a global clock. You just need very precise local clocks and a knowledge of the offset between them at some point in time. Then you can compare message time stamps from different clock locales.
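The quibble above can be shown with a little arithmetic: given precise local clocks and a one-time measurement of the offset between them, time stamps from different clock locales become comparable. This is an illustrative sketch of that idea only; the function name and values are invented.

```python
def compare_timestamps(t_a, t_b, offset_b_minus_a):
    """Order two message time stamps taken in different clock locales,
    given the known offset (clock B minus clock A) measured once.
    Positive result: A's stamp is later, i.e., B's event happened first."""
    return t_a - (t_b - offset_b_minus_a)  # express B's stamp on A's timeline

# Clock B runs 5.0 s ahead of clock A.
# An event stamped 104.0 by B occurred at 99.0 on A's timeline,
# so it precedes an event stamped 100.0 by A.
delta = compare_timestamps(100.0, 104.0, 5.0)
```

Of course, in practice the offset drifts, so the "very precise local clocks" caveat is doing real work here.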

BTW, I would argue DDS is still using messages. The attributes of A don't get to B's environment by telekinesis. (Of course, DDS might do that via remote object access. Yech.)


Still, if I'm not allowed to use an "I'm Done" signal in the architecture, the restriction is going to bleed into the application domain and result in less clear requirements specification. Sigh.

Right. That's one reason why I think it would be a problem using publish/subscribe as a general design strategy. You have to daisy-chain data updates synchronously.

Dan George

unread,
Apr 10, 2013, 2:46:01 PM
to umlf...@googlegroups.com
"BTW, I would argue DDS is still using messages." - H.S. Lahman

Right. But, let's restrict the definition of "messages" to the outcome of GENERATE Foo:SomeEvent(parm0, parm1, parm2) TO (someObjectRef); Making attribute values available in a distributed deployment is a lower-level concept. Attribute distribution is the job of the DDS domain and doesn't show up as message passing in the application domain. The DDS domain's job is to distribute attribute values throughout the domain objects; it is the attribute distribution service domain.

The DC rules would not allow the GENERATE-TO statement. LINK implicitly maps to subscriptions, and the application domain interface must provide callbacks for DDS to cause events. Once an association has been created, the state model must explicitly handle or ignore the callback events. I'm just skeptical that the GENERATE-less model will be easy to validate.
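The LINK-maps-to-subscription idea can be sketched as follows. This is a hypothetical toy, not the DDS API or any EUML runtime; the `AttributeDistributionService` class, `link`, and `attribute_changed` are invented names. It shows creating an association implicitly subscribing the application, with remote attribute changes arriving as callback events for the state model to handle or ignore.

```python
class AttributeDistributionService:
    """Hypothetical service domain: LINK-ing to a remote object becomes a
    subscription, and remote attribute changes arrive as callback events."""
    def __init__(self):
        self._callbacks = {}

    def link(self, object_id, callback):
        # Creating the association implicitly subscribes the application.
        self._callbacks[object_id] = callback

    def attribute_changed(self, object_id, attrs):
        cb = self._callbacks.get(object_id)
        if cb is not None:
            cb(attrs)  # the application's state model must handle or ignore this

# Usage: the application LINKs to a remote clerk, then a remote change arrives.
events = []
dds = AttributeDistributionService()
dds.link("clerk-42", lambda attrs: events.append(attrs))   # LINK -> subscribe
dds.attribute_changed("clerk-42", {"busy": False})         # change -> callback event
```

Nothing in the application ever issues a GENERATE-TO here, which is exactly the property the DC rules enforce and the property the validation concern is about.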

I'm going to plow forward and keep my GENERATE statements that result in good ol' message sends in a service domain. Perhaps someone will show me how I can eliminate that service domain but if they cannot I'll still have a working system.



Thomas Mercer-Hursh, Ph.D.

unread,
Apr 11, 2013, 2:45:50 PM
to umlf...@googlegroups.com, H. S. Lahman
On 4/10/2013 11:02 AM, H. S. Lahman wrote:

> Fine. I'm just saying that it's not a universal solution.

> You don't
> want to methodologically say, "Oh, I've got distributed
> communications, therefore I must use publish/subscribe."

Note that MOMs typically have an option for direct wiring of components
when that is the indicated type of behavior.


H. S. Lahman

unread,
Apr 12, 2013, 11:36:15 AM
to umlf...@googlegroups.com
Responding to George...

> I'm just skeptical that the GENERATE-less model will be easy to validate.

So am I -- if one chooses the DDS approach as a design paradigm rather than a pattern.