"Four pillars" of OOP

Matthew Browne

Nov 29, 2023, 9:00:57 AM
to object-co...@googlegroups.com
Hi all,
I wanted to go back and solidify my understanding of some foundational
OOP concepts. Thanks to DCI and this group, I learned what I consider to
be the real essence of OO - how the whole point is to reflect a shared
mental model of the program, as well as Alan Kay's concept of objects as
mini-computers.

However, I think it's also important to learn some of the principles and
terminology of the "nerdier" aspects of OO / OOP. The question is, which
ones are still useful today? And what are the precise definitions of the
terms? This came to mind because I've been working with my team at work
on writing better-quality and more maintainable code and I find myself
using terms like "abstraction", "encapsulation", or "single
responsibility principle", (often even outside a traditional
object-oriented context, e.g. talking about encapsulation of React
components) but I think the best way to actually explain these concepts
would be to go back to basics - a sort of refresher course on
object-orientation, which BTW many junior Javascript developers these
days aren't very familiar with in the first place (and that may be a
good thing—a clean slate!).

I started by looking up the so-called "four pillars" of object
orientation as they are often taught nowadays (and have been for many
years): abstraction, encapsulation, inheritance, and polymorphism. I
think many of us in this group already see a problem here: should
inheritance be in this list? But my first question is, where did these
pillars come from? I have been searching and reading various articles
and forum posts, and I haven't been able to trace the history yet. One
Quora answer pointed me to Grady Booch's classic book "Object-Oriented
Analysis and Design with Applications," but I read the relevant chapter
of that book and it defines a slightly different set of four principles:

1. Abstraction
2. Encapsulation
3. Modularity
4. Hierarchy

Does anyone here know how these ended up evolving into abstraction,
encapsulation, inheritance, and polymorphism in the teaching of OOP? Or
did the latter "four pillars" originally come from a different source?

And more importantly, which of these "pillars" are still useful and most
essential to the original OO vision? And to DCI?

These are big questions of course. But I think this discussion could be
helpful not only for the understanding of OO as originally intended, but
also to think about which concepts might be most useful for teaching a
new programmer DCI, alongside the DCI-specific concepts.

Thanks,
Matt

James Coplien

Nov 29, 2023, 9:36:44 AM
to object-co...@googlegroups.com
The number one pillar of OO is messaging: anything else is beyond secondary

That requires a MOP.

Your description seems to apply to abstract data types rather than OO. Both were emerging in the OOPSLA culture of the 1980s, and I think they got confounded.

As for these principles:

1. Abstraction is evil. Read “Abstraction Descant” by Gabriel. We need compression. Abstraction means to throw something away in the interest of focusing on something else. (DCI has what might be called an “abstraction boundary” below it, to its virtual machine, but that is something upon which one builds rather than being something one plies in the creation of objects.)

2. Encapsulation and modularity are nothing new. Procedures encapsulated algorithms just fine. Classes don’t even do that. Their instance methods suffer from global data — now gloriously called instance variables.

Modularity is a hopeless Pollyanna wish. Following it leads to unmaintainable code. Recent research by Ackoff in fact shows that more highly coupled systems are more maintainable than decoupled systems, because the coupling provides a trail of bread crumbs to support impact-of-change analysis. If you’re programming microservices with the pretense that each can be evolved independently, you find that such wishes are fantasy. The loose coupling makes it difficult to find what other services should suffer change to coordinate with a change local to the evolving service.

DCI of course skirts this with multiple overlapping or orthogonal module structures. An object is always participating in multiple modules: for starters, a Role and its Context, and its Class. This cross-cutting captures essential complexity. The goal is not to reduce complexity, but to capture and manage it. Modularity almost always seeks to reduce complexity.

4. I don’t think hierarchy is a principle. One of the main challenges of OO over the years has been a quest to destroy hierarchy; see the Elephant paper. Systems are much more complicated than a hierarchical model affords.
> --
> You received this message because you are subscribed to the Google Groups "object-composition" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to object-composit...@googlegroups.com.
> To view this discussion on the web visit https://groups.google.com/d/msgid/object-composition/68396f1c-2135-4bff-af0e-18b1b7bf2bf5%40gmail.com.

Egon Elbre

Nov 29, 2023, 9:49:37 AM
to object-composition
Could be relevant -- I remembered an article examining what people find essential in object orientation:
(Also see the references)

> The question is, which ones are still useful today?

For the code perspective, I would probably start from the humane side first; e.g. something along the lines of

* principle of least astonishment
* reasonable working memory load
* principle of locality
* clarity of meaning
* semantic compression

There are many ways of achieving these properties, and there is tension between them.
Similarly you can use a single tool (e.g. inheritance) to make these properties both better and worse.

But, I haven't completely thought through this list at the moment. So, there might be a better formulation of it.

James Coplien

Nov 29, 2023, 10:03:48 AM
to object-co...@googlegroups.com

On Nov 29, 2023, at 15:49, Egon Elbre <egon...@gmail.com> wrote:

For the code perspective, I would probably start from the humane side first; e.g. something along the lines of

* principle of least astonishment

Show me some research that justifies this is so, or a formulation that it is even a distinct design goal of the paradigm.


* reasonable working memory load

Ditto. This is the goal of any design paradigm.

* principle of locality

Well, DCI kind of shows that it failed at that.


* clarity of meaning

Shallow— that is just semantics. Most software problems are hermeneutical.


* semantic compression

Procedural APIs provide a very nice compression of the semantics of an algorithm.

Egon Elbre

Nov 29, 2023, 10:37:42 AM
to object-composition
To clarify, I didn't intend these as being the principles or design goal of OOP.

They were meant more as general overarching goals, regardless of which paradigm you end up using.

If the goal is to teach junior developers, then stating "you can keep only a limited amount of stuff in your head at a time, so don't try to cram as many ideas onto one page as possible" is a good start. Then it's possible to discuss different ways of countering that limitation.

Also, I don't have research to back these ideas up.

On Wednesday, 29 November 2023 at 17:03:48 UTC+2 Cope wrote:

On Nov 29, 2023, at 15:49, Egon Elbre wrote:

For the code perspective, I would probably start from the humane side first; e.g. something along the lines of

* principle of least astonishment

Show me some research that justifies this is so, or a formulation that it is even a distinct design goal of the paradigm.

I would say one part of this is idioms and usual ways of writing code.

e.g. if you end up switching how you write your for loop every time, you end up spending more effort in trying to understand things.

for(size_t i = 0; i < N; i++) {}
for(size_t i = 0; N > i; i++) {}
for(size_t i = 0; N >= i + 1; i++) {}

The other side, for example, when you know the system behavior then you would expect to find a similar description in code form.

* reasonable working memory load

Ditto. This is the goal of any design paradigm.

While it's a goal of any design paradigm, you can still use paradigms in a way that breaks this goal.

For example, rigidly sticking to single-responsibility principle could split highly coupled things into different files (or worse repositories) making it more difficult to work with them. This would worsen both locality of information and working memory load.

* principle of locality

Well, DCI kind of shows that it failed at that.

Oh, definitely.
* clarity of meaning

Shallow— that is just semantics. Most software problems are hermeneutical.


True. But, semantics can also be clear and unclear.

Similarly, if the code ends up with different types `User1`, `User2`, `User3` in a small codebase instead of naming them account, credentials, demographics... then I would consider that a significant problem.

Matthew Browne

Nov 29, 2023, 4:59:27 PM
to object-co...@googlegroups.com

Thanks for the responses!

@Cope

Abstraction is evil
I remember many past comments on this group where you said that, and I actually already read "Abstraction Descant" to get a better understanding of abstraction vs. compression—although that seems not to be the main focus of that particular essay, which mostly talks about how abstraction becomes a problem when taken too far. (I should go back and read his earlier essay in the same book about compression.)

I liked the restaurant analogy of abstraction vs. compression in this short article, which I think echoes some of the concepts you've mentioned here before and also in the Lean Architecture book.

An interesting thing I noticed when looking at various OOP resources talking about "abstraction" is that different people seem to define it in different ways. A note before I continue: I'm not seeking to debate you on this (which I'm certainly not qualified to do), but simply to understand it better, and also to point out that there seem to be a variety of definitions even in the more academic literature. Some definitions even seem to use it to mean the same thing as how you and Gabriel define compression, i.e. a shorthand to compress shared understanding without actually throwing any information away. For example, Booch defines abstraction as being relative to the perspective of the viewer, which seems compatible with the idea of compression if I understand correctly (illustration below from his book):

There also seem to be some cases where throwing information away is actually desirable, e.g. when interacting with a relational database you can run SELECT and UPDATE queries without having to think about how those are implemented...but I suppose that's a different case because it's at the system level rather than objects within one system.

Setting aside possibly incorrect definitions of abstraction, if compression is what we want, the more important question is: is it an essential principle of object orientation (recognizing that it is also a good practice for any paradigm), or more of a secondary principle that would be better to introduce to new developers a bit later?

The number one pillar of OO is messaging: anything else is beyond secondary
Could you explain this a bit more? Messaging is definitely a foundational principle and I have found thinking in terms of messaging quite helpful. I remember some past exchanges between you and Trygve where Trygve was emphasizing that there's an important distinction between sending a message to an object and simply calling a method on an object. I got the impression that he was actually more concerned about this than you were.

In practice, I rarely implement any objects where the message sent to an object doesn't exactly match the name of the method that's responsible for handling that message, but there have been some rare exceptions. For example, consider a UserRepository object that can handle various 'find' messages like 'findByName', 'findByEmail', 'findByZipCode', etc - rather than writing separate methods for each of those, there could be a single method that handles all of those (delegating to helper or utility functions as needed). In Smalltalk this would be handled by writing a 'doesNotUnderstand:' method to tell it what to do with the unrecognized message (not all so-called OO languages support such a feature, which Trygve criticized, I think rightly so if we want to be true to the original OO vision).
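
A rough JavaScript sketch of that idea (all names here are hypothetical; a Proxy's `get` trap stands in for Smalltalk's `doesNotUnderstand:`):

```javascript
// Hypothetical UserRepository sketch: a single Proxy trap handles any
// 'findBy...' message, instead of one hand-written method per field --
// roughly what a Smalltalk doesNotUnderstand: handler would do.
const users = [
  { name: "Ada", email: "ada@example.com", zipCode: "02139" },
  { name: "Alan", email: "alan@example.com", zipCode: "10027" },
];

const userRepository = new Proxy({}, {
  get(_target, message) {
    const match = /^findBy([A-Z]\w*)$/.exec(String(message));
    if (!match) {
      throw new TypeError(`userRepository does not understand ${String(message)}`);
    }
    // 'findByZipCode' -> 'zipCode'
    const field = match[1][0].toLowerCase() + match[1].slice(1);
    return (value) => users.filter((u) => u[field] === value);
  },
});

console.log(userRepository.findByName("Ada").length);       // 1
console.log(userRepository.findByZipCode("10027")[0].name); // "Alan"
```

The message names never appear as methods anywhere; the object decides at run time how to respond, which is the decoupling of message from method that Trygve was pointing at.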

So the distinction between messages and methods is one thing, but I don't know how much that ties into your emphasis on messaging. While the distinction is nice to have, I think we can certainly still write OO code and think in OO without it, as long as we have objects that contain both state (or immutable data) and behavior that can communicate with each other. Am I on the mark here?

There's more to discuss, but this message is already getting too long so I'll stop here for now.

Thanks,
Matt

Raoul Duke

Nov 29, 2023, 7:46:16 PM
to object-co...@googlegroups.com
€0.02

re: messaging vs. invoking methods -> yeah i think mainly the advantage is the flexibility of an interface that isn't statically compiled into stone. of "bind as late as possible. then bind a little later than that." cf. alan kay. :-) cf. paul graham lisp debugger realtime fixing viaweb bugs. ...tho it can *really* suck to try to debug or maintain.

re: abstraction making initial development supposedly easier vs. hampering long term maintenance & understandability -> i think there cannot be a single static ascii solution. we need to be able to see edit run manipulate investigate watch debug query poke the code from several different representations. some textual refactoring eg linearize an inheritance tree so i can grok the code in one view rather than N disparate files; visual charts graphs flowcharts heatmaps you name it; etc. etc. etc.

Matthew Browne

Nov 29, 2023, 8:51:44 PM
to object-co...@googlegroups.com

Matthew Browne

Nov 29, 2023, 9:41:08 PM
to object-co...@googlegroups.com, Egon Elbre
On 11/29/23 9:49 AM, Egon Elbre wrote:
Could be relevant -- I remembered an article examining what people find essential in object orientation:
(Also see the references)

Kind of funny that inheritance made the top of their list at the time ;)

Still an interesting article though, thanks for sharing.

James Coplien

Nov 30, 2023, 8:28:50 AM
to object-co...@googlegroups.com

On Nov 29, 2023, at 22:59, Matthew Browne <mbro...@gmail.com> wrote:

There also seem to be some cases where throwing information away is actually desirable, e.g. when interacting with a relational database you can run SELECT and UPDATE queries without having to think about how those are implemented...but I suppose that's a different case because it's at the system level rather than objects within one system.

That’s what I mean by “abstraction boundary.”

But the code on the other side of that boundary is not abstract. It’s real code. And to the degree that it’s abstract instead of compressed, the user of that API makes possibly incorrect assumptions about the semantics.

Setting aside possibly incorrect definitions of abstraction, if compression is what we want, the more important question is: it is an essential principle of object orientation (recognizing that it also is a good practice for any paradigm), or more of a secondary principle that would be better to introduce to new developers a bit later?

Is coding an essential part of object orientation?

Was Kant’s philosophy object-oriented?

Are design and implementation central to object orientation?

You can always justify some answer as the correct one, even though it may be only *a* correct one.

The central principle of object orientation is messaging. There are 3,753 others, all of them secondary, at best, to this first principle.

James Coplien

Nov 30, 2023, 8:31:11 AM
to object-co...@googlegroups.com


On Nov 30, 2023, at 01:46, Raoul Duke <rao...@gmail.com> wrote:

we need to be able to see edit run manipulate investigate watch debug query poke the code from several different representations. 

That’s a process.

It applies especially to complex systems.

I think it is relevant to all software paradigms, since software almost always lands us in a complex domain.

I wouldn’t use it as a central principle of objects, anymore than having mass is a central principle of being a mammal.

Matthew Browne

Nov 30, 2023, 8:55:40 AM
to object-co...@googlegroups.com

Hi Cope,
I'm curious, if we restrict the conversation to only "atomic event architectures" where DCI roles/contexts would be less relevant, would you say your views on OO have changed a lot since the late 90s?

I looked up one of the chapters of your book "Multi-Paradigm Design in C++" where you discussed object orientation to see if it would provide clarifications on some of the things you've been saying here, and it looks like you defined it quite differently at that time, although I'm sure at the core you had similar goals.


James Coplien

Dec 1, 2023, 11:20:58 AM
to object-co...@googlegroups.com

On Nov 30, 2023, at 14:55, Matthew Browne <mbro...@gmail.com> wrote:

Hi Cope,
I'm curious, if we restrict the conversation to only "atomic event architectures" where DCI roles/contexts would be less relevant, would you say your views on OO have changed a lot since the late 90s

Yes, a lot.

I looked up one of the chapters of your book "Multi-Paradigm Design in C++" where you discussed object orientation to see if it would provide clarifications on some of the things you've been saying here, and it looks like you defined it quite differently at that time, although I'm sure at the core you had similar goals.

That was the popular interpretation of OO at the time. Tim Budd — also an MPD guy — had a slightly different interpretation based on the computational model, but I don’t think that it entailed messaging. To me, the computational model argument was always pretty silly because, in the end, it’s still a von Neumann machine.

Many OO / Smalltalk advocates of the old days were trying to sell the idea that objects were independent fiefdoms, each with their own thread of control. Indeed, some (esp. Dahl and Nygaard) thought of it that way, but programs still almost always ran on a single-threaded machine. I doubt that even today’s multicore architectures and their compilers strive to map objects to individual processors: it’s all just an attempt at an illusion that creates accidental complexity. I think that all such arguments can be understood in retrospect to try to approximate messaging, but even the high priests of OO of the day didn’t have the vocabulary or breadth of perspective to articulate that.

Matthew Browne

Dec 1, 2023, 6:18:34 PM
to object-co...@googlegroups.com

Hi Cope et al.,
I think I'm finally beginning to understand why so many of the folks familiar with early OO history (including Kay himself) emphasize messaging so much... (I also learned that Kay apparently regretted naming it "object-oriented" because "objects", while important, put too much emphasis on the objects rather than the messages.)

Feel free to let me know if I'm off-base on any of this; below is a description of what I'm gathering from my research.

It's much bigger than what I mentioned before about the technical distinction between a method call and a message: it's about designing good public interfaces and effective communication between objects.

Consider this illustration, copied from the book Practical Object-Oriented Design: An Agile Primer Using Ruby by Sandi Metz:

The application on the left is basically meant to represent spaghetti code - objects calling methods on each other that probably shouldn't even be part of their public interface - overly coupled and a lack of encapsulation. And to quote from the book, "The second application is composed of pluggable, component-like objects. Each reveals as little about itself, and knows as little about others, as possible." (Of course some systems might be legitimately complex enough to look more like the one on the left even when well-designed, but these are just examples.)

So this is why the conceptual difference between a message and a method call is so important: messages respect the boundaries of objects and take an "ask for what, instead of telling how" approach. Even if we mark internal helper methods as private, that doesn't mean we've necessarily done a good job of designing a public interface based on a mental model of how one object should request something from another.
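
A small JavaScript sketch of that distinction, loosely after the Trip/Mechanic example from Metz's book (the method names below are my own guesses, not the book's code):

```javascript
// "Telling how" vs. "asking for what", loosely after Metz's Trip/Mechanic
// example (method names are hypothetical, not the book's code).
class Mechanic {
  cleanBicycle(bike) { return { ...bike, clean: true }; }
  pumpTires(bike)    { return { ...bike, tiresPumped: true }; }

  // The "what": one message naming the Trip's actual need.
  prepareBicycle(bike) {
    return this.pumpTires(this.cleanBicycle(bike));
  }
}

const mechanic = new Mechanic();

// Telling how: the Trip dictates the Mechanic's internal sequence, so it is
// coupled to every step (and breaks if the procedure ever changes).
const hard = mechanic.pumpTires(mechanic.cleanBicycle({ id: 1 }));

// Asking for what: the Trip sends one message and never learns the steps.
const easy = mechanic.prepareBicycle({ id: 1 });

console.log(easy);  // { id: 1, clean: true, tiresPumped: true }
```

Both produce the same bicycle, but only the second leaves the Mechanic free to change how preparation works.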

Another quote:

Instead of deciding on a class and then figuring out its responsibilities, you are now deciding on a message and figuring out where to send it.

This transition from class-based design to message-based design is a turning point in your design career. The message-based perspective yields more flexible applications than does the class-based perspective. Changing the fundamental design question from “I know I need this class, what should it do?” to “I need to send this message, who should respond to it?” is the first step in that direction.


And of course these concepts are very present in the Actor Model, which we've discussed on this group before. (message continues below the quote):

On 12/1/23 11:20 AM, James Coplien wrote:

Many OO / Smalltalk advocates of the old days were trying to sell the idea that objects were independent fiefdoms, each with their own thread of control.

I assume those ideas shared a lot of similarities with the actor model.

It was actually this article, "Erlang is the Most Object Oriented Language" that led me to the book I quoted above.

From the article:

If we take Alan Kay’s definition of object orientation as canonical, then Erlang fits almost perfectly. Instead of “objects” per se, Erlang uses “processes” that are fully isolated constructs that can only communicate with each other through message passing.

Obviously we don't get that with Ruby, JavaScript, Java, C#, or even Smalltalk (at least in a non-distributed system), i.e. objects are not "processes" in their own right. (Note: I'm not literally talking about OS processes here, but I am talking about concurrency, among other things.) But that doesn't make the concept of message passing any less helpful in those languages. However, it does raise the question of whether we're really doing "message-oriented programming" (which seems like it would have been a better name for OOP) if we don't at least have either a message bus or an environment similar to Erlang.
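
For what it's worth, the "message bus" half of that can be sketched in a few lines of JavaScript (names hypothetical; what this obviously does not give you is Erlang-style process isolation):

```javascript
// Minimal message-bus sketch (hypothetical API). Senders publish to the bus
// rather than invoking a receiver directly; receivers subscribe to the
// message types they care about. Unlike Erlang mailboxes, everything here
// still shares one heap and one thread.
class MessageBus {
  constructor() { this.subscribers = new Map(); }
  subscribe(type, handler) {
    const list = this.subscribers.get(type) ?? [];
    list.push(handler);
    this.subscribers.set(type, list);
  }
  publish(type, payload) {
    (this.subscribers.get(type) ?? []).forEach((handler) => handler(payload));
  }
}

const bus = new MessageBus();
const received = [];
bus.subscribe("userRenamed", (payload) => received.push(payload));

bus.publish("userRenamed", "Ada");
bus.publish("userDeleted", 42);  // nobody subscribed; silently dropped

console.log(received);  // ["Ada"]
```

The sender knows only the message type, never the receiver, which is the decoupling being discussed above.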

As interesting as all of this is, getting back to my original goal when I started this thread, I wonder how relevant teaching fully "message-oriented programming" is for those working in today's most popular programming languages, or even in a non-distributed program written in trygve or other similar DCI system (which we could perhaps call "DCI classic"—I certainly think DCI could be extended, but I haven't seen many real examples of that yet). Based on what I described above, I think that messaging could still be a super important concept to teach for effective programming of a frontend TypeScript web application (which is my focus), but the kind of "messaging" possible there probably falls well short of what early OO thinkers were dreaming of.


Raoul Duke

Dec 1, 2023, 6:30:26 PM
to object-co...@googlegroups.com
€0.02

i feel like it is a gradient a spectrum you can sorta do the same thing in paradigm A in paradigm B. tho as DCI implementations show it ain't always pretty. turing-schmuring completeness.

so i fail to see how "ask don't tell" cannot be done with regular method invocations, all the java oop gurus recommended it too???

messaging vs. method invocation doesn't have to have much difference. objc has interception. of course swift is doing away with that for performance reasons. 

gradient is concerns like:
* sync vs. async. 
* call/return vs. pub/sub/etc. 
* single (logical, physical) thread vs. multi. 
* suitability to remoting/distribution. 
* how much it leads you to object capability style. 
* can you easily de/serialize state?
* nominal vs. duck typing. 
* ease of proxying/mocking. 
* ease of late binding / intercepting / dynamic linking. 
* robustness a la erlang otp. 


Matthew Browne

Dec 1, 2023, 6:45:38 PM
to object-co...@googlegroups.com
On 12/1/23 6:30 PM, Raoul Duke wrote:
so i fail to see how "ask don't tell" cannot be done with regular method invocations, all the java oop gurus recommended it too???

I agree. Basically what I was trying to say is that my earlier message missed the point: the important thing is the concept of messaging. Yes, one can make an argument that Smalltalk is more purely OO than Java because it has 'doesNotUnderstand' which decouples method calls from messages, but in practice in Smalltalk the messages almost always map directly to the methods anyway.

Whereas thinking in messages (at a basic level at least) can of course be done in almost any language, and certainly quite nicely in Java and other modern "OOP" languages.


Matthew Browne

Dec 1, 2023, 7:08:17 PM
to object-co...@googlegroups.com
Having said that, I still think that something like 'doesNotUnderstand' is a feature that OO languages should ideally have (and a number of them do of course). But IMO it's not a deal-breaker.

Dom

Dec 1, 2023, 7:25:30 PM
to object-composition
If we start with the perspective of objects having to send messages, the possibility of the message being sent to something that doesn’t understand it seems almost inevitable. If we flip things around and think about objects that want to know things, that want to “receive” certain messages, and then construct systems with this in mind then “I don’t understand” goes away.
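
One way to picture that flip in JavaScript (entirely hypothetical names): objects declare up front which messages they want to receive, and the dispatcher only ever delivers a message to someone who asked for it, so "I don't understand" has nowhere to arise:

```javascript
// Receiver-driven dispatch sketch (hypothetical names): an object registers
// the messages it wants; a message nobody wanted is simply not delivered,
// so no receiver ever sees a message it doesn't understand.
class Dispatcher {
  constructor() { this.interests = new Map(); }
  wants(message, receiver) {
    const list = this.interests.get(message) ?? [];
    list.push(receiver);
    this.interests.set(message, list);
  }
  send(message, ...args) {
    const receivers = this.interests.get(message) ?? [];
    return receivers.map((r) => r[message](...args));
  }
}

const dispatcher = new Dispatcher();
const auditLog = {
  orderPlaced(id) { return `audit: order ${id} placed`; },
};
dispatcher.wants("orderPlaced", auditLog);

console.log(dispatcher.send("orderPlaced", 7));  // ["audit: order 7 placed"]
console.log(dispatcher.send("orderShipped"));    // [] -- nobody wanted it
```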

Dom

Dec 1, 2023, 7:28:11 PM
to object-composition
What is it about messaging that requires a MOP?

Dom

Dec 1, 2023, 8:02:00 PM
to object-composition
When a function returns a value, it doesn’t have to worry about where it is going. What if we think of messages as being more like return values than like method calls?

We sometimes talk about calls being polymorphic (the recipient of the call handles it as it sees fit), but returns are polymorphic too. A value leaves a function and goes to whatever the caller happened to be (the returning function doesn’t know). The caller then does what it wants with the value it wanted.

Whilst we talk about worrying where a polymorphic call will go (hypergalactic goto), I don’t hear people taking the called function’s perspective and wondering where on earth the return is going to go.

This is the difference between making calls to try to make something external happen vs returning/producing values to be consumed by something.

Different sides of a coin, but the way we think about them is different.
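
A tiny JavaScript illustration of that symmetry (a hypothetical example, not from the thread): the producing function never knows its consumer, and two call sites "handle" the same return value in completely different ways:

```javascript
// Polymorphic-return sketch (hypothetical example): the function below has
// no idea where its value goes -- each call site consumes it as it sees fit,
// mirroring how a polymorphic call's receiver handles a message as it sees fit.
function measureTemperature() {
  return 21.5;  // this value goes "to whatever the caller happened to be"
}

// One caller renders the value for display...
const label = `${measureTemperature()} C`;

// ...another caller uses the same value to make a decision.
const heatingOn = measureTemperature() < 18;

console.log(label, heatingOn);  // "21.5 C" false
```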

James Coplien

Dec 2, 2023, 5:33:32 AM
to object-co...@googlegroups.com


On Dec 2, 2023, at 00:18, Matthew Browne <mbro...@gmail.com> wrote:

The application on the left is basically meant to represent spaghetti code - objects calling methods on each other that probably shouldn't even be part of their public interface - overly coupled and a lack of encapsulation. And to quote from the book, "The second application is composed of pluggable, component-like objects. Each reveals as little about itself, and knows as little about others, as possible." (Of course some systems might be legitimately complex enough to look more like the one on the left even when well-designed, but these are just examples.)

I don’t buy it at all.

The world is complex and lends itself neither to tidy interfaces nor to clean interaction diagrams. To me, the left is OO — the right is procedural programming in object wrappers. To achieve the right-hand picture, it is necessary to think beyond the interfaces and to think of the global structure. Object-oriented thinking is about thinking about local structure, propping it up as necessary to achieve the overall computation (see Beck, “Think like an Object.”)

I have always felt this, independent of DCI.

The picture on the right has hierarchy. There is no hierarchy in nature.

Kay says that the only software system he has seen with these properties is the Internet. It looks like the left-hand picture.

Clean organizational designers like nice organizational charts like the one on the right. If you look at what happens empirically it looks like the picture on the left. That is the nature of complex systems.

Have a look at selfOrganization.k in trygve. It is both the most object-oriented program I have ever written, and illustrates an example of how object-oriented interactions work.

James Coplien

Dec 2, 2023, 5:35:04 AM
to object-co...@googlegroups.com
On Dec 2, 2023, at 00:18, Matthew Browne <mbro...@gmail.com> wrote:

Instead of deciding on a class and then figuring out its responsibilities, you are now deciding on a message and figuring out where to send it.

This transition from class-based design to message-based design is a turning point in your design career. The message-based perspective yields more flexible applications than does the class-based perspective. Changing the fundamental design question from “I know I need this class, what should it do?” to “I need to send this message, who should respond to it?” is the first step in that direction.

CRC cards...

James Coplien

Dec 2, 2023, 5:37:40 AM
to object-co...@googlegroups.com


On Dec 2, 2023, at 01:25, Dom <dom.spi...@gmail.com> wrote:

If we start with the perspective of objects having to send messages, the possibility of the message being sent to something that doesn’t understand it seems almost inevitable. 

But this happens even in C++ and Java. It’s just that the compiler catches it at compile time.

One of the very clever things about trygve is that its type system catches such things while still giving all the flexibility you would expect.

Matthew Browne

Dec 2, 2023, 6:58:00 AM
to object-co...@googlegroups.com

Hi Cope,
You're right of course that real object-oriented systems rarely result in neat-looking interaction pathways, but I think I didn't explain the illustration well enough...

Let's call these application 1 and application 2:

(figure A)


In Application 1, the public interfaces of the objects expose too many implementation details (here "aTrip" knows lots of implementation details of "aMechanic"):



(figure B)


Contrast with application 2:



(figure C)

The concept of designing good interfaces for a network of collaborating objects isn't new to me of course, I just hadn't thought of it in terms of "messaging" before (although "network of collaborating objects" always requires messaging of course).

Perhaps the main failure of the first illustration (figure A) is that the author seems to be mainly thinking of interfaces and classes rather than individual objects at run-time—as soon as we shift our perspective from compile-time to run-time, I think things will always start looking more like the diagram for application 1.

Have a look at selfOrganization.k in trygve. It is both the most object-oriented program I have ever written, and illustrates an example of how object-oriented interactions work.

I will have to check that out in more detail; thank you!

--
You received this message because you are subscribed to the Google Groups "object-composition" group.
To unsubscribe from this group and stop receiving emails from it, send an email to object-composit...@googlegroups.com.

Matthew Browne

unread,
Dec 2, 2023, 2:08:03 PM12/2/23
to object-co...@googlegroups.com
On Fri, Dec 1, 2023, 8:02 PM Dom <dom.spi...@gmail.com> wrote:
When a function returns a value, it doesn’t have to worry about where it is going. What if we think of messages as being more like return values than like method calls?

The idea of the sender not worrying where the message is going is interesting. But I suppose one difference with function return values is that you would never want the return value to go somewhere other than back to the requester (or at least, I can't think of a reason why you'd want that).


James Coplien

unread,
Dec 2, 2023, 3:39:56 PM12/2/23
to object-co...@googlegroups.com
Ah, bubbles and arrows.

I’ve always liked Bertrand Meyer. He notes that the thing about bubbles and arrows is that they don’t crash.

CS people are enamored with their own idealistic world models, including those of coupling and cohesion. In telecom we have the 7-layer OSI model of protocols (physical layer, network layer, transport layer, etc.). It’s bullshit. To effect high availability through error recovery, errors that can be detected at, like, level four have to be mitigated at level two. The APIs are not conducive to corrective actions. That means that the code at level four needs to be able to directly access stuff down at level two and sometimes level one. Further, the higher protocol levels are all wrong. So high-reliability telecom code largely ignores the OSI model.

Complex systems have complex interactions, so the connectivity graph is pretty intense. There tend to be local communities of objects and the overall structure is more or less fractal in nature, except that it’s more of a lattice than the somewhat hierarchical structure of fractals.

If you did an empirical model of a real program, you would find the interactions much more complex than your picture shows. The picture is a procedural picture (“for each bicycle”) rather than an object picture (with N bicycles). In OO-ese it is more a class picture than an object picture. Proper understanding comes in terms of objects instead of classes — again, see the self-organizing game.

You’re preoccupied with interfaces. Which interface? The interface that the base class offers to its derived class? The interface that object X presents to object Y for all X and Y (you can’t just take the union of all of these, that’s meaningless)? The instance method interfaces? Class methods? Of just the object’s declared class, or the transitive closure of the interfaces of all base classes? If one method overrides a base class method, are both part of the interface? (Answer: yes, because you can still invoke the super method from within the object.) If one method hides a superclass method, are both part of the interface? MOP operations on Object? Outside of DCI I don’t know how to reason about object interfaces at run-time, unless:

1. One artificially restricts them by restricting the size of a class’s public interface, which I think is ridiculous; or
2. One restricts object interfaces to be the class interfaces.

This is not as cut and dried as the 1980s class-oriented-programming people would have it.

Dominic Robinson

unread,
Dec 4, 2023, 8:28:33 AM12/4/23
to object-co...@googlegroups.com
On Sat, Dec 2, 2023, at 7:07 PM, Matthew Browne wrote:
On Fri, Dec 1, 2023, 8:02 PM Dom <dom.spi...@gmail.com> wrote:
When a function returns a value, it doesn’t have to worry about where it is going. What if we think of messages as being more like return values than like method calls?

The idea of the sender not worrying where the message is going is interesting. But I suppose one difference with function return values is that you would never want the return value to go somewhere other than back to the requester (or at least, I can't think of a reason why you'd want that).

Yes, I would agree that we want return values to go back to the caller because that is a part of the function call... return control abstraction.

What I'm wondering is if there might be a form of "messaging" with a similar "I don't have to worry where this value (the message) is going" property.

Returning to call... return for a moment.  This is just a control abstraction (borrowing Dick Gabriel's term from Abstraction Descant) like any other. It had to be invented. There is nothing special about it beyond its ubiquity and undoubted utility.  Nowadays almost all CPUs have direct hardware support for a stack and call/return instructions.  This hasn't always been the case.  Some CPUs had a "return" register that would hold a single return address for a procedure call, but there was no stack unless this was implemented in surrounding software.

Back in the '80s when I was writing video games in Z80 assembler, the most efficient memory access instructions were those that used the hardware stack. It was a waste to use these only for function calls, so a lot of code didn't use calls and returns. Instead you would "call" a function by loading the "return" address into a register and jumping to the start of the function. The first instruction in the function would then write the return address into the operand field of a jump instruction at the end of the function (self-modifying code) to free up the register.  This wouldn't work for anything recursive or multi-threaded, but that was rare.

Where I am going with this is that it can be instructive to deconstruct things that we take for granted and look at what is really going on. The familiar features of typical languages exert an enormous anchoring bias. We must test our assumptions.

There is a wealth of other art to look at - Erlang's messages, Scheme's continuations (call/cc) and more recent ideas around algebraic effects (Koka, Haskell and elsewhere).

My feeling is that there are, possibly several, different control abstractions underneath this slippery concept of "messages". These can yield different options for how we compose pieces of software.

In reaching for insight as to what this might mean we should not limit ourselves to trying to describe messaging only in terms of the incumbent control abstractions in typical languages.

I don't think that "messages" are the method calls that we see in languages such as C++, Java or even trygve.


Dominic Robinson

unread,
Dec 4, 2023, 9:27:55 AM12/4/23
to object-co...@googlegroups.com
On Sat, Dec 2, 2023, at 10:37 AM, James Coplien wrote:

On Dec 2, 2023, at 01:25, Dom <dom.spi...@gmail.com> wrote:

If we start with the perspective of objects having to send messages, the possibility of the message being sent to something that doesn’t understand it seems almost inevitable. 

But this happens even in C++ and Java. It’s just that the compiler catches it at compile time.

The compilers can ensure type safety, but unless you write programs in such a way that the type system guarantees that an inappropriate (out of contract) call can never happen, there are still the run-time cases where a call, or the arguments passed to it, are contextually inappropriate and cannot be correctly understood.

So it depends on how you choose to define "not understood".

With stateful objects exposing methods that can be called externally, such out-of-context/inconsistent uses of methods are a major source of problems.

One of the very clever things about trygve is that its type system catches such things while still giving all the flexibility you would expect.

Its duck-typed roles?


Raoul Duke

unread,
Dec 4, 2023, 10:18:00 AM12/4/23
to object-co...@googlegroups.com

James O Coplien

unread,
Dec 4, 2023, 10:38:34 AM12/4/23
to object-co...@googlegroups.com


On Dec 4, 2023, at 14:28, Dominic Robinson <d...@spikeisland.com> wrote:

I don't think that "messages" are the method calls that we see in languages such as C++, Java or even trygve.

I have mixed feelings about this, depending on which side of the bed I get out of in the morning.

I think that most (but not all) of messaging is in the mind of the programmer rather than in the mechanics or semantics of the programming language. You can’t do anything in an OO language that can’t be done with one (and here’s the sticking point: or more) von Neumann machines. And the converse is true. That means it’s all about intentionality and mental models.

The first C++ (then C84) programs I worked on were in a C++ variant called C++/P. Each object executed in a named domain, and its individual methods could choose to execute either in that domain or in another domain. There were rules about which domains could access which other domains, so we were able to implement a rather fine-grained dual of capability machines. Some of the checking was done at compile time, but the true guarantees were at run time. The goal was to define domains for different invasive clients of a secure system, giving each client select capabilities to extend the system (customer programmability).

The C++/P objects had member functions that could behave either like member functions or like methods. Interactions between objects of most classes behaved as one would expect in C++. If a class was derived from class Object, then invocations on public member functions of that class would be served not by the compiler and linker but by an Object Manager, which would invoke the method of an object on the same processor. That supported complete independence between caller and callee, and facilitated run-time update of Object classes. I think that the invocations were synchronous. So we had CORBA before CORBA.

If a class was derived from class Task, its methods were name-served as with subclasses of Object, but the invocations were asynchronous. All such methods had to be typed void and allowed no return values. The computational model was: send and forget. I used to think that this is close to what messaging is about, and it is certainly part of what people sometimes think about when they use the term. I think that’s a half-truth. I think that most people think of the temporal asynchrony (shades of Simula 67) as being fundamental. Today, I think of the lookup mechanism as being fundamental. So both our “Object objects” and our “Task objects” supported what today I would call messaging. There’s a lot of architectural head-holding in figuring out where a message should go at run time.

There were no returns from Task methods, and using return values for non-commodity objects was considered bad practice. You could send a message to an object (just ordinary C++ member function invocation syntax) and expect the object to invoke one of your methods in return to supply a requested object. The standard practice was to pass “this” as a parameter designating the object to which the result should be returned. (Architecturally, “this” was a multi-word construct that included a processor ID. The system implemented a network of communicating PDP-11 pizza boxes, each one of which was a telephone set. The processors were connected by Ethernet. So we also probably had the first IP telephone network in the world.)

Back to Dominic’s statement: Based on similar reasoning, I view trygve methods as using messaging. You can’t imagine what code the compiler generates to figure out where to send a message at run time. Much of trygve was an experiment in designing an object virtual machine, and it’s a wondrous thing. The stack frames are different for role methods than they are for class instance methods — but you don’t know until run time which kind of receiver you have! And you can’t reasonably put the stack-packing logic into the receiver. Context methods use the same stack frame layout as classes — unless the Context object is itself a role-player, in which case it uses the alternative format. (The stack frame includes a Context pointer, for example, in addition to this, the actual parameters, the return address, the frame pointer, etc.) The decision for how to push stack contents (on the calling side) and how to unpack the stack (on the receiving side) is made at run time, since any given point of call can use either stack format. How to pop the stack on return is an even more dicey issue, since the message receiver has no way of knowing whether to pop off the Context pointer or not.

To pull this off requires a deep degree of reflection, though it’s not explicit at the source language level. C++ has a tiny bit of it, and if you use the right disciplines you can do real OO in C++. (Again, nothing here that isn’t Turing complete. You can do it in FORTRAN too, but the code becomes unreadable.) You don’t have to be so disciplined — or thorough — in trygve or the other DCI languages. They support an abstraction boundary so I don’t have to worry about the dispatching semantics, at all. Most of the time I don’t have to worry about them in C++, but there are times when the static type system prevents the kind of flexibility that true OO messaging supports (e.g. things akin to duck typing).

The trygve language is really about ten times as complex as it seems at first glance, and I take a lot of satisfaction from having been able to hide that complexity from its users. If you can imagine it, it’s probably possible… There is a bunch of additional complexity that relates to analyzing static and dynamic chains (global classes, classes inside of classes, classes inside Contexts, classes inside methods, Contexts inside methods, Contexts inside Roles, Roles inside Contexts, Contexts inside Contexts….) that make the code pretty hairy, but these just adorn that basic messaging logic at the core of OO semantics.

There are those who maintain that asynchrony (in one’s mental model) is essential to messaging and to OO. I tend to disagree, but given the Simula legacy of Smalltalk I’d sure like to hear Alan speak explicitly to this. He seems to never speak to the issue, almost as though he were avoiding it.

It is all those mechanisms that are necessary to support DCI's flexibility. I call that object-oriented programming, and the mechanisms that support the dispatching, I call messaging. Smalltalk started moving away from the C++ vtbl architecture to something somewhat more interpretive — but they never made it as far as trygve did. Or that C++/P did back in 1983. There’s really no difference between it and trygve in its Task dispatching semantics, except for the asynchrony, and except for the fact that a single point of call isn’t limited to the degree of reflection power necessary to bind the request as it was in C++/P.

James O Coplien

unread,
Dec 4, 2023, 10:50:04 AM12/4/23
to object-co...@googlegroups.com


On Dec 4, 2023, at 15:27, Dominic Robinson <d...@spikeisland.com> wrote:

The compilers can ensure type safety, but unless you write programs in such a way that the type system guarantees that an inappropriate (out of contract) call can never happen, there are still the run-time cases where a call, or the arguments passed to it, are contextually inappropriate and cannot be correctly understood.

The trygve language does this in a way that gives it Smalltalk-level flexibility tempered with C++-like type safety based on duck typing.

It’s impossible to get MessageNotUnderstood, and any client can invoke any method through a Role identifier on any object whose interface satisfies the Role interface. The type system guarantees that a Role identifier can be bound only to objects whose class (or interface) satisfies the contract; and, of course, all of the other standard static type guarantees are there between interfaces and either Contexts or classes.

This gives incredible flexibility, and that’s what messaging is for. So I can have a client C inside Context X that invokes a method M of some serving object of class S by binding an instance of S to some role X.R; C invokes the method as R.M. I can add a whole new class S2 to the program with a method M whose signature is the same as S.M — a class that knows nothing about C, X, or S — and arrange to bind one of its instances to X.R and invoke it just fine, with no adjustment to C, X or S. In C++ you need some kind of static type link tying things together. In trygve, the only analogous type link is at the point where the S2 instance is bound to X.R — potentially indirectly.

James O Coplien

unread,
Dec 4, 2023, 10:52:54 AM12/4/23
to object-co...@googlegroups.com
On Dec 4, 2023, at 15:27, Dominic Robinson <d...@spikeisland.com> wrote:

So it depends on how you choose to define "not understood”

I mean it in the sense that it is impossible at compile time to arrange a run-time binding that is guaranteed not to be a surprise to the programmer.

Pretty strong.

Dominic Robinson

unread,
Dec 4, 2023, 5:50:39 PM12/4/23
to object-co...@googlegroups.com
On Mon, Dec 4, 2023, at 3:38 PM, James O Coplien wrote:

To pull this off requires a deep degree of reflection, though it’s not explicit at the source language level.

Is this what you were alluding to when you said that messaging requires a MOP?

That is to say that the implementation layer, whether compiled, interpreted/run-time, or both requires reflective facilities?

C++ has a tiny bit of it,

C++ has much stronger capabilities in this area than most people realise, but it isn’t at all obvious. You’d be surprised.

and if you use the right disciplines you can do real OO in C++.

Certainly pretty much everything that trygve does and some things it doesn’t.

(Again, nothing here that isn’t Turing complete, You can do it in FORTRAN too but the code becomes unreadable.)

You can always do pretty much anything if you are prepared to squint, but the challenge is to do things in such a way as to achieve compression from the abstractions you provide, ensure they work predictably in combination, and feel natural to use both semantically and syntactically (not hideously ugly or requiring pages of typing) with the rest of the language.

You don’t have to be so disciplined — or thorough — in trygve or the other DCI languages. They support an abstraction boundary so I don’t have to worry about the dispatching semantics, at all. Most of the time I don’t have to worry about them in C++, but there are times when the static type system prevents the kind of flexibility that true OO messaging supports (e.g. things akin to duck typing).

Whilst C++ does require that the classes of objects that can play a role be subtypes of the declared role player type (not duck typed), I don’t think I’ve come across a situation where this causes a problem. There is an argument that the requirement to make this explicit through an abstract base type is a positive.

The trygve language is really about ten times as complex as it seems at first glance, and I take a lot of satisfaction from having been able to hide that complexity from its users. If you can imagine it, it’s probably possible…

Having built more than one C++ equivalent I think I can :).

There are those who maintain that asynchrony (in one’s mental model) is essential to messaging and to OO. I tend to disagree, but given the Simula legacy of Smalltalk I’d sure like to hear Alan speak explicitly to this. He seems to never speak to the issue, almost as though he were avoiding it.

It would be interesting from a historical point of view. Some of the early OO talks describe biological inspirations which, to me, seem necessarily asynchronous.

The asynchronous aspects become essential in some domains, at which point you have no option but to integrate them in some way. Your description of C++/P and Tasks resonates here.

Our current approach is to integrate both synchronous and asynchronous aspects in such a way that the two can be combined smoothly. You need to be aware of what is going on because they have different underlying models, but the code looks and feels very similar. You aren’t locked into one or the other.

Part of this is to make objects/contexts feel more autonomous. Even some of the synchronous code feels asynchronous (and can become so if necessary as the code evolves).

Both synchronous and asynchronous messaging bring their own distinct problems and complexities. You need facilities to mitigate each so that you can use whichever best suits each part of the problem. I think you need both.

The event/continuation mechanism (another form of dispatch) I talked about back in January is a form of control abstraction that can provide very high levels of compression of control flow code.

It is all those mechanisms that are necessary to support DCI's flexibility. I call that object-oriented programming, and the mechanisms that support the dispatching, I call messaging.
 
Oh, I’ll buy that as a classification.


Dominic Robinson

unread,
Dec 4, 2023, 5:55:25 PM12/4/23
to object-co...@googlegroups.com
On Mon, Dec 4, 2023, at 3:17 PM, Raoul Duke wrote:
Thanks Raoul, this looks interesting…

Matthew Browne

unread,
Dec 4, 2023, 9:43:47 PM12/4/23
to object-co...@googlegroups.com

Hi Cope,
Based on the self-organization example, I would have thought you were basically saying that a system is only object-oriented if it has emergent behavior—if it were not for your more recent messages.

I know that Kay had ambitious long-term goals that weren't fully realized in Smalltalk, but I think that Smalltalk could still be a useful starting point to consider how well a language supports the messaging paradigm—even if the answer turns out to be "not very well" (although that would sound odd given that Kay himself was the lead creator of it).

I think this would be a good time to bring up your recent discussion with Marius about DCI—I wish I had watched it sooner (I was very busy at the time it was posted and forgot about it). It really helped me understand your points about abstraction and compression, and it seems that the contrast you made between the Dynabook vision and fully emergent systems (perhaps complementary but different) is very relevant to what we're talking about.

The other point in that video is that at the end of the day, the most important goal of DCI is readable code. Is that also the most important goal of OO, even before we bring DCI into the picture? I'm not so sure... with things like emergent systems or fully asynchronous messaging, some aspects might become simpler, but others become much more complex. In some systems this is essential complexity of course (rather than accidental complexity), or maybe I'm misunderstanding and it's not necessarily any more complex. But my real goal is to define some concepts that will help even junior developers writing relatively simple programs to write better and more readable code that expresses a shared mental model of the system.

I think that with the trygve language, you have succeeded in creating a very approachable language, but if I understand correctly there are different levels of messaging we are talking about here. What makes the self-organization example special is that it's deliberately setting up objects in a way that's similar to how cells work in a biological organism. And of course not every trygve program will do that, or needs to do that.

In addition to better defining "messaging", maybe we need to back up and ask why messaging is so important, and in what contexts.

And just to get back and answer an earlier question you asked:

You’re preoccupied with interfaces. Which interface?

I simply meant the public methods of an object—or something roughly equivalent. These days I actually write React components more often than I write classes (or Contexts, sadly). For better or worse, modern-day React components are no longer defined using classes: they are defined using functions but they can still have internal state. In the case of React components, the public "interface" would be the "props" that the component accepts, which can optionally include callback functions to handle certain events.

Matthew Browne

unread,
Dec 4, 2023, 11:38:46 PM12/4/23
to object-co...@googlegroups.com

An interesting comment here about the Julia language:

Julia is fully object oriented. The core of OO is not to write a dot between an object and an associated function but that one has virtual functions. Multiple dispatch provides just that.

James O Coplien

unread,
Dec 5, 2023, 5:53:26 AM12/5/23
to object-co...@googlegroups.com
Dominic,

On Dec 4, 2023, at 23:49, Dominic Robinson <d...@spikeisland.com> wrote:

On Mon, Dec 4, 2023, at 3:38 PM, James O Coplien wrote:

To pull this off requires a deep degree of reflection, though it’s not explicit at the source language level. 

Is this what you were alluding to when you said that messaging requires a MOP?

That is to say that the implementation layer, whether compiled, interpreted/run-time, or both requires reflective facilities?

Yup.


C++ has a tiny bit of it,

C++ has much stronger capabilities in this area than most people realise, but it isn’t at all obvious. You’d be surprised.

Actually, I probably wouldn’t.

I don’t think you understand my history with C++.


and if you use the right disciplines you can do real OO in C++.

Certainly pretty much everything that trygve does and some things it doesn’t.

I always thought that you were someone who could think above the level of Turing machines. While what you say is true at that level, it is hardly true at the level of code intentionality.

The trygve language was designed to explore DCI, not to provide a low-level object-oriented assembler (which is more or less what Stroustrup calls it) optimized for bitwise and bytewise operations, with a broken MI model, and with a template system that is Turing complete.

And trygve arguably supports DCI. Though the first running DCI-ish program in the world was some C++ code I wrote using metaprogramming, I don’t think you can do a satisfyingly general DCI implementation in C++. That is what this discussion is about. If you want to justify your intimations with an implementation you’re welcome to try.

The test would be whether an object of a new class can play a Role without the author of that class needing to be aware of the Role. That is necessary for, say, library vendors who want to make classes whose objects can play Roles in the code of their clients.

C++ creates a coupling hell out of this.


(Again, nothing here that isn’t Turing complete, You can do it in FORTRAN too but the code becomes unreadable.) 

You can always do pretty much anything if you are prepared to squint, but the challenge is to do things in such a way as to achieve compression from the abstractions you provide, ensure they work predictably in combination, and feel natural to use both semantically and syntactically (not hideously ugly or requiring pages of typing) with the rest of the language.

Ah, eloquently said. You must still be smarting from the pain of doing so in C++.


You don’t have to be so disciplined — or thorough — in trygve or the other DCI languages. They support an abstraction boundary so I don’t have to worry about the dispatching semantics, at all. Most of the time I don’t have to worry about them in C++, but there are times when the static type system prevents the kind of flexibility that true OO messaging supports (e.g. things akin to duck typing).

Whilst C++ does require that the classes of objects that can play a role be subtypes of the declared role player type (not duck typed), I don’t think I’ve come across a situation where this causes a problem. There is an argument that the requirement to make this explicit through an abstract base type is a positive.

That is the it-depends-what-side-of-the-bed-I-get-out-of-in-the-morning factor. But it does become a problem in “edge cases,” such as when you have to use MI to allow an object to play multiple roles. The fact that the typing is not duck typing, but standard static typing, really screws you on incrementality (as someone who builds build environments, you should know this.) If you create a Context-like construct to affect the intentionality of grouping related “roles,” the syntax becomes abhorrent.

I tried this stuff for years — it is even in the original DCI book — and I eventually gave up on it.


The trygve language is really about ten times as complex as it seems at first glance, and I take a lot of satisfaction from having been able to hide that complexity from its users. If you can imagine it, it’s probably possible… 

Having built more than one C++ equivalent I think I can :).

Are we talking Turing machines again?

There are those who maintain that asynchrony (in one’s mental model) is essential to messaging and to OO. I tend to disagree, but given the Simula legacy of Smalltalk I’d sure like to hear Alan speak explicitly to this. He seems to never speak to the issue, almost as though he were avoiding it.

It would be interesting from a historical point of view. Some of the early OO talks describe biological inspirations which, to me, seem necessarily asynchronous.

The asynchronous aspects become essential in some domains, at which point you have no option but to integrate them in some way.

Yes, and that is a very interesting point. The rub is that it takes much more than objects to solve it.

One thing I forgot to mention about our task methods is that they were atomic: run-to-completion. They could not block. That created a paradigm where semaphores were unnecessary. Either you institute something like run-to-completion semantics or you create the need for something like semaphores. Much of the semantics of Simula 67 was indeed in support of queues and semaphores (and it was only parallel and not asynchronous).

Your description of C++/P and Tasks resonates here.

Our current approach is to integrate both synchronous and asynchronous aspects in such a way that the two can be combined smoothly. You need to be aware of what is going on because they have different underlying models, but the code looks and feels very similar. You aren’t locked into one or the other.

I think that’s what we did in C++/P.


Part of this is to make objects/contexts feel more autonomous. Even some of the synchronous code feels asynchronous (and can become so if necessary as the code evolves).

I think that was the root of the 1980s rhetoric about this. But it’s a misleading illusion.

Both synchronous and asynchronous messaging bring their own distinct problems and complexities. You need facilities to mitigate each so that you can use whichever best suits each part of the problem. I think you need both.

C++/P just designed many of these problems out of the way with a suitable computational model. Telecommunications had been at this game of asynchronous computation for quite a few years (a century or so) and we kind of knew what we were doing. CS and university types get lost in their academic formalisms, and accidental complexity blooms like flowers in the spring.


The event/continuation mechanism (another form of dispatch) I talked about back in January is a form of control abstraction that can provide very high levels of compression of control flow code.

It is all those mechanisms that are necessary to support DCI's flexibility. I call that object-oriented programming, and the mechanisms that support the dispatching, I call messaging. 
 
Oh, I’ll buy that as a classification.

And you can’t do it eloquently in C++.





James O Coplien

unread,
Dec 5, 2023, 5:56:25 AM12/5/23
to object-co...@googlegroups.com


On Dec 5, 2023, at 03:43, Matthew Browne <mbro...@gmail.com> wrote:

I simply meant the public methods of an object

Show me the interface of an object — in the code.

DCI is about understanding the code: that is its stated purpose.

You are welcome to your mental models of the objects. While DCI code supports a good mental model of objects, I think that the interface of an object in C++ is no different than the interface of a trygve object. There is nothing interesting about that.

James O Coplien

unread,
Dec 5, 2023, 5:57:16 AM12/5/23
to object-co...@googlegroups.com
On Dec 5, 2023, at 03:43, Matthew Browne <mbro...@gmail.com> wrote:

 would have thought you were basically saying that a system is only object-oriented if it has emergent behavior—if it were not for your more recent messages.

Well, that would fit with Kay’s characterization of the Internet as the only OO system in existence.

James O Coplien

unread,
Dec 5, 2023, 5:59:24 AM12/5/23
to object-co...@googlegroups.com
On Dec 5, 2023, at 03:43, Matthew Browne <mbro...@gmail.com> wrote:

if I understand correctly then there are different levels of messaging we are talking about here. What makes the self-organization example special is that it's deliberately setting up objects in a way that's similar to how cells work in a biological organism. 

There is something very deep going on here that I can’t yet get my mind around.

Delicious. Let me noodle on it a bit.

James O Coplien

unread,
Dec 5, 2023, 6:01:13 AM12/5/23
to object-co...@googlegroups.com


On Dec 5, 2023, at 05:38, Matthew Browne <mbro...@gmail.com> wrote:

Julia is fully object oriented. The core of OO is not to write a dot between an object and an associated function but that one has virtual functions. Multiple dispatch provides just that.

Now, enter Dominic, who will say you can do that in C++ with the Visitor pattern :-)

Matthew Browne

unread,
Dec 5, 2023, 7:54:36 AM12/5/23
to object-co...@googlegroups.com
On 12/5/23 5:53 AM, James O Coplien wrote:
Is this what you were alluding to when you said that messaging requires a MOP?

That is to say that the implementation layer, whether compiled, interpreted/run-time, or both requires reflective facilities?

Yup.

BTW for those following along... if you Google for "MOP", one of the things that comes up is "message oriented programming". Meta-Object Protocol also comes up of course. The above confirms that you were indeed referring to "Meta-Object Protocol". Might be obvious to most but I thought it would be good to point out (TBH I previously wasn't 100% sure if you were referring to Meta-Object Protocol or something else.)


James Coplien

unread,
Dec 5, 2023, 12:06:45 PM12/5/23
to object-co...@googlegroups.com


On Dec 5, 2023, at 13:54, Matthew Browne <mbro...@gmail.com> wrote:

 if you Google for "MOP", one of the things that comes up is "message oriented programming

Huh?

Dominic Robinson

unread,
Dec 5, 2023, 12:20:59 PM12/5/23
to object-co...@googlegroups.com
Awww. You beat me to it. I would have named the idiom as "double dispatch" though ;)

Like shooting fish in a barrel.

I liked CLOS' multi-methods for this back in the day.

James O Coplien

unread,
Dec 5, 2023, 12:28:21 PM12/5/23
to object-co...@googlegroups.com


On Dec 5, 2023, at 18:20, Dominic Robinson <d...@spikeisland.com> wrote:

I liked CLOS' multi-methods for this back in the day.

Yes. And note:

  • it was multiple dispatch and not just double dispatch;
  • it felt natural in CLOS and abhorrent in C++

I once was at a talk that Erich Gamma gave where he reflected on the two patterns they would remove from the book were they to do it again. One is singleton, because it leads to communication through global data. The other is Visitor, because “the only people who really use it are consultants when they are trying to impress their clients with how clever they are.”

My pasteurnet tool relies heavily on multiple dispatch, and it’s really ugly in Objective-C.

You can easily do double dispatch in trygve (one level of dispatch through the Role binding and another through inclusion polymorphism) (or, I guess, in any DCI language like Marvin, etc.) and it’s smooth as snot.
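For anyone rusty on the idiom Dominic and Cope are naming: the classic Visitor-style encoding of double dispatch can be sketched in Python as two chained virtual calls, the first selecting on the Shape and the second on the Renderer. (Class names are made up for illustration; trygve’s Role-based dispatch is a different mechanism.)

```python
# Double dispatch, Visitor style: the call site invokes one virtual
# method (selected on the Shape's concrete type), which calls back so
# the second selection happens on the Renderer's concrete type.

class Shape:
    def draw_on(self, renderer):
        raise NotImplementedError

class Circle(Shape):
    def draw_on(self, renderer):          # first dispatch: on Shape
        return renderer.draw_circle(self)

class Square(Shape):
    def draw_on(self, renderer):
        return renderer.draw_square(self)

class TextRenderer:
    def draw_circle(self, shape):         # second dispatch: on Renderer
        return "circle as ASCII art"
    def draw_square(self, shape):
        return "square as ASCII art"

print(Circle().draw_on(TextRenderer()))  # circle as ASCII art
```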

Dominic Robinson

unread,
Dec 5, 2023, 12:28:55 PM12/5/23
to object-co...@googlegroups.com
To agitate the noodles gently whilst they cook...

I found myself a little disappointed that a trygve implementation of  the self organization game contained only the one context. The game context has a number of data structures that capture aspects of the players' state and relationships. I was expecting to see contexts to capture the relationship between individual players and their targets.

Maybe this expectation comes from the more concurrent style of DCI that I have been writing which uses long lived contexts to capture ongoing inter-relationships/collaborations between objects.

How do you see this Cope?

Raoul Duke

unread,
Dec 5, 2023, 12:42:46 PM12/5/23
to object-co...@googlegroups.com
some random reactions $0.02:

* A pillar of anything should be developer experience DX like ux but for my day job coding. Double dispatch is not imhumbleo at all good dx. Multiple dispatch is also not good dx unless the static checkers are very good. That's because of the tension between scatter & gather. (Confessing, for me, C++ is pretty much never good dx; it might be the only tool to get the job done but that doesn't make it nice. True we must learn the lesson from c++ that ideally we must be developing in a multiparadigm supporting language, just wish for something with less infinite foot gun cruft.)

* Everything should start off in async by default and you should have to use special keywords to force any synchronous behavior. heh. 

* A single textual representation is a tyrannical dominant encoding/representation and our tools suck for not helping us be more freely powerfully flexible including scatter/gathering and diagramming (eg see Enso). shake fist at sky. 

* The most important thing is paying the bills. Which unfortunately means everybody cuts corners. I wish we thought the most important thing was communicating to other humans, such as my future self, or future other new maintainers. Then we would focus on how best to support mental model building.  (That's what i want from LLMs, is to have conversational programming documentation/development/refinement.)

* Oop in some views is better called the Actor/Erlang model, n'est-ce pas?





James O Coplien

unread,
Dec 5, 2023, 1:57:54 PM12/5/23
to object-co...@googlegroups.com
We teach the game as a use case — a simple set of instructions that takes about 15 seconds. 

It’s a simple use case.

A Context is more or less a use case.

That’s my mental model. Maybe yours is different. Try coding it up — nothing like reality to hone the thinking and to keep one honest.


James Coplien

unread,
Dec 5, 2023, 2:08:55 PM12/5/23
to object-co...@googlegroups.com

On Dec 5, 2023, at 18:42, Raoul Duke <rao...@gmail.com> wrote:

* A pillar of anything should be developer experience DX like ux but for my day job coding. Double dispatch is not imhumbleo at all good dx.

Multiple dispatch models several realistic phenomena.

The realistic hand-waving example that Bjarne used to give for object-oriented programming [sic] was drawing a Shape on a Window. You have a hierarchy of Shapes and a hierarchy of Windows… Well, you can probably guess what I would say next.
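What I would say next, sketched: with multiple dispatch, the drawing routine is selected on the concrete types of *both* the Shape and the Window, CLOS-multimethod style. A toy Python version using a type-pair registry (all class names are made up, and unlike real multimethods this lookup matches exact types only, ignoring inheritance):

```python
# Multiple dispatch over (Shape, Window): the draw() entry point looks
# up an implementation keyed on the concrete types of both arguments.

class Shape: pass
class Circle(Shape): pass
class Square(Shape): pass

class Window: pass
class BitmapWindow(Window): pass
class PostScriptWindow(Window): pass

_draw_methods = {}

def draw_method(shape_type, window_type):
    """Register an implementation for a (shape, window) type pair."""
    def register(fn):
        _draw_methods[(shape_type, window_type)] = fn
        return fn
    return register

def draw(shape, window):
    # Dispatch on the type pair -- exact types only in this toy version.
    fn = _draw_methods[(type(shape), type(window))]
    return fn(shape, window)

@draw_method(Circle, BitmapWindow)
def _(shape, window):
    return "rasterize circle"

@draw_method(Circle, PostScriptWindow)
def _(shape, window):
    return "emit PostScript arc"

print(draw(Circle(), PostScriptWindow()))  # emit PostScript arc
```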


 C++ is pretty much never good dx;

UX is all about habit formation. The best tool is the one you know. I am probably no longer one of the top 10 C++ people in the world, but once was, so it’s in my blood. If it has foibles I can’t see them. Your experience may be different.


* Everything should start off in async by default and you should have to use special keywords to force any synchronous behavior. heh. 

Bullfeathers. I can think of nothing that is a worse fit for how the human mind works. Me, I use mainly my mind when I program. Maybe you’re one of those TDD guys who just do and do and do instead of think… :-)


* A single textual representation is a tyrannical dominant encoding/representation and our tools suck for not helping us be more freely powerfully flexible including scatter/gathering and diagramming (eg see Enso). shake fist at sky. 

Same for language and its limitations for written and spoken text. But society, and maybe even evolution, have shaped our minds so as to be optimized for such representations. Alternatives have never fared well.


* The most important thing is paying the bills. Which unfortunately means everybody cuts corners. I wish we thought the most important thing was communicating to other humans, such as my future self, or future other new maintainers. Then we would focus on how best to support mental model building.  (That's what i want from LLMs, is to have conversational programming documentation/development/refinement.)

Even when young I would never have said that the most important thing is paying the bills. Look how I’m wasting my time right here :-)


* Oop in some views is better called the Actor/Erlang model, n'est pas?

You might take some time to learn about elephants and blind men.

Matthew Browne

unread,
Dec 5, 2023, 9:54:14 PM12/5/23
to object-co...@googlegroups.com
On 12/5/23 2:08 PM, James Coplien wrote:
* Everything should start off in async by default and you should have to use special keywords to force any synchronous behavior. heh. 

Bullfeathers. I can think of nothing that is a worse fit for how the human mind works. Me, I use mainly my mind when I program. Maybe you’re one of those TDD guys who just do and do and do instead of think… :-)

I agree that "everything should start off in async by default" is going too far, but consider how in node.js, all I/O operations are async (I think the same might be true in Go). This led to some unnecessary complexities (callback pyramids) when using earlier versions of JS, but now that we have async/await, I don't find that it generally makes the code much more difficult to write or understand, and the async I/O certainly makes the system much more efficient.
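The flattening I’m describing shows up in any async/await language; a minimal sketch of the same shape in Python’s asyncio (the fetch functions are made-up stand-ins for real async I/O):

```python
import asyncio

# With async/await the I/O-bound steps read top-to-bottom like
# synchronous code, replacing the nested-callback pyramid.

async def fetch_user(user_id):
    await asyncio.sleep(0)          # stands in for a network round-trip
    return {"id": user_id, "name": "Ada"}

async def fetch_orders(user):
    await asyncio.sleep(0)
    return [f"order for {user['name']}"]

async def main():
    user = await fetch_user(42)     # no callback nesting
    orders = await fetch_orders(user)
    return orders

print(asyncio.run(main()))  # ['order for Ada']
```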

The task methods in C++/P you mentioned sound interesting. I suppose there are multiple effective ways of reducing async complexity from the programmer's point of view, although of course when debugging etc. it's important to understand what's actually going on.

James Coplien

unread,
Dec 6, 2023, 7:41:59 AM12/6/23
to object-co...@googlegroups.com
In the old days (geez, I’m starting to sound like an old fart) we used simple, direct solutions. The modern era, taking a false cue from divide et impera, got out their parallelism knife and started dividing up the world into “autonomous” parts. This divide et impera is based on the rather arbitrary and somewhat silly notion that we can manage change locally. The canonical example is that Stack should be encapsulated so we can change its data structure and algorithms from a linked list to a simple array without the client having to know.

So today we have microservices, agents, and a host of other pieces of garbage that give the illusion of asynchrony, or which add layers of complexity across essentially distributed systems. Telecom survived for a century without such crap using simple, straightforward approaches.

One example is time scheduling (though I haven’t heard anyone go on about that in the past five years or so). People feel that they need process-like things so that specific tasks get their fair share of the processor. That in turn introduces synchronization. So we ended up with pseudo-asynchrony that allowed each process (or SOA service or CORBA object or task or microservice or whatever) to believe it had its own processor, and they interacted with queues and messages and the secret handshakes of P and V and a whole empire of celebrated CS technology.

What we did in the old switching systems was just to put all processing inside of one giant loop that took work from five queues named A, B, C, D, and E. To get work done, you put the work on a suitable queue. The loop visited the A queue twice as often as the B queue; the B queue twice as often as the C queue; and the C queue twice as often as the D queue. So you did an A task, then B, then A, then C, then A, then D, then A, then B. Normally you could do this cycle of visitation in less than 10 ms, and the leftover time was used for E queue tasks, which were for things like long-running audits. If the switch got busy then the cycle would be more and more full; if the ABACABAD cycle approached 10 ms, you’d restart the cycle at the beginning and the lower-priority tasks would have to wait. There were other kinds of work-shedding that the switch could do if the E cycle went to zero.
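The five-queue loop can be sketched as a toy Python model (the 10 ms time budget and the work-shedding are omitted, and all names are illustrative):

```python
from collections import deque

# One program counter visits the A..D queues in the ABACABAD pattern;
# leftover time in each cycle goes to the background E queue. Each
# popped task runs to completion -- nonblocking, no semaphores.

queues = {name: deque() for name in "ABCDE"}
VISIT_ORDER = "ABACABAD"   # A twice as often as B, B twice as often as C
log = []

def post(queue_name, task):
    queues[queue_name].append(task)

def run_one_cycle():
    for name in VISIT_ORDER:          # priority by visit frequency
        if queues[name]:
            queues[name].popleft()()  # run to completion
    while queues["E"]:                # leftover time: long-running audits
        queues["E"].popleft()()

# Load the queues so one cycle drains them exactly in visit order.
for name, count in [("A", 4), ("B", 2), ("C", 1), ("D", 1)]:
    for _ in range(count):
        post(name, lambda n=name: log.append(n))

run_one_cycle()
print("".join(log))  # ABACABAD
```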

This strategy is formally mathematically analyzable for network traffic engineering and for performance engineering and queue sizing for each switch. Overload is quantifiably detectable and the mitigations are provably sound.

One program counter and no asynchrony.

Try doing that in a priority-driven, pre-emptive resumption scheduler. (And if you don’t know what that is, it’s what most contemporary operating systems use.) Wait states. Queues. Semaphores.

This is just one strategy; run-to-completion C++/P tasks are yet another. (Note the similarity in that each of the ABACABAD tasks was nonblocking).

You may come back and argue that today’s world is no longer essentially sequential but that it is rather a multiprocessor world. First, telecom was there in the multiprocessor arena long before many of us were born. Second, we handled those things simply. And successful strategies still handle those things simply today. HTTP protocols are all send-and-forget: you *can* get a return value with stuff like client-side pull, but it is fragile and is rarely used. Its user experience in a world where network delays can be seconds or minutes is absolutely horrid. (I every now and then run into apps that assume instantaneous network connectivity, especially from Apple. And the phone app you use here in Denmark to plan train travel uses autocompletion to guess what destination you have in mind when you are typing: the round-trip time to echo individual characters and to effect the autocompletion is on the order of seconds. It’s unusable. But those TDD bigots testing on their gigabit LANs never see this reality. Just screw the users so I can be a good programmer accountable for my work and only for my own work.)

Imagine if every Internet HTTP transaction needed a level-4 acknowledgment. (This also has something to do with the idiocy of the OSI stack, which I covered in an earlier post.)

This trend to separate areas of accountability (often wrongly touted as a separation of concerns) curiously correlates with a broad cultural migration to rugged individualism as opposed to teamwork or social cooperation. The famed social mores of Gen Y and Gen Z show an ever-downward trending level of social interaction (due in part, oddly enough, to the computerization of society). Conway runs rampant here and the architectural rhythms are dutifully following cultural norms. As someone who has explored the benefits of teamwork, I find this frightening.

This separation causes other problems. Spotify is proud of having divided its product up into 20 or so parts that synchronize with each other once a year (no, I’m not making this up). But each has its own UI and UX memes, really bogging down the user experience. (I just spent a week with John Pagonis, Greek UX expert, and we spent some time talking about this.)

I recently stumbled across something stunning in systems thinking from Russ Ackoff. He maintains — correctly, I think — that system adaptability is proportional to the cohesion of a system. Cohesion (as defined by Constantine) is inversely proportional to decoupling. If you are going to make a system change you want to be able to understand and assess the impact of the change; coupling helps you do that. If an error shows up in a microservice, you don’t have a clue about where to start looking. Perhaps the hardest problem in programming is tracking (or even reproducing) bugs in a distributed system — and we celebrate distribution and asynchrony as though they were good things. We’re idiots in doing so.

And, indeed, in a complex system, locality falls apart. A telecom system has to be able to detect single- and double-bit memory parity errors, and must be able to recover from the single-bit ones. First, that requires redundant information, so almost no commercial data structure library will work in a fault-tolerant system. Second, it requires cross-auditing strategies that correlate data across multiple data structures at a very low level (literally, the bit level) — that code does not belong in Stack alone. If Stack changes, that code changes. The whole basis of encapsulation is largely a psychological illusion.

Adding a virtual machine or computational model that introduces pseudo-asynchrony in a single threaded machine gives the illusion of autonomy and independence, but in fact adds accidental complexity that stymies adaptability. And the name of the game in software is adaptability.

If you look at where software dollars go, the lion’s share probably goes into defence, military, and telecommunications software — all fault-tolerant systems. You find them so reliable because they throw away naive CS principles and do things like this, as scientists. (Video games are one anomaly here in terms of revenues generated.)

So, in summary, you’ve made your own damned problems in embracing these frameworks. I’m sure it feels good — maybe because the folks who love it are nerds and it’s so techie with its secret handshakes and everything, or maybe it’s the Gen Z thing, or maybe because technology pushers needed something new to push and their market is always on the lookout for shiny new toys. To quip a local remark here, those things don’t pay the bills — not the psychological or sociological bills, let alone the economic ones.

Just today’s rant from an old man.



Raoul Duke

unread,
Dec 6, 2023, 9:57:21 AM12/6/23
to object-co...@googlegroups.com
> Just today’s rant from an old man.

hear hear.

resonates with what i encounter in so-called modern software development, hence my desire for the mental model to be a priority. 

David Leangen

unread,
Dec 6, 2023, 3:55:28 PM12/6/23
to object-co...@googlegroups.com


This trend to separate areas of accountabilty (often wrongly touted as a separation of concerns) curiously correlates with a broad cultural migration to rugged individualism as opposed to teamwork or social cooperation.

Great observation! This resonates.


Matthew Browne

unread,
Dec 6, 2023, 10:34:24 PM12/6/23
to object-co...@googlegroups.com
On 12/6/23 7:41 AM, James Coplien wrote:
I recently stumbled across something stunning in systems thinking from Russ Ackoff. He maintains — correctly, I think — that system adaptability is proportional to the cohesion of a system. Cohesion (as defined by Constantine) is inversely proportional to decoupling.

I think it could be helpful to be specific here about what type of coupling this is. I'm guessing you're referring to coupling within a module rather than between modules (unless you meant to say "inversely proportional to coupling" instead of "decoupling"?). In a cohesive module, the functionality is all closely related responsibilities, so I suppose that in a typical case you could end up with a lot of interdependencies between the internal functions of your module. But by increasing cohesion, the coupling between modules should go down because you're no longer depending on so many other modules—the functions you need to accomplish your goal are no longer scattered all over the codebase in different modules.

This also reminds me of how in DCI, Contexts are highly cohesive—you can read the whole use case in one place. This is one of my favorite benefits of DCI. And it does it without abandoning the idea of multiple objects working together, thanks to roles. So you get the benefits of locality without creating a large, unwieldy object.


Perhaps the hardest problem in programming is tracking (or even reproducing) bugs in a distributed system — and we celebrate distribution and asynchrony as though they were good things. We’re idiots in doing so.

A possible example of this: Netflix "boasts over a thousand microservices." "By decomposing its monolithic application into smaller, specialized services, Netflix achieved a level of flexibility, scalability, and continuous deployment that was not possible with a monolithic architecture. Today, Netflix’s microservices architecture empowers its engineering teams to work independently, iterating quickly, and innovating at a rapid pace." (source).

I happened to already know from blog posts, etc. that Netflix is using Apollo Federation to give them a universal GraphQL API. So basically, Netflix split up their system into thousands of microservices that need to communicate over a network, only to be stitched back together again via Apollo Federation, including what I imagine must be rather complicated solutions for joining related data back together.

I don't know the details of their system, but when I learned about this my first impression was that there must be some middle ground between one giant monolith and thousands of microservices that would be a better solution.

Raoul Duke

unread,
Dec 6, 2023, 10:54:57 PM12/6/23
to object-co...@googlegroups.com
it is common for people in the ui world to start off with some
single-threaded thing with an event loop, see e.g. javascript, and
then eventually they get into adding async to their language "because
networking (at the very least)" and then they find out that their
functions now have monadic color, and it is a pain in the butt to
rework something from being sync to async, because it can easily
bubble all the way up the call stack. i think that is an indication of
the general inevitability of wanting async in many fields of software
development, and one way to deal with the coloring is to start off in
the "other" function color from the get-go. the kopi language for
example https://mike-austin.com/react-desktop/. ideally the concurrent
stuff would also strive to avoid all the usual train wrecks, so it
might push for deterministic dataflow, for example. "doctor it hurts
when i start off in a sync way of thinking then try to move to
concurrency and keep making dumb mistakes like race conditions and
deadlocks..."

Raoul Duke

unread,
Dec 6, 2023, 10:56:58 PM12/6/23
to object-co...@googlegroups.com
> In a cohesive module, the functionality is all closely related responsibilities

like an object :-)

Matthew Browne

unread,
Dec 6, 2023, 11:06:03 PM12/6/23
to object-co...@googlegroups.com
On 12/6/23 10:56 PM, Raoul Duke wrote:
>> In a cohesive module, the functionality is all closely related responsibilities
> like an object :-)

Yeah, I was deliberately using the word "module" because the concept
applies at multiple levels. A "module" could be an object, or something
bigger like a package.


Raoul Duke

unread,
Dec 6, 2023, 11:23:18 PM12/6/23
to object-co...@googlegroups.com
> >> In a cohesive module, the functionality is all closely related responsibilities
> > like an object :-)
> Yeah, I was deliberately using the word "module" because the concept
> applies at multiple levels. A "module" could be an object, or something
> bigger like a package.

i once had a cool perl thing i wrote and shipped. it was a standard
hot mess with some global variables. then i wanted to do that thing
only even more so! more than once in the program! but it was global
hell! so i ended shoving the original code into a class and making
more than one instance of it. converting spaghetti code with globals,
to macaroni code with instance fields that all the methods could, of
course, see, but nicely safely encapsulated. and an object is like a
closure cf. Qc Na and his student Anton. modules in some languages are
singletons in a way. modules in some languages (ocaml etc. i guess)
are more able to be instantiated i think. heh tho i never really
groked ocaml modules. there's such an interesting/confusing sliding
range of ways that things are "modularized" when you look across all
the languages and history. mostly it just all ends up confusing me.

and sort of make me wonder, beg the question why/what are we trying to
modularize in the first place.

James Coplien

unread,
Dec 7, 2023, 11:22:12 AM12/7/23
to object-co...@googlegroups.com


On Dec 7, 2023, at 04:34, Matthew Browne <mbro...@gmail.com> wrote:

But by increasing cohesion, the coupling between modules should go down because you're no longer depending on so many other modules—the functions you need to accomplish your goal are no longer scattered all over the codebase in different modules.

This is a widely held fallacy. Coupling and cohesion are independent metrics. For a module with a given cohesion (the degree to which there are references to symbols within that module from within that module) I can arrange for any range of coupling to other modules from zero to infinity.

For a fixed taxonomy of nodes in a subject graph that I am trying to map onto minimally connected nodes of a smaller graph (modularization), the problem is arbitrary. I can maximize cohesion and minimize coupling by 1. finding a node of the subject graph that has the smallest number of connections to the rest of the graph; 2. mapping that node onto one node of the smaller graph; and 3. mapping all the other nodes onto another node of the smaller graph. The smaller graph has two nodes, one with maximal cohesion, and the graph has provably minimal coupling.
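The two-node construction above can be checked mechanically; a small Python sketch with a made-up subject graph (cross-module coupling for the trivial split equals the isolated node’s degree, the minimum over nonempty splits):

```python
# Isolate the least-connected node of the subject graph in one module
# and put everything else in the other. The example graph is made up.

edges = {("a", "b"), ("b", "c"), ("c", "d"), ("b", "d")}

def degree(node):
    """Number of subject-graph edges touching this node."""
    return sum(node in edge for edge in edges)

def coupling(module):
    """Number of edges crossing the module boundary."""
    return sum((u in module) != (v in module) for u, v in edges)

nodes = {n for edge in edges for n in edge}
loner = min(nodes, key=degree)     # fewest connections: "a" in this graph
print(loner, coupling({loner}))    # a 1
```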

James O Coplien

unread,
Dec 7, 2023, 11:23:17 AM12/7/23
to object-co...@googlegroups.com


On Dec 7, 2023, at 04:56, Raoul Duke <rao...@gmail.com> wrote:

In a cohesive module, the functionality is all closely related responsibilities

like an object :-)

no.

James Coplien

unread,
Dec 7, 2023, 11:24:13 AM12/7/23
to object-co...@googlegroups.com


On Dec 7, 2023, at 05:05, Matthew Browne <mbro...@gmail.com> wrote:

A "module" could be an object, or something bigger like a package.

“Module” in CS is a source language concept.

It is not about objects.

James O Coplien

unread,
Dec 7, 2023, 11:27:59 AM12/7/23
to object-co...@googlegroups.com


On Dec 7, 2023, at 05:22, Raoul Duke <rao...@gmail.com> wrote:

and sort of make me wonder, beg the question why/what are we trying to
modularize in the first place.

Yes, I think this is starting to ask the right question.

There are broader questions, important ones, like: Are modules disjoint?

There’s a fair amount of literature about this in the Design Movement; cf. Alexander’s “A City is Not a Tree.”

And the answer is: No.

That means that any nontrivial modularization will always be a subjective heuristic.

James Coplien

unread,
Dec 7, 2023, 11:37:59 AM12/7/23
to object-co...@googlegroups.com
On Dec 7, 2023, at 04:34, Matthew Browne <mbro...@gmail.com> wrote:

This also reminds me of how in DCI, Contexts are highly cohesive—you can read the whole use case in one place. 

Yes, the use case is constructed to build on only the items found within the Context.

What makes use cases cognitively cohesive? They are arbitrary groupings of interacting things that simultaneously may be interacting in hundreds of other use cases. The use case is just an arbitrary slicing (bounding) of considerations and interactions in a much larger field of interactions. It is an abstraction — a reduced, partial story.

Here are Contexts with no cohesion (by Constantine’s definition) and with high coupling:

int g, h;

Context C1 {
    Role R1 { g; }
    Role R2 { h; }
}

Context C2 {
    Role R3 { g; }
    Role R4 { h; }
}

Matthew Browne

unread,
Dec 7, 2023, 9:52:04 PM12/7/23
to object-co...@googlegroups.com
On 12/7/23 11:21 AM, James Coplien wrote:

On Dec 7, 2023, at 04:34, Matthew Browne <mbro...@gmail.com> wrote:

But by increasing cohesion, the coupling between modules should go down because you're no longer depending on so many other modules—the functions you need to accomplish your goal are no longer scattered all over the codebase in different modules.

This is a widely held fallacy. Coupling and cohesion are independent metrics. For a module with a given cohesion (the degree to which there are references to symbols within that module from within that module) I can arrange for any range of coupling to other modules from zero to infinity.

So just to be clear, is the Wikipedia page for cohesion incorrect when it states, "High cohesion often correlates with loose coupling, and vice versa"? They're only claiming a correlation, not a direct causal relationship that happens in all cases. In my original statement, "by increasing cohesion, the coupling between modules should go down," I should have said "tends to" instead of "should" go down. Or maybe I just misunderstood the whole relationship between cohesion and coupling? I thought I already understood the basics of coupling and cohesion, but lately I'm not so sure.

Sorry if I'm boring anyone with all this, but I think that this might still be on topic for this thread—an attempt to identify and understand some fundamental principles that could be considered part of an "OO & DCI 101" curriculum, even if some of them are more universal principles that aren't unique to OO & DCI. We've already identified messaging and of course mental models as critical concepts, and they're perhaps the most interesting and important. But I think coupling and cohesion should also be in the list. They seem much more useful for this purpose than the "four pillars" I referenced when starting this thread (some of which are outright misleading) or things like SOLID.

The original 1974 Structured Design paper [PDF available here] defines coupling as follows:

The fewer and simpler the connections between modules, the easier it is to understand each module without reference to other modules. Minimizing connections between modules also minimizes the paths along which changes and errors can propagate into other parts of the system, thus eliminating disastrous "ripple" effects, where changes in one part cause errors in another, giving rise to new errors, etc. [...]
The complexity of a system is affected not only by the number of connections but by the degree to which each connection couples (associates) two modules, making them interdependent rather than independent. Coupling is the measure of the strength of association established by a connection from one module to another. Strong coupling complicates a system since a module is harder to understand, change, or correct by itself if it is highly interrelated with other modules. Complexity can be reduced by designing systems with the weakest possible coupling between modules.

(The phrase "weakest possible coupling" seems perhaps overly strong, since as the paper itself acknowledges, calling a public interface is a rather different variety of coupling than calling "something inside" the module.)

I know that classes and OOP didn't exist in Constantine's heyday, but let's say we have two modules, each containing a single class:

Module 1:

class A { ... }

Module 2:

class B { ... }

If they're coupled, that means that one or more functions in class A calls function(s) in class B, and possibly the other way around too. This is why I was confused by your statement that cohesion is "inversely proportional to decoupling". If we translate structured design to modern class-based programming languages, then could the term "module" also be used (speaking at a different level) to mean a single method within a class? Or did you literally mean that making classes A and B more cohesive would usually result in higher coupling between module 1 and module 2? Maybe reading the rest of the paper will answer my question, but this is why I now find myself more confused about the term "coupling" and how it relates to cohesion.

I don't want to make it sound like I'm asking for tutoring, but if anyone has any comments to point me in the right direction, I will certainly appreciate it. And maybe it will be interesting for the discussion too.


James O Coplien

unread,
Dec 8, 2023, 3:15:06 PM12/8/23
to object-co...@googlegroups.com
Matt,


On Dec 8, 2023, at 03:51, Matthew Browne <mbro...@gmail.com> wrote:

The fewer and simpler the connections between modules, the easier it is to understand each module without reference to other modules.

To make a long story short, this is true, but irrelevant.

The longer story:

Coupling and cohesion should operationally aid the discovery process, e.g. for understanding how a *system* works or understanding how to isolate a bug in a running *system*. If I know that a fault occurred in module A, then I can start looking for the bug in A and its connected modules. If A has no syntactic coupling to other modules (as in Actors or Agents or microservices) I have no clue from the static architecture about where to start looking.

This is why Ackoff is likely right: that to understand a system for the purpose of adaptability, high coupling is better.

There is kind of a yin and yang here. Modules are easy to understand in part because their parts are highly coupled: I can reason about the module as a whole because the relationships are explicit. (Caveat: this probably holds practically up to some upper bound on the number of nodes in the graph.) You can’t change your hat and suddenly say that we can reason about the system of modules because it is a loosely coupled connection of highly cohesive parts. The key to the reasoning that must underlie adaptability is an explicit articulation of the dependencies between all system nodes.

If A is connected to B which is connected to C, and B and C are in the same module, I have difficulty reasoning about how the computation in C affects results in A (which it likely does). If A is directly coupled to C then the relationship is explicitly articulated. That dependency adds information. Higher coupling means more information. We apply the same principle at the system level that we tried to use to argue that we could understand code at the next level down, inside a module.

Cohesion is just internal coupling. Contexts provide a locus of potentially tight coupling between roles with very weak (if any) lexical coupling to the classes of role-players. So if a bug in some class instance method is contributing to a fault in role interactions, I get no help from the source code in finding the bug. So DCI adds another stipulation that we really dumb down the class methods in a way that essentially castrates their run-time coupling to role methods — mainly because it would be hopeless to understand the code if the class instance methods did anything interesting on which role methods depended. Role methods do of course depend on the results of these instance methods, and finding a bug in such an instance method, one that contributes to a fault in a role method, is NP-hard. Were there some kind of coupling, there would be at least some hope of connecting the dots to reason about the error.
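To make that shape concrete, a toy sketch (TypeScript rather than trygve, all names invented, and only a loose approximation of DCI — the roles here are just structural parameter types, not full role-method injection): the instance methods are deliberately dumb, the interesting interaction lives in the Context, and the Context never mentions the role-player's class by name.

```typescript
// A deliberately dumb data class: its instance methods do nothing
// "interesting" on which role methods could come to depend.
class Account {
  constructor(public balance: number) {}
  decrease(amount: number): void { this.balance -= amount; }
  increase(amount: number): void { this.balance += amount; }
}

// The Context couples the roles tightly to each other, but only
// structurally (not lexically) to Account: any object with the right
// methods can play a role, so the class name never appears here.
class MoneyTransferContext {
  constructor(
    private source: { decrease(amount: number): void },
    private destination: { increase(amount: number): void },
  ) {}

  // The interesting behavior — the role interaction — lives here.
  transfer(amount: number): void {
    this.source.decrease(amount);
    this.destination.increase(amount);
  }
}
```

Note that nothing in the source of `MoneyTransferContext` points back at `Account`; that absence is exactly the "very weak (if any) lexical coupling" being described, and it is why a fault in an instance method gives the reader no textual trail to follow.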

So I think that coupling and cohesion are barking up the wrong tree, and may in fact lead in exactly the direction you probably want to avoid pursuing.

The reason that the claim of the 1974 paper is irrelevant is that software engineering dollars don’t go into local concerns like understanding modules, but rather into reasoning about the interaction between modules. The semantics are in the coupling. Kay understood this and it is fundamental to what he meant by messaging. Adele Goldberg expressed it well when she said “‘it’ always happens somewhere else.” Kay explicitly says that the interesting stuff doesn’t happen inside the (sic., encapsulated) objects but rather between them. He explicitly invokes the Japanese concept of ma when describing where the action is. Ma is difficult to translate but can mean “nowhere” (the place that is no place) or “space-time” or “interval” or “between.” That’s coupling: the thing (object) itself is about cohesion. So the interesting information is in the coupling. That’s where deep understanding is necessary, and the per-module understanding that owes to its high internal coupling (sic., cohesion) does nothing to help that.

I’ll have to take this up with Larry, but he would probably just disagree with Alan Kay. Larry and I have talked much about OO before and he has some pretty sobering insights and cautions, most of which ring true.

Matthew Browne

unread,
Dec 8, 2023, 11:03:05 PM12/8/23
to object-co...@googlegroups.com

Thanks for the explanation, that makes sense.

the per-module understanding that owes to its high internal coupling (sic., cohesion)

I think cohesion is still a useful concept, even if it's synonymous with internal coupling (the word "coupling" is overloaded enough as it is). But your framing in terms of coupling helped me understand how the concepts scale up and down (low level <-> high level).

software engineering dollars don’t go into local concerns like understanding modules, but rather in reasoning about the interaction between modules
Yes, the real challenge is understanding a whole system and how all the moving parts work together, which is why this is the more interesting part of the passage I quoted earlier:
The complexity of a system is affected not only by the number of connections but by the degree to which each connection couples (associates) two modules, making them interdependent rather than independent.

"Independent" might be too strong of a wrong word there, but this is how I understand it: lots of connections and lots of messaging might be exactly what we need for many kinds of problems we want to solve with our applications, but we don't want more connections than that or a major breakdown of encapsulation, leading to a tangled mess. This reminds me of Einstein's quote (or rather probably more of a paraphrasing), "Make things as simple as possible, but no simpler." A certain degree of coupling is needed to avoid ignoring essential complexity. And maybe some systems should follow in the footsteps of emergent behavior in nature for the same reason—honoring the essential complexity. But I think in other contexts that would simply be over-engineering, and only get in the way of understanding what's happening at run-time.
--
You received this message because you are subscribed to the Google Groups "object-composition" group.
To unsubscribe from this group and stop receiving emails from it, send an email to object-composit...@googlegroups.com.

James O Coplien

unread,
Dec 9, 2023, 5:48:54 AM12/9/23
to object-co...@googlegroups.com


On Dec 9, 2023, at 05:03, Matthew Browne <mbro...@gmail.com> wrote:

"Independent" might be too strong of a wrong word there, but this is how I understand it: lots of connections and lots of messaging might be exactly what we need for many kinds of problems we want to solve with our applications, but we don't want moreconnections than that or a major breakdown of encapsulation, leading to a tangled mess. This reminds me of Einstein's quote (or rather probably more of a paraphrasing), "Make things as simple as possible, but no simpler." A certain degree of coupling is needed to avoid ignoring essential complexity. And maybe some systems should follow in the footsteps of emergent behavior in nature for the same reason—honoring the essential complexity. But I think in other contexts that would simply be over-engineering, and only get in the way of understanding what's happening at run-time.

Even though this is all kind of hand-wavy, I tend to agree.

I think we haven’t yet raised the real key concern, which is the difference between essential and accidental coupling. Ackoff talks about the essential and non-essential parts of a system; there is also essential coupling and accidental coupling.

If you have a given graph of symbols and symbol references, you can take your knife and carve modules out of the space. Now we can evaluate coupling and cohesion. There exists a theoretical ideal partitioning of that graph with minimal coupling (it’s an enumerable NP-complete problem to solve, so it’s actually not *just* theoretical) under some engineering rules of thumb such as the average size of the modules. The goal is to manage that coupling (NOT to reduce it: reducing coupling is evil.) I view cohesion (which is easy for our powerful right brains to perceive) as a heuristic that can point the way to that optimal coupling, as you pointed out in an earlier mail, capitalizing on the notion that coupling and cohesion tend to trade off against each other. I don’t think there is a big prize for cohesion because small things can be understood in their own right, anyhow. The power supporting adaptability lies in coupling.
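As a toy illustration of the carving (TypeScript, with made-up symbols): the same symbol-reference graph, partitioned by two different knives, yields different coupling counts — and the crossings beyond the minimum are the accidental ones.

```typescript
// An undirected symbol-reference graph: each edge is a connection.
type Edge = [string, string];

const edges: Edge[] = [
  ["a1", "a2"], ["a2", "a3"], ["a3", "a1"], // one tight cluster
  ["b1", "b2"], ["b2", "b3"], ["b3", "b1"], // another tight cluster
  ["a1", "b1"],                             // one intrinsic connection
];

// Coupling of a partitioning = edges that cross a module boundary.
function coupling(es: Edge[], moduleOf: Record<string, string>): number {
  return es.filter(([x, y]) => moduleOf[x] !== moduleOf[y]).length;
}

// Knife 1 carves along the clusters; knife 2 carves arbitrarily.
const knife1 = { a1: "M1", a2: "M1", a3: "M1", b1: "M2", b2: "M2", b3: "M2" };
const knife2 = { a1: "M1", b2: "M1", a3: "M1", b1: "M2", a2: "M2", b3: "M2" };

coupling(edges, knife1); // 1 — only the intrinsic connection crosses
coupling(edges, knife2); // 5 — the four extra crossings are accidental
```

The graph itself never changes; only the partitioning does. The single crossing that survives the better knife is the essential coupling, and the rest is an artefact of how we chose to view the system.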

I have done a lot of work (years of it) in this area in the domain of logic design. When you build a computer you have to place the integrated circuits on logic boards and then connect the logic boards together. Connections on a single board are reasonable (though there are a zillion strategies to improve the connectivity there) and connections between boards are expensive. There are scores of algorithms to manage the coupling and cohesion of logic gates, and more so the integrated circuits encapsulating them, to minimize the coupling between logic boards.

So the goal is to choose the right knife to carve up the space in a way that minimizes coupling between modules. Different knives create different partitionings, and each partitioning has its own degree of coupling. Choosing the wrong knife can elevate the coupling. That’s what I mean by accidental coupling: I might get high coupling as an artefact of the partitioning method I use. That coupling is not intrinsic to the system, but arises by how we choose to view the system.

The mantra of high-cohesion-low-coupling is designed to minimize this accidental coupling. It is not designed to minimize coupling. (That, of course, is software engineering heresy, and is one of the reasons that I largely discount software engineering.) I see computer science people make this mistake again and again.

Said another way, the goal of modularization is to optimize essential coupling.

Now, all of this is pretty simplistic if you base it on lexical coupling. Other kinds of coupling matter and are in fact more common. There is extremely tight coupling in DCI between a Role and its role-player, and therefore between a Role and the class of its Role-player. It is not lexical coupling — not at all, that's the whole point — but rather technological and semantic coupling.

And now things get interesting: Perhaps modularization is the wrong formalism. Almost by definition, modules are non-overlapping. In good DCI everything overlaps everything else. An object can play multiple roles, for example, so there is no real semantic or technological modularity. Hmm.

(As a segue here, computer science types have always been pursuing this “separation of concerns” to optimize modularity. A good deal of work was put into C++ to ensure that a base class and its derived class were modularized in that a derived class could not access the private data of its base class. This sustained the illusion that they were independent. Perry and Kaiser came along and formally proved that this independence is only an illusion. In fact, if we go back to the halting problem you can trivially prove that no part of any computer *system* can be proven to be independent of any other part of that system, except in degenerate cases.)

Anyhow, back to DCI. What makes it “work” is that we can express the intense coupling between modules in terms of highly cohesive modules (Contexts, classes, and Roles) that give the illusion of modularity by maintaining lexical decoupling. The real complexity is still there, and it arises in fora such as our discussions about what to do if an object plays multiple roles, and a method of the same name appears in more than one of these roles. The trygve language has a type system that is chock full of engineering rules that tend to block off the more egregious of these infractions, but I think the naked paradigm out-of-the-box is a bit like a Swiss army chain saw. If you’re not careful you’ll end up cutting off a major limb.

In my old age I’m finding that almost all paradigms are like that.

Example: Even the poor misguided folks who did ObjectTeams were blind to the damage they were doing and defended their approach to the death until I gave them the Dijkstra algorithm homework assignment, for which DCI gave the right answer and for which ObjectTeams simply gave the wrong answer. That problem arose out of an attempt to craft a paradigm using a knife that split things up in the wrong way (it went too far by giving Roles a run-time identity that was separate from the identity of the Role-player). They, too, were caught up in the grade-school-level understanding of coupling and cohesion, and the results were fatal. It’s easy to get misled and get stuck there.

Matthew Browne

unread,
Dec 9, 2023, 8:20:40 AM12/9/23
to object-co...@googlegroups.com
On 12/9/23 5:48 AM, James O Coplien wrote:
There is extremely tight coupling in DCI between a Role and its role-player, and therefore between a Role and the class of its Role-player. It is not lexical coupling — not at all, that's the whole point — but rather technological and semantic coupling.

What exactly do you mean by "lexical coupling"? Just literally two pieces of code that exist in the same source file or module?

And if we have MyRole.someDataMethod() in a Context, what kind of coupling is that (assuming that someDataMethod is a method in a class defined in a different file)?


James O Coplien

unread,
Dec 9, 2023, 8:49:32 AM12/9/23
to object-co...@googlegroups.com
Hey, Matt,

On Dec 9, 2023, at 14:20, Matthew Browne <mbro...@gmail.com> wrote:

What exactly do you mean by "lexical coupling"? Just literally two pieces of code that exist in the same source file or module?

Just a reference to a symbol (lexeme) that defines some entity, by virtue of the fact that the symbol appears elsewhere. You can add some bells and whistles that avoid false aliasing by taking scope and other qualifications into account. In short, it’s coupling that you can deduce from a rather casual analysis of the source code.

And if we have MyRole.someDataMethod() in a Context, what kind of coupling is that (assuming that someDataMethod is a method in a class defined in a different file)?


That depends on the programming language.

In Smalltalk there may be none (at least, no lexical coupling: technical and semantic coupling are always possible except in the most anally type-checked languages).

In C++, it is lexical coupling (although through a daisy chain of indirections).

In trygve, it is a potential technical and behavioral coupling.
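As a rough TypeScript analogy (names invented), the distinction might look like this: one function's source names the class it depends on, the other mentions only a structural type, leaving the run-time coupling invisible to a casual reading.

```typescript
// Imagine this class is defined in another file/module.
class Ledger {
  constructor(private entries: number[]) {}
  sum(): number { return this.entries.reduce((a, b) => a + b, 0); }
}

// Lexical coupling: the symbol `Ledger` appears in the caller's
// source, so the dependency is visible from a casual scan of the code.
function totalLexical(l: Ledger): number {
  return l.sum();
}

// No lexical coupling to Ledger: only a structural type is mentioned.
// Any object with a sum() method can play the role, so the source gives
// no clue about which class's instances actually show up at run time —
// the technological/semantic coupling remains, but it is invisible.
function totalStructural(l: { sum(): number }): number {
  return l.sum();
}
```

Both functions do exactly the same thing at run time; only the textual trail from caller to callee differs, which is the property a "casual analysis of the source code" can or cannot exploit.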

Matthew Browne

unread,
Dec 10, 2023, 11:18:32 PM12/10/23
to object-co...@googlegroups.com
On 12/9/23 5:48 AM, James O Coplien wrote:
> I think we haven’t yet raised the real key concern, which is the
> difference between essential and accidental coupling. Ackoff talks
> about the essential and non-essential parts of a system; there is also
> essential coupling and accidental coupling.

Yes, I think this is definitely the key thing as far as coupling is
concerned. And it seems to me that there is also essential and
accidental cohesion.

Matthew Browne

unread,
Dec 10, 2023, 11:22:51 PM12/10/23
to object-co...@googlegroups.com
A very interesting and important video by Alan Kay:

https://www.youtube.com/watch?v=ktPCH_p80e4

He talks about Sketchpad (in detail) which was so ahead of its time and
had a huge influence on him, how it relates to the Dynabook vision, and
so much more (that's just the tip of the iceberg).

James O Coplien

unread,
Dec 11, 2023, 7:50:03 AM12/11/23
to object-co...@googlegroups.com


On Dec 11, 2023, at 05:18, Matthew Browne <mbro...@gmail.com> wrote:

And it seems to me that there is also essential and accidental cohesion.

Fascinating concept!

Matthew Browne

unread,
Dec 11, 2023, 8:18:40 AM12/11/23
to object-co...@googlegroups.com
Well, I wonder what Larry Constantine would say...he might say that "accidental cohesion" is just misunderstanding what he meant by cohesion and that cohesion means grouping code that "essentially" belongs together into a single module (or single source code construct). But given how many people use the term "cohesion" to simply mean grouping code, even if it's a stretch to say that it's "closely related code", I think it's a useful concept regardless.


Egon Elbre

unread,
Dec 11, 2023, 8:53:16 AM12/11/23
to object-composition
In linguistics there's a concept of "coherence", and also "local coherence", "global coherence".

That description of "essential cohesion" has hints of "coherence" to it.

Raoul Duke

unread,
Dec 11, 2023, 12:50:52 PM12/11/23
to object-co...@googlegroups.com
coherence etc. - anything about design - seems to me to be very contextual. see why people tried to invent AoP. or come up with use cases. or DCI contexts. etc. so we need some day star trek tools & paradigms that let us view things from different arbitrary perspectives, anything less will always have a degree of "tyranny of the dominant paradigm" i feel. 

James O Coplien

unread,
Dec 11, 2023, 2:05:17 PM12/11/23
to object-co...@googlegroups.com


On Dec 11, 2023, at 18:50, Raoul Duke <rao...@gmail.com> wrote:

so we need some day star trek tools & paradigms that let us view things from different arbitrary perspectives, anything less will always have a degree of "tyranny of the dominant paradigm" i feel.

Perfectly expressed, IMHO.

Matthew Browne

unread,
Dec 11, 2023, 9:09:26 PM12/11/23
to object-co...@googlegroups.com
I was looking back at the Artima article from 2009. It had been so long that I had forgotten how many of the things we've been discussing here it touches on.

For example:
We were taught that system behavior should "emerge" from the interaction of dozens, hundreds or thousands of local methods. The word of the day was: think locally, and global behavior would take care of itself. Anyone caught writing a method that looked like a procedure, or caught doing procedural decomposition, was shunned by the OO community as "not getting it."

And yet there's still something valuable there in the idea of "emergence". I think it comes down to the paradigm (or paradigms) best-suited to a particular problem, as Raoul was saying. I don't think it's a coincidence that all the most popular programming languages today (for all their faults) are multi-paradigm in one way or another.



Quang

unread,
Dec 12, 2023, 5:59:42 PM12/12/23
to object-composition
This video made me stop believing in OOP (around 1:18:00): https://youtu.be/QjJaFG63Hlo?si=IlHnOPOnRa348qnQ&t=4683
(Alan is in London and still active on Quora, but his answers to OOP-related questions are still super vague)

Unless someone can fully explain what was going on in Alan Kay's head at that time. My understanding (very very vague) was:
- His team was experimenting and failed to find a way to implement messaging (inspired by biological cells)
- His vision was about reuse (not use cases). Objects live forever on the internet, and anyone can just pull them in and start to create their own use cases. The vision was too big to get a proper implementation.

My stupid take: if the inventor could not implement it properly, why does the entire industry keep using it? :) (researching should be fine)

--quang


Quang

unread,
Dec 12, 2023, 6:21:07 PM12/12/23
to object-composition
I also found this talk interesting, because it goes against "current" recommendations (small classes with a lot of interfaces, aka loose coupling). His recommendation is: thin interface + deep implementation (strong coupling). He uses unix IO commands as an example. And it looks like the usage of Unix commands is the closest practical thing to DCI (each command plays a role and we can compose them (pipe) to implement a use case).
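For what it's worth, the pipe analogy can be sketched in a few lines of TypeScript (names invented): every command shares the same thin interface, and a use case is just a composition of commands.

```typescript
// Each "command" has a thin interface: string in, string out.
type Command = (input: string) => string;

// pipe composes commands left to right, like the shell's `|`.
const pipe = (...cmds: Command[]): Command =>
  (input) => cmds.reduce((data, cmd) => cmd(data), input);

// Deep implementations hidden behind the same thin interface.
const grep = (pattern: string): Command =>
  (input) => input.split("\n").filter((l) => l.includes(pattern)).join("\n");

const sortLines: Command = (input) => input.split("\n").sort().join("\n");

// Compose a use case from commands, each playing a role in the pipeline.
const result = pipe(grep("err"), sortLines)("warn: x\nerr: b\nerr: a");
// result === "err: a\nerr: b"
```

Each command knows nothing about its neighbors in the pipeline; the use case lives entirely in the composition, which is the DCI-like flavor of the analogy.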

Matthew Browne

unread,
Dec 16, 2023, 3:55:06 PM12/16/23
to object-co...@googlegroups.com

From reading and watching videos with Alan Kay (including the one he recorded this year I posted in a previous message in this thread), one thing that's clear is that he was interested in a lot of different things, and not only in computer science but in many disciplines including psychology, biology, theater and the arts in general. I think that's one of the reasons that he was so innovative. It also seems that he might have had multiple goals or visions that were somewhat separate or distinct, although perhaps they could still be complementary in some sense (I'm not sure). In particular I'm thinking of (1) the Dynabook vision and using Smalltalk to help with childhood education, and (2) the biological vision / objects as cells or "mini computers," ultimately leading to emergent behavior. (Cope alluded to these two visions in his recent talk.)

So I think part of the challenge of understanding the early history of OO, its success or failure, and what it means for us today, is mistakenly believing that it was about only one thing, or that we have to either embrace everything Kay was working on or forget the whole thing. I think it would be more constructive to look at the various concepts one at a time and then try to construct a new vision that's informed by both the early history and what we know today, and updated to acknowledge the technological changes that have happened. And we also have to acknowledge that there are many different kinds of systems, and different paradigms work better in different contexts.

It's also good to remember that there was a whole community working on these ideas in the early days, not only Kay (I don't know a lot of the history, but Dan Ingalls is an example). Although he wasn't there from the very beginning, that also includes Trygve and others at the time who did a lot of research and thinking about the best ways to use these new concepts. So there's a lot to draw on from both early history and more recent experience.

Even for those who aren't in a position of teaching junior developers or university students, asking what are the real fundamental and most valuable concepts for someone learning programming today is a very useful exercise. It has certainly helped me realize some gaps in my own knowledge, and also reflect on concepts that I understand intuitively from programming for a long time but that I don't think are necessarily captured well in the way programming is commonly taught today. I'm still reflecting on everything we've discussed here. I think the goal should be a conceptual framework that new programmers (and open-minded programmers of all levels) will be able to connect with and deeply understand, and that will improve the quality and maintainability of the systems we write. I think teaching the importance of mental models is an essential part of this. As for everything else we've discussed in this thread, including OO and messaging, there's definitely something there but it still needs to be better distilled and also contrasted with other paradigms that might be more appropriate in different situations. That's also true of DCI, which for all its value I don't see as necessarily a "universal" paradigm, mainly because I don't think there's any such thing.


James O Coplien

unread,
Dec 16, 2023, 3:58:32 PM12/16/23
to object-co...@googlegroups.com


On Dec 16, 2023, at 21:55, Matthew Browne <mbro...@gmail.com> wrote:

I think it would be more constructive to look at the various concepts one at a time and then try to construct a new vision that's informed by both the early history and what we know today, and updated to acknowledge the technological changes that have happened.

Been there, done that, bought the T-shirt. (See some of my patterns talks where this may be more obvious.)

I highly recommend it.

Caveat: it will take you a few years, or decades.

But it’s worth it.

James O Coplien

unread,
Dec 16, 2023, 3:59:58 PM12/16/23
to object-co...@googlegroups.com
On Dec 16, 2023, at 21:55, Matthew Browne <mbro...@gmail.com> wrote:

That's also true of DCI, which for all its value I don't see as necessarily a "universal" paradigm, mainly because I don't think there's any such thing.

I used to think that that should go without saying, but I was probably wrong. Now I would say that it doesn’t hurt to repeat it, and maybe even that it bears repeating.

Matthew Browne

unread,
Dec 23, 2023, 12:54:51 PM12/23/23
to object-co...@googlegroups.com

Hi Quang,
That's a very interesting video, thanks for sharing. I could definitely see some people misunderstanding and taking his advice to the extreme, but Ousterhout has some excellent points and I think the part about "deep classes" (not what it probably sounds like to most folks, or at least not what I expected) speaks to the same issues we were discussing here re: cohesion. I think many developers definitely take decoupling too far, and he does a good job of cautioning against that.

And his intro is spot on: as an industry, we are definitely lacking a set of basic principles that more than a small subset of people can agree on (based on proven success metrics) about how to design maintainable software...and (my two cents) the few principles we do seem to agree on often aren't well-understood. Sort of echoing what he said, it's surprising that this isn't viewed as a bigger issue by more industry and academic leaders. I guess "back to basics" software design stuff isn't the most sexy topic.

Matthew Browne

unread,
Dec 26, 2023, 7:24:35 AM12/26/23
to object-co...@googlegroups.com

P.S. I feel I should note that I just used the word "decoupling" in a way that I myself was just recently confused by ;)  By "taking decoupling too far," I meant splitting a system up into too many tiny modules/classes/etc. in a way that makes it difficult to follow system behavior.

Quang

unread,
Dec 26, 2023, 1:08:05 PM12/26/23
to object-composition
He now teaches software design at Stanford, so if his teaching goes well, it will have a big impact on the industry.