[Please do not mail me a copy of your followup]
OK, this is a long post that covers ideas from object-oriented design
and test-driven development.
Ian Collins <ian-...@hotmail.com> spake the secret code
<d0bfmq...@mid.individual.net> thusly:
>Mr Flibble wrote:
>> Dependencies on concrete classes are considered harmful and should be
>> replaced with dependencies on abstractions.
>
>By whom?
Well, I put Mr. Flibble in my KILL file when he couldn't control
himself in responding to religious flamewar threads, so I didn't see
the original post. I'm also unlikely to respond to anything else he
says on this thread, because he's staying in my KILL file.
However, it seems in this quoted statement Mr. Flibble is conflating
(confusing?) two different pieces of advice: first, the dependency
inversion principle (DIP), and second, the technique of dependency
injection.
"In object-oriented programming, the dependency inversion
principle refers to a specific form of decoupling software
modules. When following this principle, the conventional
dependency relationships established from high-level,
policy-setting modules to low-level, dependency modules are
inverted (i.e. reversed), thus rendering high-level modules
independent of the low-level module implementation details. The
principle states:
A. High-level modules should not depend on low-level modules. Both
should depend on abstractions.
B. Abstractions should not depend on details. Details should
depend on abstractions.
The principle inverts the way some people may think about
object-oriented design, dictating that both high- and low-level
objects must depend on the same abstraction."
<https://en.wikipedia.org/wiki/Dependency_inversion_principle>
This is the 'D' in the 'SOLID' mnemonic of object-oriented design
principles. (This is related to, but not identical to, the idea of
dependency injection in order to construct an object that depends on
another object, but we will get to that in the 2nd point below.)
<https://en.wikipedia.org/wiki/SOLID_(object-oriented_design)>
I have always interpreted this principle to be talking about "objects"
as compared to "value types". I'm using these terms in the same
sense as they are introduced in "Growing Object-Oriented Software,
Guided by Tests" by Freeman and Pryce <http://amzn.to/1Sjskpm>:
"Values and Objects
When designing a system, it's important to distinguish between
_values_ that model unchanging quantities or measurements, and
_objects_ that have an identity, might change state over time, and
model _computational processes_."
In a C++ program, value types are things like std::string and
containers like std::vector and so on.
matters or model a computational process. Suppose we have a class
representing a person in a payroll application. This is more
appropriately considered an object and not a value type. In that same
payroll application there may be a class called Money that is used to
represent payments between the employer and the employee. This is
more appropriately considered a value type. In C++ value types are
often called "concrete types", because they have no virtual methods
and don't derive from an interface. They aren't intended to be part
of a runtime polymorphic class hierarchy. The abstraction here is the
base polymorphic interface that all derived members of the hierarchy
implement in order to satisfy the Liskov Substitution Principle
(LSP), the 'L' in 'SOLID'.
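Here's a minimal sketch of the distinction, using hypothetical names
along the lines of the payroll example above:

  #include <string>

  // Money is a value type: concrete, no virtual functions, copied and
  // compared by value.  Two Money instances with the same amount are
  // interchangeable.
  class Money
  {
  public:
      explicit Money(long cents) : cents_(cents) {}
      long cents() const { return cents_; }
  private:
      long cents_;
  };

  inline bool operator==(Money lhs, Money rhs)
  { return lhs.cents() == rhs.cents(); }

  // Employee is an object: it has identity and state that changes over
  // time, so collaborators depend on this abstract interface rather
  // than on any concrete implementation.
  class Employee
  {
  public:
      virtual ~Employee() {}
      virtual std::string name() const = 0;
      virtual void pay(Money amount) = 0;
  };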
However, C++ also supports static polymorphism through templates.
In these cases, it is often intentional that there are no virtual methods.
The template mechanisms do not preclude the use of virtual methods as
well for a combination of static and dynamic polymorphism, but this
is a fairly advanced template design pattern. Here the polymorphism
is generally obtained through template parameters and not necessarily
through inheritance. The template arguments are assumed to be models
of a Concept <http://en.cppreference.com/w/cpp/concept>, such as a
Forward Iterator <http://en.cppreference.com/w/cpp/concept/ForwardIterator>.
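For instance, a little function template like this one (my own
example, not from any library) assumes only that its arguments model
Forward Iterator; there isn't a virtual function or base class in
sight:

  #include <cstddef>

  // Counts positions where an element equals its successor.  Holding
  // two live iterators into the sequence is what requires ForwardIt
  // to model Forward Iterator rather than just Input Iterator.
  template <typename ForwardIt>
  std::size_t count_adjacent_pairs(ForwardIt first, ForwardIt last)
  {
      std::size_t count = 0;
      if (first == last)
          return count;
      ForwardIt next = first;
      for (++next; next != last; ++first, ++next)
      {
          if (*first == *next)
              ++count;
      }
      return count;
  }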
In both cases, good design following DIP says that the details should
depend on abstractions. An object collaborates with other objects
through an interface so that both can vary independently without
impacting the other. In C++, a web of collaborating objects following
DIP gets the benefit of the "compiler firewall": changes to the
implementation behind an interface don't force its collaborators to
be recompiled, because they literally depend only on the abstraction
of the interface. You put the interface in one header file and the
implementation's declaration in another.
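A sketch of that separation, with hypothetical names:

  // logger.h -- the abstraction.  Collaborators include only this
  // header, so they recompile only when the abstraction itself
  // changes.
  #include <string>

  class Logger
  {
  public:
      virtual ~Logger() {}
      virtual void log(const std::string& message) = 0;
  };

  // file_logger.h -- one implementation's declaration.  Only the code
  // that actually creates a FileLogger includes this header, so edits
  // to FileLogger's internals don't ripple out to everything that
  // talks to a Logger.
  class FileLogger : public Logger
  {
  public:
      void log(const std::string& message) override;
  private:
      // representation details can change without touching logger.h
  };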
In the static polymorphism case, reusing the standard algorithms
doesn't require me to change my containers so long as they and their
associated iterators model the appropriate concepts used by the
iterator arguments to the algorithms. Similarly, I can write my own
algorithms that operate seamlessly on the standard containers without
those containers having been constructed with knowledge of my
algorithms. The key mechanism that makes this work is the iterators
which decouple the containers from the algorithms and vice-versa. The
iterators are the abstraction upon which both containers and
algorithms depend in order to cooperate with each other.
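As a quick illustration, this algorithm (essentially a hand-rolled
std::accumulate) knows nothing about any particular container, and
the standard containers know nothing about it; the iterators are the
only point of contact:

  // Written purely against the iterator abstraction.
  template <typename InputIt, typename T>
  T sum_values(InputIt first, InputIt last, T init)
  {
      for (; first != last; ++first)
          init = init + *first;
      return init;
  }

  // Works unchanged with any standard container's iterators:
  //   std::vector<int> v{1, 2, 3};
  //   std::list<int> l{10, 20};
  //   int a = sum_values(v.begin(), v.end(), 0);  // 6
  //   int b = sum_values(l.begin(), l.end(), 0);  // 30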
>> A class should not new its
>> member objects for example but rather should use a factory passed into
>> its constructor.
>
>Why?
This is a very Java-esque way of looking at things, but the same
technique can be applied in C++. It only makes sense for "objects"
and not for "value types". There is no reason to introduce a factory
of std::string, for instance, even if I create a value type that
creates std::string dynamically. (Usually a value type will use
composition instead of the heap, but a std::string is itself an
example of a value type that uses the heap for storage.)
So why introduce a factory for some type T instead of using 'new T'?
Let's look at it from the perspective of unit testing. When I'm unit
testing, I want to mock out the collaborators of the system under test
(SUT). With dynamic polymorphism, where the SUT (of type S)
dynamically creates instances of an object collaborator (of type T,
implementing some abstract interface I), how can I sense the
interaction with the dynamically created collaborator? If the
concrete type T is used
directly by S with 'new T', then I don't have a nice mechanism directly
in the language for substituting a mock of type I under the circumstances
of my test. S is really collaborating with I, the abstraction
implemented by T, but you can't new up an interface, you can only new
up an implementation of the interface, so it ends up being directly
coupled to T. Any time T, the implementation of I, changes, S is
forced to be recompiled even though the underlying collaboration --
the interface abstraction I -- didn't change. This is because we
aren't following the DIP and we have details depending on details.
However, if S is constructed by taking a factory that creates
instances of type T, then under test circumstances I can provide a
factory that provides mock instances of I and in production
circumstances I can provide a factory that creates instances of T that
implement the abstraction I.
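Here's a bare-bones sketch of that arrangement (all names are
hypothetical); the test-side factory would hand out mocks from
whatever mocking approach you prefer:

  #include <memory>
  #include <string>

  // I: the abstraction S really collaborates with.
  class Notifier
  {
  public:
      virtual ~Notifier() {}
      virtual void notify(const std::string& message) = 0;
  };

  // The factory abstraction injected into S's constructor.
  class NotifierFactory
  {
  public:
      virtual ~NotifierFactory() {}
      virtual std::unique_ptr<Notifier> create() = 0;
  };

  // S: creates its collaborators through the factory, never 'new T'.
  class AlarmSystem
  {
  public:
      explicit AlarmSystem(NotifierFactory& factory) : factory_(factory) {}
      void trigger()
      {
          std::unique_ptr<Notifier> notifier = factory_.create();
          notifier->notify("alarm!");
      }
  private:
      NotifierFactory& factory_;
  };

  // In production the factory returns the real implementation; under
  // test it returns mocks that record the messages they receive, so
  // the test can sense the interaction.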
With static polymorphism, there isn't any need for a factory because
the type implementing the abstraction is directly supplied by the code
instantiating the template. The expression 'new T' for some template
parameter T in a template function or class isn't directly coupling
the code to some specific implementation of T. In our test code we
simply need to make sure that the expression 'new T' makes sense for
our supplied mock type. Existing mock libraries let us write sensible
expectations for template arguments, even when new instances are
created dynamically, although it can get a bit tedious. There is
probably some work that could be done in existing mock libraries to
make this easier.
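Continuing the hypothetical sketch from above, the static
polymorphism version might look like this; because the instance is
created inside the template, a mock supplied as the template argument
typically has to record its calls somewhere the test can inspect
afterwards, which is where the tedium comes in:

  #include <memory>

  // The collaborator type is a template parameter; 'new NotifierT'
  // binds to whatever type the instantiating code supplies.
  template <typename NotifierT>
  class AlarmSystem
  {
  public:
      void trigger()
      {
          std::unique_ptr<NotifierT> notifier(new NotifierT);
          notifier->notify("alarm!");
      }
  };

  // Production:  AlarmSystem<EmailNotifier> alarm;
  // Under test:  AlarmSystem<RecordingNotifier> alarm; where
  // RecordingNotifier appends each message to, say, a static log that
  // the test inspects after calling trigger().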
So following the DIP implies that direct coupling of collaborating
object types leads you to want to introduce factory interfaces so that
instead of directly coupling the two implementations of objects you
can couple them through abstractions instead.
In a language like Java, which only supports dynamic polymorphism,
that means you don't allow 'new' when applied to object types. It is
still allowed for value types. In a language like C++, which fully
supports both static and dynamic polymorphism, you use the
appropriate mechanism at the appropriate time. Virtual functions and
dynamic
polymorphism are great for modeling abstractions and collaborating
interfaces. If you aren't dynamically creating collaborators, simply
use dependency injection for your constructors and create the objects
by passing them their collaborators. This technique is often used to
follow the DIP, but the principle and the technique are different.
<https://en.wikipedia.org/wiki/Dependency_injection>
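When the object isn't creating its collaborators dynamically, the
factory disappears entirely and plain constructor injection does the
job (again reusing the hypothetical Notifier interface from the
sketch above):

  class AlarmSystem
  {
  public:
      explicit AlarmSystem(Notifier& notifier) : notifier_(notifier) {}
      void trigger() { notifier_.notify("alarm!"); }
  private:
      Notifier& notifier_;  // depends only on the abstraction
  };

  // Production:  EmailNotifier email;    AlarmSystem alarm(email);
  // Under test:  RecordingNotifier mock; AlarmSystem alarm(mock);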
There is no need to use dependency injection for value types. I don't
need to sense how a string is used inside an object I'm unit testing,
I simply verify the object from the outside. For an instance of an
object type, I might ask it to do something and verify it from
externally visible properties of the object when I'm testing something
state-oriented. (If the state is only internally visible and doesn't
manifest itself in any way external to the object, then it doesn't
make sense to try to write a test for it.) If I'm testing an object
that models a computational process then I'm most likely interested in
how this object interacted with its collaborators. I can sense that
interaction through the use of mock collaborators.
When I'm testing a value type, it doesn't have any collaborators by
definition. Everything I can do to a value type can be observed
from the outside. Imagine yourself in the (unenviable) job of writing
a set of unit tests to verify the behavior of std::string. You can
cover every member function of std::string without knowing *anything*
about how the string is internally represented. All the required
semantics can be verified from its set of public member functions.
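A couple of such tests might look like this (using assert for
brevity):

  #include <cassert>
  #include <string>

  void test_append_grows_the_string()
  {
      std::string s("foo");
      s += "bar";
      assert(s == "foobar");
      assert(s.size() == 6);
  }

  void test_substr_copies_the_requested_range()
  {
      const std::string s("hello, world");
      assert(s.substr(0, 5) == "hello");
  }

  // Neither test knows whether the implementation uses the small
  // string optimization, a reference-counted buffer, or a plain heap
  // block; only the publicly visible behavior is verified.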
--
"The Direct3D Graphics Pipeline" free book <
http://tinyurl.com/d3d-pipeline>
The Computer Graphics Museum <
http://computergraphicsmuseum.org>
The Terminals Wiki <
http://terminals.classiccmp.org>
Legalize Adulthood! (my blog) <
http://legalizeadulthood.wordpress.com>