Hi Rick,
Many thanks for the questions and comments. Here are my comments:
First of all, with PocoCapsule, one does NOT need to recompile dynamic
proxies under configuration changes that only modify POCO invocation
parameter values. One only needs to rebuild new proxies for new POCO
invocation signatures involved in the new configuration. In my
opinion, this is not only desirable (no recompilation on parameter
value changes) but also acceptable (build new dynamic proxies for new
signatures). The assumption is that real world application
configurations should avoid applying new invocation signatures that
have never been tested before. This kind of usage scenario
automatically avoids the need for in-the-field recompilation after a
reconfiguration, because all the dynamic proxies the fielded
application will use should already have been generated and compiled
during QA testing, before the application is deployed.
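To make the distinction concrete, here is a minimal sketch (with
hypothetical names, not PocoCapsule's actual generated code) of what a
generated proxy for one constructor signature might look like.
Parameter values arrive as strings from the configuration at runtime,
so changing a value never touches compiled code; only a new signature
requires generating and compiling a new proxy:

```cpp
#include <cassert>
#include <cstdlib>
#include <string>

// A framework-agnostic POCO (plain old C++ object); name is hypothetical.
class Logger {
public:
    Logger(const std::string& name, int level) : name_(name), level_(level) {}
    const std::string& name() const { return name_; }
    int level() const { return level_; }
private:
    std::string name_;
    int level_;
};

// A sketched proxy for the signature Logger(const std::string&, int).
// The parameter values come from the descriptor at runtime as strings,
// so editing values in the descriptor never requires recompiling this.
// A different constructor signature, however, needs a different proxy.
void* create_Logger_string_int(const char* arg0, const char* arg1) {
    return new Logger(arg0, std::atoi(arg1));
}
```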
Secondly, other IoC containers today also offer static solutions
based on either programmatic/manual configurations (such as the
PicoContainer without the Nano) or metadata configurations (such as
the Spring Framework with Spring-Annotation). In these static
solutions, not only new POCO signatures but even parameter value
changes force recompilation. Although I am not keen on these solutions
(recompilation on value changes, in particular, is highly undesirable
in my opinion), I do believe that these well-accepted and even
enthusiastically pursued solutions reflect the fact that such
recompilation does not bother real-world applications.
This is not because the industry fails to recognize the "big"
disadvantage of the in-the-field recompilation required by these
popular solutions, but because the intention of an IoC framework is
not what you assumed, namely to avoid recompilation under
configuration changes (specifically, invocation signature changes). In
fact, IoC frameworks are mainly for:
    a) separating plumbing logic (component life-cycle controls,
wirings, initial property settings, etc.) from business logic, and
supporting framework-agnostic business-logic components,
    b) allowing users to set up (configure, deploy, etc.) an
application DECLARATIVELY (by expressing "what" it is like, rather
than the procedure of "how" to build it step by step), and
c) supporting the idea of software product lines (SPL) based on
reusable and quickly reconfigurable components and domain-specific
modeling.
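To illustrate points (a) and (b), here is a hedged sketch (class and
descriptor names are hypothetical, not from a real PocoCapsule
example) of a business component that stays completely
framework-agnostic, with the procedural setup it replaces and a
declarative descriptor shown as comments:

```cpp
#include <cassert>
#include <string>

// A business component with no framework dependencies at all:
// no base class to inherit, no macros, no container headers.
class GreetingService {
public:
    explicit GreetingService(const std::string& greeting)
        : greeting_(greeting) {}
    std::string greet(const std::string& who) const {
        return greeting_ + ", " + who;
    }
private:
    std::string greeting_;
};

// Procedural ("how") setup that the container replaces:
//
//     GreetingService* svc = new GreetingService("Hello");
//
// Declarative ("what") setup in a descriptor (hypothetical XML syntax,
// loosely in the spirit of IoC deployment descriptors):
//
//     <bean class="GreetingService">
//         <constructor-arg value="Hello"/>
//     </bean>
```

The component can be reconfigured (say, a different greeting) by
editing the descriptor alone, which is the declarative reuse the email
argues for.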
Whether an IoC framework avoids in-the-field recompilation when new
signatures appear in the declarative configuration descriptions is
merely a bells-and-whistles feature rather than a "great thing" (or a
"big disadvantage" the other way around). In PocoCapsule, generated
dynamic proxies are very small and require negligible recompilation
time for most applications, not to mention that:
a) on-the-field recompilation can largely be avoided if component
deployments have been pre-tested (as discussed in the beginning).
    b) this recompilation takes even less time than packaging
deployment descriptors (e.g., packaging them into .war/.ear/.zip
files).
Now, let's take a look at the "minor" disadvantages of the suggested
solution, which generates proxies for all classes in specified header
files:
1) More manual code fixes: I would suggest experimenting with some of
the relevant utilities, such as GCC-XML, on various header files on
different platforms (including Windows, various unix/linux variants,
VxWorks, Symbian OS, etc.). Because IoC frameworks do not (and should
not) prohibit users from using non-portable components, utilities that
parse header files have to either cope with non-portable header files
(including various platform-specific C++ extensions) or require users
to fix those header files manually before parsing. In the suggested
scenario, developers who only intended to configure the application at
a high level would have to apply more low-level code-fixing effort.
2) Bloated code generation and heavy runtime footprint: Based on
various application examples (see
http://www.pocomatic.com/cpp-examples#corba)
we compared PocoCapsule's generated proxy code to CERN Reflex, which
generates proxies for all classes in specified header files.
Typically, Reflex produces 10 to 1,000 times more code than is
actually needed for an IoC configuration. This redundant code consumes
megabytes of runtime memory (instead of a few, or a few tens of,
kilobytes). This is because in the suggested solution, one would have
to generate proxies for all classes that are implicitly included
(declared in other header files included by the specified ones),
proxies for all classes used as parent classes of other classes,
proxies for classes used as parameters of class methods, and so on.
Otherwise, it would merely be 50 yards vs. 100 yards; namely, one
would still have the "big" disadvantage of having to rebuild proxies
after all.
3) Human-involved, manually edited filters: Utilities such as GCC-XML
(and therefore CERN Reflex) allow one to filter out unwanted proxies
to reduce the size of the generated code. However, one would have to
edit the filter configurations manually. The consequence of applying
such filters is more code (or script) and more complexity to be
handled and maintained by hand. This immediately defeats the whole
point of using IoC frameworks.
4) Additional footprint for a runtime type system: To support OO
polymorphism (e.g. components that extend from interfaces) without
recompilation, simply generating all proxies is not sufficient. The
solution would also have to provide a runtime type system (plus
additional metadata). This would increase the application's runtime
footprint by roughly another megabyte.
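To show what a "runtime type system" entails, here is a minimal sketch
(all names hypothetical) of the kind of registry such a solution would
need: type names from a descriptor are mapped to factories, so the
container can construct a concrete class it only learns about at
runtime and hand it out behind an interface. Scaled to every class in
every header, this registration machinery and its metadata are where
the extra footprint comes from.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// A component interface and one concrete implementation (hypothetical).
struct Codec {
    virtual ~Codec() {}
    virtual std::string name() const = 0;
};

struct GzipCodec : Codec {
    std::string name() const { return "gzip"; }
};

// A minimal runtime type registry: descriptor strings -> factories.
class TypeRegistry {
public:
    void add(const std::string& typeName, std::function<Codec*()> factory) {
        factories_[typeName] = factory;
    }
    // Construct by name; returns nullptr for unregistered type names.
    Codec* create(const std::string& typeName) const {
        auto it = factories_.find(typeName);
        return it == factories_.end() ? nullptr : it->second();
    }
private:
    std::map<std::string, std::function<Codec*()>> factories_;
};
```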
5) Generic programming (GP) would be prohibited: As we know, C++
template specialization mandates recompilation. We can't have compiled
code that could be applied to all possible specializations. To ensure
no recompilation, the solution would have to accept a "minor"
disadvantage, namely prohibiting the use of GP. GP is used heavily in
many PocoCapsule examples (see the CORBA examples that use "native"
servant implementations). It significantly shortens the learning curve
of some middleware, such as CORBA (one no longer needs to learn POA
skeletons), simplifies application code, and supports legacy
components at a much lower refactoring cost.
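The point about templates can be seen in a few lines (a generic
illustration, not taken from a PocoCapsule example): every use of a
template with a new type argument makes the compiler emit a new
specialization, so there is no single compiled body that serves every
possible type a runtime descriptor might name.

```cpp
#include <cassert>
#include <string>

// A generic function template. largest<int> and largest<std::string>
// are two distinct compiled functions, each emitted only because a
// call with that type argument appears in the compiled source. A
// container that first learns the type from a descriptor at runtime
// has no compiled specialization to call without recompiling.
template <typename T>
T largest(T a, T b) {
    return a < b ? b : a;
}
```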
With all these "minor" disadvantages, what one would gain is a "big"
advantage that helps one shoot oneself in the foot: deploying an
application that involves wirings that have never been tested before.
Hope this clarifies things.
Ke