dynamic invocation proxies


rick.va...@gmail.com

Feb 13, 2008, 9:38:07 AM
to pococapsule
As far as I understand, these proxies depend on the
configuration file (e.g. setup.xml):

A quote from the documentation:

>> PocoCapsule uses an opposite approach. Instead of parsing component header files and generating
>> reflections for all encountered methods regardless, PocoCapsule's pxgenproxy utility
>> parses the actual application descriptions and only generates dynamic invocation proxies of newly
>> encountered IoC methods.
>> This scenario is a reverse of reflection and therefore is referred to as projection.

The documentation mentions several "minor disadvantages" of this
method.
I would think that the need to recompile the proxies when the
configuration file changes is a big disadvantage.
One of the great things about IoC is that the system can be configured
by "just" changing the configuration file. But now users also have to
recompile reflection proxies (?)

Maybe I didn't understand it completely...

Can someone explain this issue?
Can pxgenproxy generate reflection code for ALL classes in the
specified .h file(s)?



thanks
Rick

Ke Jin

Feb 13, 2008, 11:23:25 PM
to pococapsule
Hi Rick,

Many thanks for the questions and comments. Here are my comments:

First of all, with PocoCapsule, one does NOT need to recompile dynamic
proxies when configuration changes only modify POCO invocation
parameter values. One only needs to build new proxies for new POCO
invocation signatures introduced by the new configuration. In my
opinion, this is not only desirable (no recompilation on parameter
value changes) but also acceptable (building new dynamic proxies for
new signatures). The assumption is that real-world application
configurations should avoid using invocation signatures that have
never been tested before. This kind of usage scenario automatically
avoids the need for an on-the-field recompilation after a
reconfiguration, because all dynamic proxies the application will use
in the field should already have been generated and compiled during
QA tests, before deploying the application.
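To make the distinction concrete, here is a hypothetical configuration sketch (the element and attribute names are illustrative only, not PocoCapsule's actual schema):

```xml
<!-- Changing only a value (e.g. 8080 to 9090) needs NO proxy rebuild:
     the invocation signature setPort(int) is unchanged. -->
<bean class="HttpServer">
  <method name="setPort"><arg type="int" value="8080"/></method>
</bean>

<!-- By contrast, adding a call with a signature never used before,
     e.g. setHost(const char*), introduces a new invocation signature,
     so its dynamic proxy must be generated and compiled once. -->
```

In other words, the recompile boundary is the set of invocation signatures, not the set of parameter values.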

Secondly, other IoC containers today also offer static solutions
based on either programmatic/manual configuration (such as
PicoContainer without Nano) or metadata configuration (such as
the Spring Framework with Spring-Annotation). In these static
solutions, not only new PO*O signatures but even parameter value
changes force recompilation. Although I am not keen on these
solutions (the recompilation on value changes, especially, is highly
undesirable in my opinion), I do believe that these well-accepted and
even enthusiastically pursued solutions reflect the fact that such
recompilation does not bother real-world applications.

This is not because the industry does not recognize the "big"
disadvantage of the on-the-field recompilation required by these
popular solutions, but because the intention of IoC frameworks is not
what you thought, namely to avoid recompilation under configuration
changes (specifically invocation signature changes). In fact, IoC
frameworks are mainly for:

   a) separating plumbing logic (component life-cycle control,
wiring, initial property settings, etc.) from business logic, and
supporting framework-agnostic business logic components;
   b) allowing users to set up (configure/deploy/etc.) an
application DECLARATIVELY (by expressing "what" it is like, rather
than the procedure of "how" to build it step by step);
   c) supporting the idea of software product lines (SPL) based on
reusable and quickly reconfigurable components and domain-specific
modeling.

Whether an IoC framework avoids on-the-field recompilation when new
signatures appear in the declarative configuration descriptions is
merely a bells-and-whistles feature rather than a "great thing" (or a
"big disadvantage" the other way around). In PocoCapsule, generated
dynamic proxies are very small and require negligible recompilation
time for most applications, not to mention that:

   a) on-the-field recompilation can largely be avoided if component
deployments have been pre-tested (as discussed at the beginning);
   b) this recompilation needs even less time than packaging deployment
descriptors (such as packing them into .war/.ear/.zip files).

Now, let's take a look at those "minor" disadvantages of the
suggested solution that generates proxies for all classes in the
specified header files:

1) More manual code fixes: I would suggest trying some of the
relevant utilities, such as GCC-XML, on various header files on
different platforms (including Windows, various Unix/Linux flavors,
VxWorks, Symbian OS, etc.). Because IoC frameworks do not (and should
not) prohibit users from using non-portable components, utilities
that parse header files have to either deal with non-portable header
files (including various platform-specific C++ extensions) or require
users to fix those header files manually before parsing. In the
suggested scenario, developers who only intended to configure the
application at a high level would have to apply more low-level
code-fixing effort.

2) Bloated code generation and heavy runtime footprint: Based on
various application examples (see http://www.pocomatic.com/cpp-examples#corba),
we compared PocoCapsule-generated proxy code to CERN REFLEX, which
generates proxies for all classes in the specified header files.
Typically, REFLEX produces 10 to 1,000 times more code than is
actually needed for an IoC configuration. This redundant code eats
megabytes of runtime memory (instead of a few, or a few tens of,
kilobytes). This is because in the suggested solution one would have
to generate proxies for all classes that are implicitly included
(declared in other header files that are included by the specified
header files), proxies for all classes used as parent classes of
other classes, proxies for classes used as parameters of class
methods, etc. Otherwise, it would merely be 50 yards versus 100
yards; namely, one would still have the "big" disadvantage of having
to rebuild proxies after all.

3) Manually edited filters: Utilities such as GCC-XML (and therefore
CERN REFLEX) allow one to filter out unwanted proxies to reduce the
size of the generated code. However, one would have to edit the
filter configurations by hand. The consequence of applying such
filters is more code (or script) and more complexity to be handled
and maintained manually. This immediately defeats the whole point of
using IoC frameworks.

4) Additional footprint for a runtime type system: To support OO
polymorphism (e.g., components that extend interfaces) without
recompilation, simply generating all proxies is not sufficient. The
solution would have to provide a runtime type system (and additional
metadata as well). This would increase the application's runtime
footprint by roughly another ~1 MB.

5) Generic programming (GP) would be prohibited: As we know, C++
template specialization mandates recompilation. We can't have
compiled code that covers all possible specializations. To ensure no
recompilation, the solution would have to accept another "minor"
disadvantage, namely prohibiting the use of GP. GP is used heavily in
many PocoCapsule examples (see the CORBA examples that use "native"
servant implementations). It significantly shortens the learning
curve of some middleware, such as CORBA (one no longer needs to learn
POA skeletons), simplifies application code, and supports legacy
components at a much lower refactoring cost.

With all these "minor" disadvantages, what one would gain is a "big"
advantage that helps one shoot oneself in the foot: deploying an
application that involves wirings that have never been tested
before.

Hope this clarifies things.
Ke

rick.va...@gmail.com

Feb 14, 2008, 4:47:53 AM
to pococapsule
Hi Ke,

Many thanks for your quick and elaborate answer; it makes things much
clearer and helps in understanding the whole concept.
Getting such replies greatly improves confidence in the project.

I fully agree that new components should be tested before they are
used.

Our situation is more as follows:

The application under development can be seen as a "processing
pipeline" in which the nodes should be "pluggable",
meaning that different implementations should be available to users.

As I understand it now, a possible "workflow" for new node
implementations could be:

- implement and test the new node (class);
- generate a reflection proxy for it using the pxgenproxy tool.

If a user wants to set up a specific pipeline configuration, he can
now define a "specific" configuration file and reference the
proxy libraries for all the nodes that are used. No need for
recompilation.

Does it make sense to use it this way, or am I overlooking
something?

Rick

Ke Jin

Feb 15, 2008, 1:50:33 AM
to pococapsule
Hi Rick,

You are very welcome. See my inline comment.

Regards,
Ke

On Feb 14, 1:47 am, rick.van.haa...@gmail.com wrote:
> Hi Ke,
>
> many thanks for your quick and elaborate answer, it makes things much
> clearer and helps in understanding the whole concept.
> Getting such replies greatly improves the confidence in the project.
>
> I fully agree that new components should be tested before they should
> be used.
>
> Our situation is more as follows:
>
> the application under development can be seen as a "processing
> pipeline" where the nodes should be "pluggable",
> meaning that different implementations should be available to users.
>
> As i understand it now, a possible "workflow" for new node-
> implementations could be:
>
> - implementation an testing the new node (class).
> - generate a reflextion proxy for it using the proxygen tool.
>
> If a user want to setup a specific pipeline configuration, he can now
> define a "specific" configuration file and reference the
> proxy-libraries for all the nodes that are used. No need for
> recompilation.
>

The usual application development scenario (the one used by most
PocoCapsule examples) is:
- implement the business logic components;
- write the deployment/configuration description (in XML);
- generate dynamic proxies from the above description and compile the
generated code;
- let PocoCapsule set up the application based on the description
(using the dynamic proxies above).

An application can be reconfigured by modifying the deployment/
configuration description. If such a modification only changes
configuration parameter values, then nothing more is needed and one
just lets PocoCapsule set up the application again using the new
description. Otherwise, if the modification introduces additional
IoC invocation signatures, then one needs to repeat the third step
above, namely generate and compile the dynamic proxies again before
redeployment.

Another application development scenario (illustrated by a new
example, the pipeline example
<http://www.pocomatic.com/docs/cpp-examples/basic-ioc/pipeline>) is:
- implement the business logic components;
- write (skeleton) deployment/configuration descriptions (in XML
files). Here, "skeleton" means that the actual configuration
parameter values are less important. This configuration can also be
used for test deployments;
- generate dynamic proxies from the above (skeleton) descriptions;
- compile these proxies and build them together with the business
logic implementations into self-contained component packages (dynamic
libraries or object files).

Then, independently, a user can set up an application from the above
components by:

- writing the application description;
- letting PocoCapsule set up the application accordingly.

In this scenario, the on-the-field compilation of dynamic proxies is
largely avoided.

Regards,
Ke