I don't understand the sentence:
"but the usual rule concerning RTTI is more or less the same as with
goto statements". Can anyone explain it?
Furthermore, can we talk about the misuses of RTTI?
Regards,
OuFeRRaT,
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]
The usual rule concerning goto is "don't use it unless you _know_ it's
the clearest way to achieve what you're trying to do here". The
phrase you cited suggests that the same applies to RTTI.
> Furthermore, can we talk about the misuses of RTTI?
Well, the most typical anti-pattern is a chain of if-else statements
with typeid in the conditions:
class object { public: virtual ~object() {} /*...*/ };  // needs a virtual
                                // function so typeid(*obj) sees the dynamic type
class foo : public object { /*...*/ };
class bar : public object { /*...*/ };
...
void do_something(object* obj)
{
    if (typeid(*obj) == typeid(foo))
    {
        foo* f = static_cast<foo*>(obj);  // don't shadow the class name
        // do foo-specific processing here
        ...
    }
    else if (typeid(*obj) == typeid(bar))
    {
        bar* b = static_cast<bar*>(obj);
        // do bar-specific processing here
        ...
    }
}
The usual advice in some cases is to use polymorphism - virtual
functions (and, if needed, abstract base classes) instead.
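For concreteness, a minimal sketch of that rewrite (class and member names are invented here, not from the original snippet): each class carries its own behaviour behind a virtual function, and the typeid chain disappears.

```cpp
#include <string>

// Base class declares the operation; no typeid chain needed.
class object {
public:
    virtual ~object() {}
    virtual std::string process() const = 0;  // each subclass supplies its behaviour
};

class foo : public object {
public:
    std::string process() const { return "foo-specific processing"; }
};

class bar : public object {
public:
    std::string process() const { return "bar-specific processing"; }
};

// The former do_something() shrinks to a single virtual call.
std::string do_something(const object& obj) {
    return obj.process();
}
```

Adding a new subclass now requires no change to do_something() at all.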
> I don't understand the sentence:
> "but the usual rule concerning RTTI is more or less the same as with
> goto statements" can anyone explain it?
Just like you should avoid goto statements in favour of regular
loops, you should avoid RTTI in favour of other language features
that can do the same.
Anyway, RTTI has few legitimate uses. The visitor pattern, for
example, is a much better approach when you want to dispatch on the
type of a variable over multiple cases.
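A bare-bones sketch of the visitor pattern being alluded to (all names hypothetical): the type test becomes double dispatch, so handling a new case means adding a visit overload rather than editing a typeid chain.

```cpp
#include <string>

class circle;
class square;

// The visitor declares one overload per concrete type.
struct shape_visitor {
    virtual ~shape_visitor() {}
    virtual void visit(const circle&) = 0;
    virtual void visit(const square&) = 0;
};

struct shape {
    virtual ~shape() {}
    virtual void accept(shape_visitor& v) const = 0;
};

struct circle : shape {
    void accept(shape_visitor& v) const { v.visit(*this); }  // dispatch on dynamic type
};

struct square : shape {
    void accept(shape_visitor& v) const { v.visit(*this); }
};

// A concrete visitor that records which type it saw.
struct name_of : shape_visitor {
    std::string name;
    void visit(const circle&) { name = "circle"; }
    void visit(const square&) { name = "square"; }
};

std::string type_name(const shape& s) {
    name_of v;
    s.accept(v);
    return v.name;
}
```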
Goto is a low-level control, and most possible applications of it are
better represented with higher level constructs such as for, while,
switch etc. Writing shorter functions helps.
The author is saying that RTTI (presumably meaning typeid and
dynamic_cast<>) is also low-level, and most possible applications of it
are better represented with higher-level constructs such as virtual
functions, interfaces/multiple inheritance, the Visitor type-discovery
pattern, and so forth. Sometimes we can use templates or other mechanisms
to avoid losing track of the exact type in the first place, making RTTI
redundant.
I broadly agree. However, one difference is that goto is a local
construct, within a single function, and we can rewrite the function
not to use it without affecting anything else. Whereas getting rid of
RTTI often means revising the architecture across many different
classes. Getting it right the first time often requires careful design
and foresight.
If some of the code is not under our control, then the ideal design may
not be possible, and even if all the code is under our control, an
RTTI-less design may mean more coupling, or at least less locality,
than we want. E.g., we may not want to inflict the burden of supporting
a particular virtual function, or the Visitor pattern, on every class
in a hierarchy.
So this local versus non-local issue makes it not a perfect analogy, and
in practice I find I use RTTI far more often than goto. However, the
author's point remains. Goto and RTTI are not the tools to keep at the
very top of your toolbox. Try the alternatives first.
-- Dave Harris, Nottingham, UK.
Historically C++ did not have RTTI and, IIRC, Bjarne Stroustrup was
originally against having it. However, it was introduced along with
dynamic_cast in response to the fact that developers (skilled ones) of
a good number of libraries were synthesising the functionality of RTTI
manually (having hierarchies in which every class had a member that
identified the class). That:
a) shows that the functionality was useful;
b) was fragile (easy for someone adding a class to the hierarchy to
forget to add the required member).
It was at that point (around 1994) that RTTI was added.
>
> Furthermore, can we talk about the misuses of RTTI?
Yes, and I am sure that, like quite a number of other specialist
features of C++, it is abused by those who do not understand the
alternatives (another example of this is exceptions).
Goto statements are evil. This rule looks harsh, but it pretty much
boils down to this short phrase, following a 1968 analysis by E. W.
Dijkstra. The expanded view is:
* Goto is redundant. You can reshape every "goto" construction into a
"while". However, this requires non-automatable, manual work.
* A subset of software programs can be proved correct by providing
loop invariants and convergence properties. This is tedious, manual
proof work and is done for scientific purposes only or as part of
military-grade software certification. Goto statements, especially to
labels in the backward direction, completely spoil any attempt at a
formal correctness proof.
* Program code using a lot of goto statements is called "spaghetti
code", because the lines of execution intertwine like Italian pasta on
a plate. While the pasta is delicious, such code can be very difficult
to understand.
In limited situations, goto statements can improve readability, make
more compact or more performant code, e.g. when exiting nested loops
(I'd do that with an exception, though). I leave this kind of
optimization to swashbuckler developers with 20+ years of experience
(they know when "evil" is "good") and never teach it to beginners
(they would predictably produce unusable code).
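For the nested-loop case just mentioned, here is a hedged sketch of both forms (the search function is made up for illustration); factoring the loops into their own function usually removes the need for the goto entirely.

```cpp
// Version with goto: one jump exits both loops at once.
bool contains_goto(const int grid[3][3], int target) {
    bool found = false;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (grid[i][j] == target) { found = true; goto done; }
done:
    return found;
}

// The usual alternative: factor the loops into a function and return early.
bool contains_return(const int grid[3][3], int target) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (grid[i][j] == target)
                return true;
    return false;
}
```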
> Furthermore, can we talk about the misuses of RTTI?
Usually, RTTI is not needed, e.g.:
class A { public: virtual ~A() {} };  // must be polymorphic for
                                      // typeid(*a) and dynamic_cast to work
class B : public A {};
A* a = new B();
// DON'T
B* b;
if ( typeid( *a ) == typeid( B ) ) {
    b = static_cast<B*>( a );
}
else {
    std::cout << "oops, wrong type!\n";
}
...
// BETTER
B* b = dynamic_cast<B*>( a );
if ( b == 0 ) {
    std::cout << "oops, wrong type!\n";
}
...
This is of course not a complete argument, but maybe a starting point?
best,
Michael.
It's not only that you might need 20 years experience to make the
judgment call, it probably also takes at least 20 years to find the
one place where a goto is the best solution.
It's that rare!
Bo Persson
> MiB wrote:
> >
> > In limited situations, goto statements can improve readability, make
> > more compact or more performant code, e.g. when exiting nested loops
> > (I'd do that with an exception, though). I leave this kind of
> > optimization to swashbuckler developers with 20 yrs+ experience
> > (they know when "evil" is "good") and never teach it to beginners
> > (will produce unusable code predictably).
> >
>
> It's not only that you might need 20 years experience to make the
> judgment call, it probably also takes at least 20 years to find the
> one place where a goto is the best solution.
>
> It's that rare!
>
Off the cuff: code generated by programs, and code translated from a
different language [porting], come to mind. It doesn't require 20
years to figure out that the best way to initially port working
'spaghetti' code is to start with a working 'spaghetti' version that
emulates the original language.
You could also argue that it is the worst way to port spaghetti-code,
since the probability of anyone ever untangling it is significantly
smaller if the code works. :-)
--
Erik Wikström
and it failed horribly to provide what was needed
what was needed was enough introspection to automate serialisation
i.e.
- a way to automatically get an identifier for an object
that _uniquely_ identified its type across runtimes
- a means of enumerating
and iterating
over the member variables and up the inheritance hierarchy
basically the structure needed to fill a factory
and prevent duplication of effort
(and possible divergence errors)
neither of these objectives were fulfilled
there is never a reason for dynamic_cast in properly architected code
(though dealing with libraries with poor architecture
sometimes makes it necessary)
i'm still not positive any of this will be possible in c++0X
it's really quite sad
since serialisation frameworks are error prone without it
and usually require code generation to make up for c++'s lacks
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
galathaea: prankster, fablist, magician, liar
>> off the cuff code generated by programs, and code translating code from
>> a different language [porting] come to mind and don't require 20 years
>> to figure out the best way to initially port working 'spagetti' code is
>> to start with a working 'spagetti' version that emulates the original
>> language,
>
> You could also argue that it is the worst way to port spaghetti-code,
> since the probability of anyone ever untangling it is significantly
> smaller if the code works. :-)
How you argue here probably depends on whether you're getting paid for the
porting effort or not (or even are paying for it)... :)
I find that controlled refactoring after having something working is
usually more efficient than trying to port and refactor and redesign
all at once.
Also, if it works and doesn't need maintenance, there's not really any
need to do anything to the code. Go write something that doesn't exist
yet instead :)
Gerhard
--
>
> Furthermore, can we talk about the misuses of RTTI?
>
To the experts responding to this question: would you consider this
direct use of the typeid operator a misuse of RTTI:
http://www.gamedev.net/reference/programming/features/effeventcpp/
I'm not a C++ expert and I would really like to know whether this
technique has any real-life usage.
I don't buy this. C++ does have a terrible lack of introspective
ability, no argument. But RTTI/dynamic_cast<> is useful for other
unrelated uses. Introspection may have been what you wanted, and you
may have looked to RTTI to provide it, but you can't blame these
features for C++'s lack of introspection just because they're the
closest thing to what you want. Templates - due to various
side-effects of rules about their use - provide half-baked introspective
capabilities too, but similarly have their separate merits.
For example, I deal with a tree of GUI gadgets. I have a registry
from identifiers to the gadget base class. If a scripted statement
asks for a behavioural attribute to be changed, I can dynamic_cast<>
from the base class to the derived class that embodies that attribute,
and perform the appropriate action or query. I have no desire to
pollute the base class with a fat interface aggregating all the tree's
attributes. So, tell me what's wrong with this model...?
Tony
> I don't buy this. C++ does have a terrible lack of introspective
> ability, no argument.
I'll have to disagree.
You can check at compile-time if an object has a member function with
a given signature and the like. That's already quite a feat that most
other languages are not capable of.
C++0x concepts even make that easier, while fixing its limits (it's
not possible to check constructors and some operators, such as
operator=, without them)
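As a concrete illustration of that claim, the classic sizeof/SFINAE trick can detect a member function with a given signature at compile time (the names here are invented for the example):

```cpp
#include <ostream>

// Detect at compile time whether T has a member function
// with signature void save(std::ostream&) const.
template <typename T>
class has_save {
    typedef char yes[1];
    typedef char no[2];

    // Matches only if &U::save exists with exactly this signature.
    template <typename U, void (U::*)(std::ostream&) const> struct check;

    template <typename U> static yes& test(check<U, &U::save>*);
    template <typename U> static no&  test(...);  // fallback when SFINAE rejects

public:
    static const bool value = sizeof(test<T>(0)) == sizeof(yes);
};

struct with_save    { void save(std::ostream&) const {} };
struct without_save {};
```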
No.
RTTI is not introspection.
RTTI, as the name says, is just a way to identify types at runtime.
Bundling it into the language allows reusing the pointer to the vtable
of polymorphic objects as an identifier.
The only way it fails, in my opinion, is that typeid(SomeType) is not
a constant expression, preventing its use in switch statements.
> i.e.
> - a way to automatically get an identifier for an object
> that _uniquely_ identified it's type across runtimes
It does identify it uniquely.
The fact that it may not work across different implementations is
irrelevant.
> - a means of enumerating
> and iterating
> over the member variables and up the inheritance hierarchy
That was *never* the goal of RTTI.
That kind of thing is provided at compile-time (much more useful than
at runtime) by tuples, and you can trivially add it to your own types
with Boost.MPL for example.
That kind of facility is only useful in generic programming. Maybe
> neither of these objectives were fulfilled
That's good, since these were *not* the objectives.
I'm happy RTTI wasn't even more bloated by useless runtime
introspection data.
> i'm still not positive any of this will be possible in c++0X
Iterating over the member variables of any class type won't be in
C++0x, no.
It wouldn't be too complicated, but first you'd probably have to add
Boost.MPL or something better to the standard library.
Ideally you could also have support for typedefs, member functions
(overloads, templates...) etc. With good introspection you could
easily reimplement concepts, and more.
On May 14, 11:55 am, Mathias Gaunard <loufo...@gmail.com> wrote:
> On 14 mai, 05:12, galathaea <galath...@gmail.com> wrote:
>> On May 10, 5:14 am, Francis Glassborow
>
>>> It was at that point (around 1994) that RTTI was added.
>
>> and it failed horribly to provide what was needed
>
>> what was needed was enough introspection to automate serialisation
>
> No.
> RTTI is not introspection.
> RTTI, as the name says, is just a way to identify types at runtime.
> Bundling it in the language allows to reuse the pointer to the vtable
> of polymorphic objects as identifiers.
>
> The only way it fails, in my opinion, is that typeid(SomeType) is not
> a constant expression, preventing usage in switch/cases.
that sounds like a recipe for bad code
>> i.e.
>> - a way to automatically get an identifier for an object
>> that _uniquely_ identified it's type across runtimes
>
> It does identify it uniquely.
type_info::name is not guaranteed to be unique
the type_info address itself is not guaranteed
(obviously)
to be the same across runtimes
there have been plenty of discussions about this over the years
> The fact that it may not work across different implementations is
> irrelevant.
>
>> - a means of enumerating
>> and iterating
>> over the member variables and up the inheritance hierarchy
>
> That was *never* the goal of RTTI.
> That kind of thing is provided at compile-time (much more useful than
> at runtime) by tuples, and you can trivially add it to your own types
> with Boost.MPL for example.
>
> That kind of facility is only useful in generic programming. Maybe
it absolutely should be determined at translation time
that is why i mentioned code generation
unfortunately
the things being serialised are not always pure data objects
so boost's tuples are not a replacement for this generic need
this facility is needed in any code
- that communicates via some protocol to other machines
(client-server systems)
- that requires data persistence
(saves files)
generally
any IO that takes objects outside the c++ runtime
or brings them in
requires serialisation
that is a broad category
>> neither of these objectives were fulfilled
>
> That's good, since these were *not* the objectives.
> I'm happy RTTI wasn't even more bloated by useless runtime
> introspection data.
they were the only valid needs presented to the c++ community
when RTTI was proposed
there were other needs that got mixed in there
and generally confused the committee and the final result
but those were the valid needs
>> i'm still not positive any of this will be possible in c++0X
>
> Iterating over the member variables of any class type won't be in
> C++0x, no.
> It wouldn't be too complicated, but first you'd probably have to add
> Boost.MPL or better in the standard library.
>
> Ideally you could also have support of typedefs, member variables
> (overloads, templates...) etc. With good introspection you could
> easily reimplement concepts, and more.
i've been looking at the concept facility for some time
and i still don't see how this becomes automatable
the thing that needs to be automated is the serialisation routine
something like
virtual void serialise(connection & stream)
{
Super::serialise(stream);
stream & member1_;
stream & member2_;
// ...
}
if the programmers have to do this by hand
they can (do) make mistakes
(like leave out members
forget to call base
change the class state without updating serialise
..)
also somewhere serialisation needs a unique id
to preface the layout for reading back in
if this is possible in standard c++
(or even c++0X)
then i would be very interested in seeing how
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
galathaea: prankster, fablist, magician, liar
--
On May 14, 7:38 am, Tony Delroy <tony_in_da...@yahoo.co.uk> wrote:
> On May 14, 12:12 pm, galathaea <galath...@gmail.com> wrote:
>
>> On May 10, 5:14 am, Francis Glassborow
>> <francis.glassbo...@btinternet.com> wrote:
>>> [RTTI] was introduced along with
>>> dynamic_cast in response to the fact that developers (skilled
>>> ones) of a
>>> good number of libraries were synthesising the functionality of RTTI
>>> manually...
>
>> and it failed horribly to provide what was needed
>
>> what was needed was enough introspection to automate serialisation
>> i.e.
>> - a way to automatically get an identifier for an object
>> that _uniquely_ identified it's type across runtimes
>> - a means of enumerating
>> and iterating
>> over the member variables and up the inheritance hierarchy
>
>> basically the structure needed to fill a factory
>> and prevent duplication of effort
>> (and possible divergence errors)
>
>> neither of these objectives were fulfilled
>
>> there is never a reason for dynamic_cast in properly architected code
>> (though dealing with libraries with poor architecture
>> sometimes makes it necessary)
>
>> i'm still not positive any of this will be possible in c++0X
>
>> it's really quite sad
>> since serialisation frameworks are error prone without it
>> and usually require code generation to make up for c++'s lacks
>
> I don't buy this. C++ does have a terrible lack of introspective
> ability, no argument. But RTTI/dynamic_cast<> is useful for other
> unrelated uses. Introspection may have been what you wanted, and you
> may have looked to RTTI to provide it, but you can't blame these
> features for C++'s lack of introspection just because they're the
> closest thing to what you want. templates - due to various side-
> effects of rules about their use - provide half-baked introspective
> capabilities too, but similarly have their separate merits.
>
> For example, I deal with a tree of GUI gadgets. I have a registry
> from identifiers to the gadget base class. If a scripted statement
> asks for an behavioural attribute to be changed, I can dynamic_cast<>
> from the base class to the derived class that embodies that attribute,
> and perform the appropriate action or query. I have no desire to
> pollute the base class with a fat interface aggregating all the tree's
> attributes. So, tell me what's wrong with this model...?
probably the biggest thing wrong with that model
is that it fails the open-closed principle
you need to update this "using" class
every time you update or add gadgets
meaning those methods will always stay open
with all the knowledge of the gadget types
growing longer
and more complex
as time goes on
those methods tend to be sources of breakage in projects
and generally
as good architectural principles like open-closed dictate
should never be used
instead
since there is only one point of variation here
you don't "bloat" the interface with many different methods
(how would you know which to call?)
you add one virtual method
something like
virtual void changeAttribute(Variants const& data);
now your code can always call this
when the script parses an attribute change
and the parser code
(or whoever is dealing with this)
can be properly closed
dynamic_cast is always a poorer option than
- registered callbacks
- virtual methods
and _always_ a sign of bad architecture
this is a mathematical statement
which can be fully rigorised
in category theoretical models of architecture
Interesting use of line breaks.... anyway...
If I have a collection of widgets, of many different types, and one of
my types grows a new method to deal with some new facility, I have a
number of ways to handle it.
One way is RTTI; if my car is a hybrid, don't panic when the motor
stops in traffic.
Another way is a fat interface; when the engine stalls, call the
motor_has_stalled method. This of course leads to having a
motor_has_stalled method on a wheelbarrow. And every other means of
transport.
Which could be quite an update task, much bigger than changing one
interface and one calling method.
Neither of these mechanisms is pretty. I think you are into "religious
wars" if you try to say that either technique is always wrong.
Me? I use dynamic_cast, when it's convenient. Which isn't that often,
but isn't never.
Andy
> > The only way it fails, in my opinion, is that typeid(SomeType) is not
> > a constant expression, preventing usage in switch/cases.
>
> that sounds like a recipe for bad code
That allows a fairly nice visitation system, actually. More efficient
than virtual calls and more generic.
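Since typeid(SomeType) is not a constant expression, such switch-based visitation has to be built today on hand-rolled type tags. A rough sketch of the idea (all names invented for illustration):

```cpp
// Hand-rolled integer tags standing in for constant-expression typeids.
enum node_kind { kind_leaf, kind_pair };

struct node {
    node_kind kind;
    explicit node(node_kind k) : kind(k) {}
    virtual ~node() {}
};

struct leaf : node {
    int value;
    explicit leaf(int v) : node(kind_leaf), value(v) {}
};

struct pair_node : node {
    node* a;
    node* b;
    pair_node(node* x, node* y) : node(kind_pair), a(x), b(y) {}
};

// The "switch visitation": one jump table instead of double dispatch.
int sum(const node* n) {
    switch (n->kind) {
    case kind_leaf:
        return static_cast<const leaf*>(n)->value;
    case kind_pair: {
        const pair_node* p = static_cast<const pair_node*>(n);
        return sum(p->a) + sum(p->b);
    }
    }
    return 0;
}
```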
> type_info::name is not guaranteed to be unique
>
> the type_info address itself is not guaranteed
> (obviously)
> to be the same across runtimes
You need std::type_info::name from different processes
(possibly compiled with different compilers) to produce the same
result for a given type, or something?
That sounds a bit odd to mandate, in my opinion. Just because two C++
applications have types with the same symbol name, for example, does
not mean they really are the same type.
> it absolutely should be determined translation time
>
> that is why i mentioned code generation
Translation time?
You mean only when generating the machine code?
Then it's bloat with very limited use, condemned to generate fairly
inefficient code.
Funnily enough, I'm pretty sure that thing already exists in DWARF.
By the way, if you have it at compile-time, you can have it at runtime
too, of course. It's trivial to copy the data into actual variables.
> unfortunately
> the things being serialised are not always pure data objects
> so boost's tuples are not a replacement for this generic need
If they're not pure data objects, maybe you should define the way
they're to be serialized in the first place.
> this facility is needed in any code
> - that communicates via some protocol to other machines
> (client-server systems)
> - that requires data persistence
> (saves files)
Let's not exaggerate.
I do not need a program to automatically guess the (possibly wrong)
automatic way to serialize some data just to write and read it.
While I agree it can be useful, it is not needed in any code. I'd
rather not use something automatic personally, at least for files (I'd
rather use standard formats) and networking (I'd rather use standard
formats and protocols).
> they were the only valid needs presented to the c++ community
> when RTTI was proposed
I honestly wasn't following the evolution of the language back then,
but I somehow doubt it.
Can you provide references to this?
> i've been looking at the concept facility for some time
> and i still don't see how this becomes automatable
Concepts do not allow that. It's just that a powerful enough compile-
time introspection mechanism could even allow to reimplement concepts.
what a strange dichotomy
dynamic cast
or fat interface?
the coding task you appear to be describing
seems to be an event dispatch to objects
but also seems to be mixing ownership
so it is unclear what functionality is desired
however
if you have some event generated somewhere in your system
and you need to have this transported to the appropriate handler
this should be done through a registration interface
everyone who wants an event registers for it
whenever an event is generated
it is dispatched to all registered handlers
then
event generation can be completely ignorant of destination
and new classes can be added without ever modifying dispatch
where is the fat interface?
this is called decoupling
it is not a religious principle
it is an architectural principle
every use of dynamic_cast
_always_adds_unnecessary_local_coupling_
_always_
(notice how i mentioned registered callbacks
in the post you are responding to)
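The registration scheme described above might be sketched as follows; the event names and handler signature are assumptions made for illustration, not from any real library.

```cpp
#include <map>
#include <string>
#include <vector>
#include <cstddef>

// Handlers register interest in an event name; dispatch never inspects
// handler types, so adding new handlers needs no changes here.
class event_bus {
public:
    typedef void (*handler)(const std::string& payload);

    void subscribe(const std::string& event, handler h) {
        handlers_[event].push_back(h);
    }

    // Generation is ignorant of destinations: just walk the registered list.
    void publish(const std::string& event, const std::string& payload) const {
        std::map<std::string, std::vector<handler> >::const_iterator it =
            handlers_.find(event);
        if (it == handlers_.end()) return;
        for (std::size_t i = 0; i < it->second.size(); ++i)
            it->second[i](payload);
    }

private:
    std::map<std::string, std::vector<handler> > handlers_;
};

// Tiny demo handler: count how many events were delivered.
static int delivered = 0;
static void count_deliveries(const std::string&) { ++delivered; }
```

No dynamic_cast appears anywhere: the dispatcher stays closed while handlers come and go.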
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
galathaea: prankster, fablist, magician, liar
--
>
> unfortunately
> the things being serialised are not always pure data objects
> so boost's tuples are not a replacement for this generic need
>
Pure data? What else is there? If you mean POD, then boost::fusion can
extend this. I'm not sure about the details of handling classes that
inherit from classes with private data and polymorphism, but if I
recall, template type deduction does not convert to a base class or
backward, so it seems doable. I shall assume that all classes in the
hierarchy are modifiable, at least in providing friendship to a struct
with a single static function.
template <class T>
struct actual_write
{
    static void exec(std::ostream &os, const T &x)
    { x.save(os); }
};
Specialize that for types that do not have a void save(std::ostream &)
method, such as built-in types, std classes etc. The only remaining
problem is if we have
class cast_in_stone
{
    int x;
    int y;
public:
    cast_in_stone(int, int);
    // member functions
};
we have a problem. If cast_in_stone can provide friendship to
actual_write<cast_in_stone> then it is possible, I think, since
we can create a specialization that accesses the private data
of its argument.
At least it's food for thought, and it can traverse data, plain or not.
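A possible fleshing-out of that friendship idea (still just a sketch under the stated assumptions): declare the specialization first, befriend it in the class, then let the specialization reach the private members.

```cpp
#include <ostream>
#include <sstream>

// Primary template: defer to a save() member function.
template <class T>
struct actual_write
{
    static void exec(std::ostream& os, const T& x) { x.save(os); }
};

class cast_in_stone;                             // forward declaration
template <> struct actual_write<cast_in_stone>;  // declare specialization up front

class cast_in_stone
{
    int x;
    int y;
public:
    cast_in_stone(int a, int b) : x(a), y(b) {}
    friend struct actual_write<cast_in_stone>;   // grant access to the writer
};

// Specialization that uses the friendship to reach x and y.
template <>
struct actual_write<cast_in_stone>
{
    static void exec(std::ostream& os, const cast_in_stone& c)
    {
        os << c.x << ' ' << c.y;  // legal here thanks to the friend declaration
    }
};
```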
Yes, but you can't - for example - iterate over the available
members. You can't say "let me generate some support code for all the
enums in the translation unit, based on their symbol/value pairs".
I'm not comparing C++ with say C, but talking in absolute terms....
Regards,
Tony
It may not be useful to provide an extra layer of abstraction, as it
comes with the burden of extra indirection, complexity, code to
support, additional event types or registration function identifiers
to dream up etc.. In my case, with an application in active
development that uses a GUI library (owned by another area,
unmaintained for 5+ years, no significant API changes for 10+ years) -
it's simply a waste of time. I can couple to the GUI classes knowing
they're stable.
Similarly, if in a particular project the developers are deliberately
using derivation with a view to support dynamic_casting for these
kinds of purposes, the coupling may be a lesser evil than the extra
framework you advocate.
In summary, keep it simple and direct if doing so works well. Not
every program is a 5000-source-file 20-team 3-company monstrosity
needing abstraction at every level. In the same way that insisting
classes with accessor functions must always be used in preference to
structs, this kind of thing can just be taken too far....
Tony
no translation time visitation generation system
should need or require any switch blocks in the code
optimally
the code call
(the place that uses the visitor)
should resolve to a single indirect call
just as if it were calling a member virtual function
there should be no need for a direct jump
to the function that the switch lives in
which will then indirect call through a jump table
> > type_info::name is not guaranteed to be unique
>
> > the type_info address itself is not guaranteed
> > (obviously)
> > to be the same across runtimes
>
> You need that std::type_info::name from different processes
> (eventually compiled with different compilers) produce the same result
> for a given type or something?
no
one program
run today
has to be able to read files
created from the same program
translated only once
when it was running weeks ago
maybe even with the power going off
like the save files in computer games
or the document files of some word processor
or basically any use of structured files
or one program
translated only once
communicating peer-to-peer
with the same program
running on a different computer
different runtimes does not equate to different translations
> That sounds a bit weird to mandate in my opinion. It's not because the
> two C++ applications have types with the same symbol name for example
> that they really are the same.
in one translation
all you need
is a way to get a unique identifier for a type
type -> id
unique
and serialisation can automate
what is even still today done manually with the
virtual std::string id()
idiom
..
because c++'s RTTI failed to require uniqueness
(the linker could fill in a table at linktime
it has enough information available to it)
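The virtual std::string id() idiom referred to here might look like this minimal sketch (class names invented): each class hand-maintains a stable, unique identifier, because std::type_info::name() guarantees neither stability nor uniqueness.

```cpp
#include <string>

// Each class hand-maintains a stable identifier for serialisation,
// since std::type_info::name() is not guaranteed unique or stable
// across runs or compilers.
class message {
public:
    virtual ~message() {}
    virtual std::string id() const { return "message"; }
};

class login_message : public message {
public:
    std::string id() const { return "login_message"; }  // uniqueness kept by hand
};
```

The fragility galathaea complains about is visible: nothing stops two classes from returning the same string, or a new class from forgetting to override id() at all.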
> > it absolutely should be determined translation time
>
> > that is why i mentioned code generation
>
> Translation time?
> You mean only when generating the machine code?
> Then it's bloat with very limited use, condemned to generate fairly
> inefficient code.
translation time
is the state-sequence (time)
from the textual code input by the engineers
to the final production of executable
it is the time from the first preprocessing stages
to the last linking phase
in the c++ translation model
only one stage
(2 if you want to argue)
is actually compilation
there are digraph/trigraph substitution phases
and comment/whitespace phases
and all sorts of other stuff going on
if you tack on a code generator
or a code beautifier
or a flex/bison generation phase
..
to your build process
that is part of translation time
since we are actually talking about the entire semantic construction
we must realise architectural transformations
occur throughout translation
(metaprogramming along the entire translation toolchain)
[..]
> > unfortunately
> > the things being serialised are not always pure data objects
> > so boost's tuples are not a replacement for this generic need
>
> If they're not pure data objects maybe you should have to define the
> way it's to be serialized in the first place.
the end point of all metaprogramming
is rich, dense syntax
suited for thinking about problems in their proper domain
that provides deep semantic error checking
one of metaprogramming's main goals
is to reduce duplicating information
by providing the means for compressed specification
the metaprogramming of serialisation has a long history
> > this facility is needed in any code
> > - that communicates via some protocol to other machines
> > (client-server systems)
> > - that requires data persistence
> > (saves files)
>
> Let's not exaggerate.
> I do not need a program to automatically guess the possibly wrong
> automatic way to serialize some data to write and read some data.
automation does not mean
"i will serialise everymember of every type
in format A79-1994"
it means providing a syntax to easily annotate
instead of forcing one to duplicate
which members to serialise
which format to output in
these are all things that should be easy to annotate
but all of the machinery to connect up the system
the implementation achieving the annotated desires
the part where the runtime does stuff
and which can err for reasons other than specification
that stuff should be generated
a reusable serialisation framework
> While I agree it can be useful, it is not needed in any code. I'd
> rather not use something automatic personally, at least for files (I'd
> rather use standard formats) and networking (I'd rather use standard
> formats and protocols).
identifying data objects in typeless IO
is serialisation
providing a specification for files or protocols
is what i am talking about
i don't think you are arguing that duplication is a good thing
that reusing a subsystem
is to be deprecated over handrolling every time
but you do seem unconvinced
that a particular way to prevent duplication
is of any use
> > they were the only valid needs presented to the c++ community
> > when RTTI was proposed
>
> I honestly wasn't following the evolution of the language back then,
> but I somehow doubt it.
> Can you provide references to this?
the classic would be stroustrup and lenkov's
"run time type identification for c++"
where the case is laid for its inclusion in the language
you will see that many of the misuses are also detailed
and that the reasons that are "valid"
all include serialising in/out of the type system
and you will notice that the biggest use of dynamic_cast
is to work around constrained architectures
where the appropriate structure is not allowed
i have it photocopied in my papers
but i cannot find it online
however
i did find
http://gee.cs.oswego.edu/dl/rtti/rtti.html
which was a supplement to the original
giving more code snippets and examples
i am sure you will notice that for most of the reasons
RTTI was not the best approach
(some should be translation time)
serialisation is certainly mentioned
(though this article shows the author
like most who advocated dynamic_cast
violating the open-closed principle repeatedly)
my favorite is
"RTTI can postpone necessary refactorings"
all of this has quite a history in the community
> > i've been looking at the concept facility for some time
> > and i still don't see how this becomes automatable
>
> Concepts do not allow that. It's just that a powerful enough compile-
> time introspection mechanism could even allow to reimplement concepts.
yes
the committee will be doing a great disservice
if they let concepts in
over something like metacode
if they think both might work for syntactical brevity
that's one direction to go
but once you have metacode
you wouldn't need concepts
and if they leave out metacode
they've made another RTTI-scale error
that will bite them for the next 15+ years
(maybe geeks should stand outside their next meeting
holding placards that read
"don't forget serialisation... again")
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
galathaea: prankster, fablist, magician, liar
--
Granted, it is possible to use C++ templates to perform rudimentary
class introspection - but the fact that this capability exists at all
in C++ owes far more to the ingenuity of template programmers than it
does to any conscious design of the language.
After all, no one in their right mind - setting out to design basic
introspection for C++ classes - would have come up with anything even
approaching the convoluted complexity that C++ templates require
today. A C++ programmer pretty much has to torture information about a
class's methods from a class implementation. The heavy reliance on
multiple levels of indirection and obscure SFINAE-like "tricks" makes
seemingly simple tasks with templates require writing code of almost
an absurd degree of complexity. In other words, C++ templates are not
well-suited (to say the least) for many of the tasks for which they
are currently used. Yet, as long as C++ offers no better solution, a
C++ programmer who wishes to perform these tasks has no choice but to
master this complexity.
> C++0x concepts even make that easier, while fixing its limits (it's
> not possible to check constructors and some operators, such as
> operator=, without them)
Sure, concepts provide incremental improvement to C++ templates.
Nonetheless, the lack of true introspection is a disappointment.
Ironically, the strongest case to be made for introspection is the
same case that was made for templates originally. When template
support was added to C++, no one could have foreseen all of the ways in
which templates would eventually be used in C++. In other words, full
appreciation of the value of templates came only after their addition
to the language. Introspection would likely follow suit. I imagine
that the full capabilities and value of introspection support in C++
has not yet been discovered and - absent support for introspection in
C++ - probably never will be discovered.
Greg
> On May 10, 5:13 am, OuFeRRaT <oufer...@gmail.com> wrote:
>
>>
>> Furthermore, can we talk about the misuses of RTTI?
>>
>
> To experts responding to this question: would you consider this
> direct use of typeid operator misuse of RTTI:
I'm no expert but I had to deal with this problem many times (linear
dynamic_cast/typeid check vs virtual function call).
> http://www.gamedev.net/reference/programming/features/effeventcpp/
>
> I'm not a C++ expert and I would really like to know if this
> technique would have any real life usage.
There are 2 C++ techniques there, which one do you mean?
1. linear dynamic_cast check on each possible type:
pros: non intrusive (does not require anything from the possible types, no
special interface required)
cons: slow (for many types to check especially), error prone and hard to
maintain (that is, you add a new event type, you are not automatically
forced to add handling to it, the program compiles without handling it
which is error prone)
2. logarithmic search for typeid results with some registration framework
pros: non intrusive; it should be faster than a dynamic_cast, even
though it's still using RTTI (because in modern implementations typeid
is equivalent to a virtual function call, while dynamic_cast might
need to do more processing because it needs to walk an inheritance
path, though when using dynamic_cast on the leaf types it should
probably work faster)
cons: might be wrong but I still think it is hard to maintain and error
prone because the program compiles when adding a new event and does not
force you to handle it
For such a problem, when all possible types are known at compile time
you could use something like boost::variant and its static visitor
approach for handling each possible event type differently. Or in
general a Visitor pattern should help here, I think.
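The boost::variant suggestion above can be sketched with C++17's
std::variant, which uses the same static-visitation model; the event
type names below are hypothetical placeholders, not from the article:

```cpp
#include <string>
#include <variant>

// Hypothetical event types, stand-ins for the article's events.
struct ExplosionEvent { int damage; };
struct EnemyHitEvent  { int enemyId; };

using Event = std::variant<ExplosionEvent, EnemyHitEvent>;

// A static visitor: one overload per possible event type.
struct EventVisitor {
    std::string operator()(const ExplosionEvent& e) const {
        return "explosion:" + std::to_string(e.damage);
    }
    std::string operator()(const EnemyHitEvent& e) const {
        return "hit:" + std::to_string(e.enemyId);
    }
};

std::string dispatch(const Event& ev) {
    return std::visit(EventVisitor{}, ev);
}
```

Because std::visit requires the visitor to handle every alternative,
adding a new event type to the variant refuses to compile until a
matching operator() exists - exactly the "forced handling" that the
dynamic_cast chain lacks.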
--
Dizzy
> In limited situations, goto statements can improve readability, make
> more compact or more performant code, e.g. when exiting nested loops
> (I'd do that with an exception, though). I leave this kind of
> optimization to swashbuckler developers with 20 yrs+ experience (they
> know when "evil" is "good") and never teach it to beginners (will
> produce unusable code predictably).
One good (IMO) example of goto in C code (because in C++ it could be
better written using exceptions and RAII) is like this (commonly used
throughout the Linux kernel):
int func() {
    resource_t v1 = acquire();
    if (!v1) goto err_v1;
    // various code
    resource_t v2 = acquire();
    if (!v2) goto err_v2;
    // other code
    resource_t v3 = acquire();
    if (!v3) goto err_v3;
    // yet another code
    release(v3);
    release(v2);
    release(v1);
    return 0; // no error
err_v3:
    release(v2);
err_v2:
    release(v1);
err_v1:
    return -1;
}
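In C++ the same cleanup logic collapses into RAII, as noted above. A
minimal sketch, with resource_t, acquire(), and release() as stand-ins
for the real resource API:

```cpp
#include <stdexcept>

// Stand-ins for the resource API in the goto example above.
struct resource_t { bool ok; };
resource_t acquire() { return resource_t{true}; }
void release(resource_t&) { /* free the resource */ }

// Owns one resource; the destructor guarantees release on every path.
class scoped_resource {
    resource_t r_;
public:
    scoped_resource() : r_(acquire()) {
        if (!r_.ok) throw std::runtime_error("acquire failed");
    }
    ~scoped_resource() { release(r_); }
    scoped_resource(const scoped_resource&) = delete;
    scoped_resource& operator=(const scoped_resource&) = delete;
};

int func() {
    try {
        // Destroyed in reverse order (v3, v2, v1), like the goto chain.
        scoped_resource v1, v2, v3;
        // various code
        return 0;  // no error
    } catch (const std::runtime_error&) {
        return -1;
    }
}
```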
--
Dizzy
> no translation time visitation generation system
> should need or require any switch blocks in the code
>
> optimally
> the code call
> (the place that uses the visitor)
> should resolve to a single indirect call
> just as if it were calling a member virtual function
>
> there should be no need for a direct jump
> to the function that the switch lives in
> which will then indirect call through a jump table
As I said, using a switch has several advantages over using the
classic visitor pattern.
It is non-intrusive, you do not suffer from the impossibility of
writing template virtual functions, and it is more efficient.
Nothing says you have to use the switch directly, you can wrap it and
only allow dispatch with a function object.
Which is what the boost.variant visitation system does, for example.
Syntactically, the resulting code isn't much different.
> > You need that std::type_info::name from different processes
> > (eventually compiled with different compilers) produce the same result
> > for a given type or something?
>
> or one program
> translated only once
> communicating peer-to-peer
> with the same program
> running on a different computer
Yes, that's basically what I was saying, if not worse, since the
processes are on different machines.
> in one translation
> all you need
> is a way to get a unique identifier for a type
>
> type -> id
> unique
Which is what typeid does.
However that uniqueness is only valid in the translated program
itself.
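One portable way to get the "type -> id, unique" mapping being
discussed is to key a table on std::type_index (C++11); the ids are
stable only within the one translated, running program, which is
exactly the limitation noted above. A sketch:

```cpp
#include <typeindex>
#include <unordered_map>

// Table of already-seen types; ids are assigned on first use.
inline std::unordered_map<std::type_index, int>& id_table() {
    static std::unordered_map<std::type_index, int> table;
    return table;
}

// type -> id, unique within this running program only.
template <typename T>
int type_id() {
    auto& table = id_table();
    auto it = table.find(std::type_index(typeid(T)));
    if (it != table.end()) return it->second;
    int id = static_cast<int>(table.size());
    table.emplace(std::type_index(typeid(T)), id);
    return id;
}
```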
>
> and serialisation can automate
> this that is even still today done manually with the
>
> virtual std::string id()
Seriously, in a practical world, I guess you can suppose
type_info::name is unique.
And thanks for all the other info you provided.
I want to agree with everything said here about the importance of
run-time introspection ( reflection ) for the C++ language. In my view
it is not only important for C++ to be able to find the elements of
constructs at run-time, but it is also important that objects can be
created at run-time from a type name as a string or string literal.
The most obvious case for run-time introspection is RAD programming in
C++, aka design-time programming for properties and events. The obvious
success of RAD IDEs built from Java, .Net, and C++ extensions ( C++
Builder, QT ) in these areas, all with run-time introspection, show the
importance of this facility. As you have mentioned above, other uses in
the form of 3rd party tools will occur if C++ had run-time introspection.
Unfortunately the standard C++ committee has been nearly oblivious to
the benefits of run-time reflection so C++ will almost certainly
continue to do without it, much to C++'s own loss.
> it is also important that objects can be
> created at run-time from a type name as a string or string literal.
You simply need virtual constructors to do that, which happen to be
quite needed if you want to write polymorphic value objects correctly.
type_info being connected to the vtable, I suppose it could provide
access to the virtual constructors too.
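The "virtual constructor" idiom mentioned here is usually spelled as a
clone() member; the class names below are purely illustrative:

```cpp
#include <memory>
#include <string>

struct Shape {
    virtual ~Shape() = default;
    // "Virtual copy constructor": copies through the vtable.
    virtual std::unique_ptr<Shape> clone() const = 0;
    virtual std::string name() const = 0;
};

struct Circle : Shape {
    std::unique_ptr<Shape> clone() const override {
        return std::make_unique<Circle>(*this);
    }
    std::string name() const override { return "Circle"; }
};

// Copies a Shape correctly without knowing its dynamic type.
std::unique_ptr<Shape> duplicate(const Shape& s) { return s.clone(); }
```

This is what makes polymorphic value objects copyable through a base
pointer, as the post suggests.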
dizzy schrieb:
> int func() {
[snipped goto stuff]
It could be written without goto and without code duplication (in the
goto version every release occurs twice), though without a simple
"return" in the innermost "if":
int func()
{
    int result = -1;
    if (resource_t v1 = acquire())
    {
        if (resource_t v2 = acquire())
        {
            if (resource_t v3 = acquire())
            {
                // various code
                result = 0;
                release(v3);
            }
            release(v2);
        }
        release(v1);
    }
    return result;
}
Frank
--
Sorry, I feel I have to butt in here ... IMO your view is unsupportable
on a number of different levels:
In the first place, the design-time requirements of a RAD system
should not be confused with the run-time requirements of the
components used. Sure, you need some sort of metadata to describe how
the component looks and behaves but those data do not have to be
obtained by calling one of the components and asking it what it knows
about itself. It is only very occasionally that one wants design-time
behaviour to be present in an application at runtime, and omitting it
can reduce bloat considerably. OTOH it is quite common for a vendor
to wish to be able to supply run-time-only versions of components -
and the logical separation of design-time and run-time features eases
this considerably.
In the second place, the use of language-supported introspection to
provide design-time support in a RAD system limits the RAD tool to
using and generating that language alone. Designers of widget-sets
commonly wish to be able to make their widgets available to developers
using a wide variety of languages and tools, and they will not thank
you for tying the design-time functionality to C++ alone.
In the third place, code that I have seen in languages that do
support introspection has seldom been clearer for its use. In an
alarming number of cases introspective facilities were clearly being
used only because the designer(s) had been too lazy (or too
incompetent) to design a workable class hierarchy in the first place.
Introspection has become a sort of "get out of gaol free" card for bad
designers. That's not to say that introspection can't be a useful
tool, just that it seems to be used badly more often than it is used
well.
> The obvious success of RAD IDEs built from Java, .Net, and C++
> extensions ( C++ Builder, QT ) in these areas, all with run-time
> introspection, show the importance of this facility.
Yes, introspection can be used quite effectively in Java -- but are
the solutions that use it good ones? Pragmatically you may say "yes"
because they generally work, but in many cases they seem to me to be
bodges that have been put in place for want of a well thought-out
design. The use of introspection in Java RAD tools does lead to
usable RAD tools, but I believe that better tools could have been
built with little or no extra effort if a system had been designed
that used external metadata descriptions of components (doubtless
using still more of the impenetrable XML files in which Java
developers seem to revel) rather than saying "hello, what are you?"
I can't speak for .NET because I haven't used it enough, but I don't
consider the tools used by C++ Builder or Qt to constitute extensions
to C++ or to provide what is normally meant by the term
"introspection" -- Qt programs, at least, can be built with a standard
(unextended) C++ compiler. Both use metadata outside the C++ language
to describe component behaviour, much as I recommended above.
For RAD development, what is needed is a standard for publication of
GUI (and other) components in a language-neutral (and if possible
platform-neutral) way. If such a standard were to be adopted by
multiple vendors, with bindings to multiple languages, it would do far
more to ease the process of RAD design, RAD development, and the
provision of RAD tools for C++ than would adding introspection to the
language.
> Unfortunately the standard C++ committee has been nearly oblivious
> to the benefits of run-time reflection so C++ will almost certainly
> continue to do without it, much to C++'s own loss.
Introspection does not come for free (any more than does RTTI), and
there is a principle in C++ that you should only pay for what you use.
Adding introspection to the language would be a violation of that
principle, and as such I'm dead against it.
--
Daniel James | djng
Sonadata Limited | at sonadata
UK | dot co dot uk
I wouldn't be comfortable with it, partly for performance reasons. Also
partly because you might actually want to send a NukeExplosionEvent and
have it treated as an ExplosionEvent by clients that didn't know about it.
I actually prefer the dynamic_cast version, at least when the number of
different types is small.
One alternative would be some kind of double-dispatch. The article
mentions this in the PS, and says that it leads to cyclic dependencies,
but these can be broken by using an abstract class. This leads to the
classic Visitor used for type discovery.
Another, possibly better approach is to simply never lose track of the
types in the first place. Instead of sending all events to a single
onEvent function, have a different function for each event.
template <typename Event>
class Handler {
public:
    virtual ~Handler() {}
    virtual void onEvent( const Event *pEvent ) = 0;
};

template <typename Event>
class Handlers {
    typedef std::vector< Handler<Event> * > HandlerList;
    HandlerList handlers;
public:
    void addHandler( Handler<Event> *pHandler ) {
        handlers.push_back( pHandler );
    }
    void onEvent( const Event *pEvent ) {
        for (typename HandlerList::iterator i = handlers.begin();
             i != handlers.end(); ++i)
            (*i)->onEvent( pEvent );
    }
};
struct ExplosionEvent;
struct EnemyHitEvent;
class EventSource {
    Handlers<ExplosionEvent> explosionHandlers;
    Handlers<EnemyHitEvent> enemyHitHandlers;
public:
    void addExplosionHandler( Handler<ExplosionEvent> *pHandler ) {
        explosionHandlers.addHandler( pHandler );
    }
    void onEvent( const ExplosionEvent *pEvent ) {
        explosionHandlers.onEvent( pEvent );
    }
    // Repeat for EnemyHit.
};
class SomeHandler :
    public Handler<ExplosionEvent>, public Handler<EnemyHitEvent> {
public:
    virtual void onEvent( const ExplosionEvent *pEvent );
    virtual void onEvent( const EnemyHitEvent *pEvent );
    void listenTo( EventSource *pSource ) {
        pSource->addExplosionHandler( this );
        pSource->addEnemyHitHandler( this );
    }
};
The EventSource keeps a separate Handlers<Event> for each type of event.
That way we never mix handlers for different events together, so we never
need RTTI to disentangle them. This achieves event dispatching with a
single virtual function and no casts.
These handler classes need to inherit from Handler<Event>. You can avoid
that by using the MemberFunctionHandler approach in the original article,
at a slight cost in memory and dispatching time. The article uses a
static_cast, which you can eliminate by making EventT a template
parameter of the base class.
-- Dave Harris, Nottingham, UK.
Introspection does not belong to C++ because addition of this feature would
cause massive increase in size of generated code.
Basically, name, arguments and relationships of every member of every class
and every function would have to be included in the generated code. This
could easily double or triple the size of C++ libraries.
Current C++ RTTI is so basic precisely for this reason. The only piece of
data about a C++ class stored by a program is its unique string
identifier - and this doesn't have to be actual class name (class names can
be arbitrarily long, which is unacceptable - you don't want the
performance of your program to decrease when you use longer class
names, do you?). Usually
compilers store some mangled form of the name to conserve as much space as
possible (e.g. gcc uses "i" for ints, "f" for floats etc.).
Introspection, while a great and powerful feature would be a bad thing if
added to C++ in the shape and form you propose.
While some of the uses of introspection cannot be easily done without it,
some can be. One of them is serialization, which is often mentioned in this
thread. IMHO mechanism similar to boost::serialization is vastly superior to
"property based serialization", like you have in .NET or Java. Instead of
adding loads of properties with silly attributes (to control the way it's
serialized), you write a serialize() function. This is neater, cleaner and
faster.
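The serialize()-function style described here can be sketched without
Boost; the toy archive below mimics the operator& convention of
boost::serialization archives (the real ones also pass a version
number and support many more types):

```cpp
#include <sstream>
#include <string>

// Toy output archive following the operator& convention.
struct TextOArchive {
    std::ostringstream out;
    template <typename T>
    TextOArchive& operator&(const T& v) {
        out << v << ' ';
        return *this;
    }
};

struct Point {
    int x;
    int y;
    // One function lists the members to serialize.
    template <typename Archive>
    void serialize(Archive& ar) { ar & x & y; }
};

std::string save(Point p) {
    TextOArchive ar;
    p.serialize(ar);
    return ar.out.str();
}
```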
Just my 2p,
Marcin
Of course one can use factories to do this with known classes,
mapping from names to types.
I am talking about instantiating an arbitrary object given the name of
its class as a string. You only get there with the ability to do
run-time reflection.
I don't see any inherent reason why support for introspection (or,
more generally, "reflection") in C++ would necessitate a "massive
increase" in the size of compiled C++ programs. And even if we were to
decide that any increase in the size of compiled programs would be
unacceptable, it still makes more sense for us to add "no code bloat"
to the set of requirements for our proposed feature (and then to
specify the feature with that constraint), than it would be for us to
reject the entire idea out of hand.
> Basically, name, arguments and relationships of every member of every class
> and every function would have to be included in the generated code. This
> could easily double or triple the size of C++ libraries.
Then perhaps the compiled binary is not the best place from which to
obtain class information. Why couldn't a C++ compiler obtain this
information from the class declarations themselves, the ones in the
very same header files that were used to compile the library? After
all, if the header files were good enough to compile the library, then
they should certainly be good enough to provide all the information
about the library's classes that anyone would ever find useful.
Even apart from the class declarations, a great deal of information is
already compiled into a C++ program. A "mangled" C++ function name,
for example, invariably specifies the name of the class (for member
functions) and the number and types of parameters that the function
accepts. So whatever additional information might be required to
support introspection seems relatively modest - both in size and in
scope - at least when compared to the amount of information that is
already retained today.
Furthermore, much of this "generated code" would not add to the size
of the compiled program because it would end up replacing code that C+
+ programmers currently have to write themselves. In particular, a C++
program often resorts to "factory" classes in order to create objects
of various types. In a typical factory implementation, each class of
object produced first has to "register" itself so that the factory
knows how to create an object of that type. Native support for
introspection would be able to replace these "home-made" factory
classes with a Standard class.
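A typical "home-made" registry of the kind described: each class
registers a creation function under a string name, which is the code a
Standard introspection facility could in principle subsume (the names
here are hypothetical):

```cpp
#include <functional>
#include <memory>
#include <string>
#include <unordered_map>

// Hypothetical component hierarchy.
struct Widget {
    virtual ~Widget() = default;
    virtual std::string kind() const = 0;
};
struct Button : Widget {
    std::string kind() const override { return "Button"; }
};

// Name -> creation function; the "registration" step described above.
using Factory =
    std::unordered_map<std::string,
                       std::function<std::unique_ptr<Widget>()>>;

Factory& factory() {
    static Factory f;
    return f;
}

template <typename T>
void register_type(const std::string& name) {
    factory()[name] = [] { return std::unique_ptr<Widget>(new T); };
}

std::unique_ptr<Widget> create(const std::string& name) {
    auto it = factory().find(name);
    if (it == factory().end()) return nullptr;
    return it->second();
}
```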
> Current C++ RTTI is so basic precisely for this reason. The only piece of
> data about a C++ class stored by a program is its unique string
> identifier - and this doesn't have to be actual class name (class names can
> be arbitrarily long, which is unacceptable - you don't want the performance
> of your program to decrease when you use longer class names, do you?). Usually
> compilers store some mangled form of the name to conserve as much space as
> possible (e.g. gcc uses "i" for ints, "f" for floats etc.).
For non-built-in types, the type_info name invariably contains the
complete class name (since a shorter name could not be guaranteed to
be unique). So if long class names would cause performance problems
for introspection - then long class names must already cause
performance problems for RTTI today. Yet I have never heard anyone
shorten a C++ program's class names in an effort to make their C++
program run faster.
> Introspection, while a great and powerful feature would be a bad thing if
> added to C++ in the shape and form you propose.
I honestly don't recall saying anything about the "shape and form"
that introspection support in C++ should assume (nor - at this point -
do I have any such details in mind). I do agree that C++ introspection
in the shape and form that you described - is not how the
introspection should be specified for C++ (due to the excessive code
bloat it would entail).
In fact, deciding upon the optimal shape and form of introspection
support in C++ - requires several steps: first: specifying the
capabilities of the feature, and second: specifying the set of
requirements (or constraints) upon the implementation of the feature.
Once we have specified both the feature set and the requirements, we
can assess the feasibility of an implementation that would conform to
both. We may have to loosen the requirements or pare back the feature
set, to arrive at a feasible implementation. And only after we have taken
all of these steps, can we decide whether the feature's capabilities
are useful enough - and the requirements are reasonable enough - that
the proposal as a whole is one worth making.
> While some of the uses of introspection cannot be easily done without it,
> some can be. One of them is serialization, which is often mentioned in this
> thread. IMHO mechanism similar to boost::serialization is vastly superior to
> "property based serialization", like you have in .NET or Java. Instead of
> adding loads of properties with silly attributes (to control the way it's
> serialized), you write a serialize() function. This is neater, cleaner and
> faster.
Of course - practically anything is possible if the programmer writes
the code to do it. But code is exactly what we do not want added to
our programs. Since we concluded above that code added to support
introspection is not acceptable, we should also conclude that code
added to support object serialization is not acceptable either.
Ideally, we should be able to perform certain tasks - including
serializing objects - without having to write the code to do so. After
all, the safest lines of code are those that the programmer did not
write, because those are the same lines of code that were never added
to the program. There are no bugs to be found in code that does not
exist.
Greg
In RAD systems the properties and events of a component can be queried
from the outside, strings describing those properties and events can be
saved as resources or injected in source code, and therefore those
properties and events can be set on an object at run-time automatically.
In order to do this on an arbitrary object some sort of run-time
reflection is needed by the design time program/environment which is
manipulating the components at design time.
The whole point of this procedure is that design time information is
automatically carried over to the run-time environment. If you do not
see the importance or effectiveness of this, then I can not convince you
of it other than to tell you to try a RAD environment like Visual Studio
or C++ Builder. I will only say that in a component-oriented environment
this automatic wiring of properties and events is a great programming
timesaver, but I am well aware that a great majority of C++ programmers
still want to do everything by hand at run-time. Each to their own.
Finally it is perfectly possible to separate the design time and
run-time versions of components, so that only the run-time version is
shipped when a dependent module is linked to the component. This has
been done in C++ very effectively in the past in C++ Builder and a
little less effectively in .Net, but it is always possible to do given
that the design time environment for a component can be in one shared
library while the run-time environment for a component is in another
shared library. This is not rocket science and already has prior proof
to it.
>
> In the second place, the use of language-supported introspection to
> provide design-time support in a RAD system limits the RAD tool to
> using and generating that language alone. Designers of widget-sets
> commonly wish to be able to make their widgets available to developers
> using a wide variety of languages and tools, and they will not thank
> you for tying the design-time functionality to C++ alone.
I agree that this is a limitation for cross-platform and/or
cross-language building of components. As far as cross-platform is
concerned, there are IDE's which can build components which work on more
than one OS.
>
> In the third place, code that I have seen in languages that do
> support introspection has seldom been clearer for its use. In an
> alarming number of cases introspective facilities were clearly being
> used only because the designer(s) had been too lazy (or too
> incompetent) to design a workable class hierarchy in the first place.
> Introspection has become a sort of "get out of gaol free" card for bad
> designers. That's not to say that introspection can't be a useful
> tool, just that it seems to be used badly more often than it is used
> well.
I agree that introspection can be misused in places where polymorphism
should be designed. But misuse is never a reason to reject programming
paradigms which have been found to be extremely useful in real-life
programming environments.
Your qualms are well taken but you are talking very idealistically
whereas many programmers use RAD IDEs simply because they make
programming, and the wiring between components, much easier.
If one sits around and waits for the blessed, ideal IDE eternally one
never gets anything done.
>
>> Unfortunately the standard C++ committee has been nearly oblivious
>> to the benefits of run-time reflection so C++ will almost certainly
>> continue to do without it, much to C++'s own loss.
>
> Introspection does not come for free (any more than does RTTI), and
> there is a principle in C++ that you should only pay for what you use.
> Adding introspection to the language would be a violation of that
> principle, and as such I'm dead against it.
And if one could choose to use it or not, would you still be dead set
against it?
Clearly C++ could add introspection to the language but also allow
compilers not to generate the necessary data for it to work, based on
the end-user's wishes at compile/link time. I assume that the extra
data generated is what your objection is when you say that you would
be paying for it even if you did not use it. Other than that
I do not believe that programmers not using a C++ introspection facility
would pay for it in extra code size or slower code speed if they did not
use it in any way.
If the "paying for it" involves slower compile/link times, so be it.
Template metaprogramming, which is the closest C++ comes to some measure
of compile time reflection, can incur a huge cost in compile/link time
but I know of few C++ programmers willing to give up the libraries using
it ( Boost etc. ) in order to just save compile/link time.
C++ has come to be fanatical in its unwillingness to consider new
features under the banner that "programmers should never pay even the
smallest iota for some feature they do not use", while other more
flexible languages have moved forward to adapt language paradigms which
help programmers achieve programming tasks more easily.
--
This vastly superior method has one significant problem: you always
have to change the serialization method if you add other member
variables or derive from another class / structure - and you can
forget to add them to the serialize method.
> "property based serialization", like you have in .NET or Java. Instead of
> adding loads of properties with silly attributes (to control the way it's
You don't have to add silly attributes in .NET, if the default
serialization method of the different serialization objects is
sufficient.
To serialize a POD structure for example, I only have to mark the
structure / class in .NET as serializable to be able to serialize the
complete class, without having to add a serialize function with multiple
redundant serialize calls.
Since I think in nearly all cases you want to serialize all member
variables, I don't think it is a good idea that I have to write the
most code in the default case.
> serialized), you write a serialize() function. This is neater, cleaner and
> faster.
Why is it neater in boost::serialization to have to write a serialize
function with multiple serialize calls, instead of simply adding a
single serializable attribute to the class in .NET?
For C++ (native code) I prefer to use code generation for serialization.
I have a preprocessing stage which allows me to generate code for my
(POD) classes / structures, which serializes them in different ways to
files, memory, console, xml etc. Additionally this allows me to be
language independent. I'm able to generate code for different languages
and I don't have to synchronize all the serialization code for all the
languages. I only add some variables to my structures and the code
generator generates the appropriate code.
> Just my 2p,
> Marcin
Andre
> [...]
>
> If the "paying for it" involves slower compile/link times, so be it.
> Template metaprogramming, which is the closest C++ comes to some measure
> of compile time reflection, can incur a huge cost in compile/link time
> but I know of few C++ programmers willing to give up the libraries using
> it ( Boost etc. ) in order to just save compile/link time.
I've dumped one version of our own libraries, which made extensive use
of template meta programming (TMP). TMP increased compilation time,
decreased portability to other compilers. And although I used very
efficient compilers, the code bloat was unbelievable.
Another huge downside of meta templates was (for me) that the code got
more and more unreadable. I dumped TMP and used code creation. The code
got much more readable and additionally it was more flexible than meta
templates.
I still use meta templates for static compilation checks, but now I try
to avoid them.
> C++ has come to be fanatical in its unwillingness to consider new
> features under the banner that "programmers should never pay even the
> smallest iota for some feature they do not use", while other more
I agree. But I feel (more and more) that in C++ I have to pay for
features I don't use. E.g. new features are preferred to be implemented
in libraries for the sake of compatibility, instead of being supported
by the compiler directly. The advantage of libraries is that the
compiler need not be changed. But on the other hand I have to compile
and include the rather huge libraries.
Additionally some libraries, e.g. streams (iostreams, fstreams etc.) are
quite slow, because I have to pay for code that I don't use.
> [...]
Andre
A RAD design tool needs to know something about the properties of the
components that are the building blocks of the code that it can output,
certainly, but it does not need to be able to gain that knowledge by using
reflection -- that is one way, but it is not the only way, nor is it
the most general way.
> The whole point of this procedure is that design time information is
> automatically carried over to the run-time environment. If you do not
> see the importance or effectiveness of this, then I can not convince you
> of it other than to tell you to try a RAD environment like Visual Studio
> or C++ Builder.
Oh, I agree completely about the usefulness of RAD tools. My point is not
that RAD tools are not useful, but that one does not require a programming
language that supports introspection to write them.
You cite C++ Builder as an example of a RAD tool ... but C++ Builder uses
C++ and libraries written in Delphi (aka Object Pascal), neither of which
supports runtime introspection, and that would seem ample proof of my
point.
There are Java RAD tools that use introspection -- that doesn't make it
the right choice for other languages. Java does already support
introspection (for better or for worse) and it is probably a little easier
for the developers of Java RAD tools to use that language facility than to
define a purpose-built mechanism for RAD components to publish their
capabilities. Java defines its own platform (the JVM) and on that platform
it is the One True Language, so issues of portability and language
neutrality don't arise -- that's a bonus for Java developers -- but in the
real world we have to consider these things.
> As far as cross-platform is concerned, there are IDE's which can build
> components which work on more than one OS.
Yes there are ... and they don't use introspection.
> I agree that introspection can be misused in places where polymorphism
> should be designed. But misuse is never a reason to reject programming
> paradigms which have been found to be extremely useful in real-life
> programming environments.
I agree with what you say ... but not with your conclusion. People have
used introspection to write RAD tools (e.g. in Java) and I'm sure that
they found it useful ... but that doesn't mean that introspection is
necessary to write such tools, or that their designs led to the most
useful tools possible for their target audience. My experience suggests
that they might have done better to eschew introspection and to provide
their metadata some other way.
> Your qualms are well taken but you are talking very idealistically
> whereas many programmers use RAD IDEs simply because they make
> programming, and the wiring between components, much easier.
Given that C++ doesn't have introspection, and given that the main thing
you say introspection is useful for is to write RAD tools, and given that
one can write RAD tools without using introspection, and given that what
we really want is good RAD tools for C++ ... isn't introspection a bit of
a red herring here?
We're the end users in this particular case so we should take the advice
that we normally offer to the non-programmer end users of our own
applications: Tell me what you want and trust me to know how best to write
it, don't get bogged down in implementation detail.
> If one sits around and waits for the blessed, ideal IDE eternally one
> never gets anything done.
Well, that's true enough ... but if we don't tell the tool vendors what we
want they won't know what to deliver. IMO writing a RAD tool is easier
than adding a complex feature like introspection to an already complex
language like C++ (AND getting it right).
> > Introspection does not come for free (any more than does RTTI), and
> > there is a principle in C++ that you should only pay for what you use.
> > Adding introspection to the language would be a violation of that
> > principle, and as such I'm dead against it.
>
> And if one could choose to use it or not, would you still be dead set
> against it?
No, but it would still be very low on my list of priorities for extending
C++. I should not want the committee to spend any time on it while there
are other, more obviously useful, enhancements under discussion (and there
are plenty).
> Clearly C++ could add introspection to the language but also mandate
> that compilers were free not to generate the necessary data for it to
> work based on the end-users wishes at compile/link time.
It would need to be switchable, yes. Ideally it would be switchable on a
class-by-class basis within a given program -- perhaps using a new keyword
in the class definition.
> I assume that extra data generated is what your objections are when you
> say that you would be paying for its use even if you did not use it.
Yes, extra code and/or data present at runtime.
> If the "paying for it" involves slower compile/link times, so be it.
> Template metaprogramming, which is the closest C++ comes to some measure
> of compile time reflection, can incur a huge cost in compile/link time
> but I know of few C++ programmers willing to give up the libraries using
> it ( Boost etc. ) in order to just save compile/link time.
I wasn't thinking of the possibility of longer compile times, no, and I
can't imagine that implementing introspection would lead to a significant
increase here -- certainly not to the extent that TMP can. I do know
people who moan incessantly about the length of time it takes to compile
complex projects that use a lot of templates, though, so I don't agree
that it's a given that compile times aren't an issue.
> C++ has come to be fanatical in its unwillingness to consider new
> features under the banner that "programmers should never pay even the
> smallest iota for some feature they do not use", while other more
> flexible languages have moved forward to adapt language paradigms which
> help programmers achieve programming tasks more easily.
If I didn't care about efficiency I might do all my work in (for example)
Python. I'm not against language/library features or programming styles
that have some runtime overhead if they bring an increase in productivity
or code clarity or safety ... but I think programmers should have the
choice whether to use such features and accept the performance hit or to
eschew the features in the name of performance. That's not fanaticism,
that's pragmatism -- sometimes efficiency is paramount, sometimes it
isn't.
--
Daniel James | djng
Sonadata Limited | at sonadata
UK | dot co dot uk
The problem is, C++ isn't always that efficient. More precisely, the
compiler is very efficient, but the overhead in some libraries and in
memory management works against the efficiency of the compiler.
E.g. if I want to write to files in a type-safe, platform-independent
way, I have to use self-written code based on C-style functions
(fwrite) or use fstreams. The latter are really slow, and therefore
the "inefficient" languages may in this case be faster.
Andre
> C++ has come to be fanatical in its unwillingness to consider new
> features under the banner that "programmers should never pay even the
> smallest iota for some feature they do not use", while other more
> flexible languages have moved forward to adapt language paradigms which
> help programmers achieve programming tasks more easily.
>
FOFL. The difference is that C++ does not just add the latest fad. As
you pointed out earlier, if you want to use C++ for RAD, extended
versions such as C++ Builder provide the tools.
WG21 and J16 are much more conscious than most of how something that
looks useful can actually have serious hidden traps. You would think
that, for example, supporting multi-threading on a modern multi-core
system was straightforward, yet it has taken a world-class task group
five years to get near to a safe specification for that.