This release has a new port to the Mac and updated support for Wine.
Dynace is an open-source OO extension to C that gives C (or C++) a
full meta-object protocol and multiple inheritance. Dynace is designed
to solve many of the problems associated with C++ while being easier to
learn and containing more flexible object-oriented facilities. Dynace
adds facilities previously only available in languages such as
Smalltalk and CLOS, without all the overhead normally associated with
those environments.
The Dynace system also includes a GUI development system that runs under
Win32 or Wine (Linux, etc.).
Dynace runs on:
Windows
Apple Mac
Linux
FreeBSD
OpenSolaris
Blake McBride
It would be interesting to know what the differences are between
Dynace and Objective-C. If someone did not want to use C++ and instead
used C with OO extensions, why would he choose Dynace instead of
Objective-C? What does the former have to offer that the latter doesn't?
I also think that the "Dynace vs. C++" comparison is quite misleading in parts.
Skipping your bullshit about C++ creating unmaintainable code (compared
to C), I would like to note that this:
"Dynace is not an interpretive language. Dynace programs are compiled
with a standard C compiler. The majority of the code is just standard
compiled C code with no performance penalty. The only place Dynace
incurs a runtime cost is at the point of method dispatch. Since C++ also
incurs a runtime cost when using virtual functions, there is not much of
a difference between the performance of Dynace programs when compared to
C++."
is misleading. From what I can gather, in Dynace, like in Objective-C,
*all* method calls are dynamically bound, so they always incur a
penalty. You conveniently chose to compare them only to C++ virtual
functions and skipped commenting on non-virtual functions. Someone who
doesn't know C++ might get the wrong impression that in C++ all method
calls are also always dynamically bound.
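For the record, in C++ the programmer chooses per function: only virtual
functions are dispatched dynamically. A minimal illustration (hypothetical
class):

struct S {
    void f()         {}  // non-virtual: direct call, resolved at compile time
    virtual void g() {}  // virtual: dispatched through the vtable
};

void demo(S* p)
{
    p->f();  // static binding, even through a pointer; can be inlined
    p->g();  // dynamic binding: one vtable indirection per call
}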
Also this:
"Dynace comes with a complete set of fundamental classes including
classes to represent all the basic C types, a variety of container
classes (sets, dictionaries, linked lists, associations, etc.),
multi-dimensional dynamic arrays, threads, pipes and semaphores."
falsely gives the impression that C++ does not come with container
classes such as sets, dictionaries (ie. maps), linked lists,
multi-dimensional dynamic arrays, etc. This is deceptive.
It also seems to be weasel-worded about the actual costs: "C++ also
incurs a runtime cost" ... but how much, specifically?! C++'s method
dispatch mechanism, of course, is particularly simple and fast (though
of course that involves various tradeoffs).
One does get the impression that the website was written by marketing...
-Miles
--
Carefully crafted initial estimates reward you not only with
reduced computational effort, but also with understanding and
increased self-esteem. -- Numerical Recipes in C,
Chapter 9. "Root Finding and Nonlinear Sets of Equations"
Dynace - full meta-object protocol. Every object, including classes, is
just an object, treated the same way. Even base classes such as Object or
Class are just instances of other classes - like in Smalltalk & CLOS.
Objective-C - classes are largely compile-time objects that are treated
differently from instances of those objects. This is a much more
restrictive model.
Dynace - C syntax - no new syntax to learn
Objective-C - adds Smalltalk-style syntax, increasing the learning curve
while adding no additional expressiveness.
Dynace - written in standard C - very, very portable
Objective-C - in most cases the platform has to have an Objective-C compiler
Dynace supports true multiple inheritance
Objective-C has only single inheritance
>
> I also think that the "Dynace vs. C++" comparison is quite misleading in parts.
> Skipping your bullshit about C++ creating unmaintainable code (compared
> to C), I would like to note that this:
>
> "Dynace is not an interpretive language. Dynace programs are compiled
> with a standard C compiler. The majority of the code is just standard
> compiled C code with no performance penalty. The only place Dynace
> incurs a runtime cost is at the point of method dispatch. Since C++ also
> incurs a runtime cost when using virtual functions, there is not much of
> a difference between the performance of Dynace programs when compared to
> C++."
>
> is misleading. From what I can gather, in Dynace, like in Objective-C,
> *all* method calls are dynamically bound, so they always incur a
> penalty. You conveniently chose to compare them only to C++ virtual
> functions and skipped commenting on non-virtual functions. Someone who
> doesn't know C++ might get the wrong impression that in C++ all method
> calls are also always dynamically bound.
I think I was clear "WHEN USING virtual functions". If they don't
understand that then they probably don't understand the point of that
whole section.
>
> Also this:
>
> "Dynace comes with a complete set of fundamental classes including
> classes to represent all the basic C types, a variety of container
> classes (sets, dictionaries, linked lists, associations, etc.),
> multi-dimensional dynamic arrays, threads, pipes and semaphores."
>
> falsely gives the impression that C++ does not come with container
> classes such as sets, dictionaries (ie. maps), linked lists,
> multi-dimensional dynamic arrays, etc. This is deceiving.
Imply what you like; I can't account for everything others will
read into my factual statements.
Look, you seem to be hostile to Dynace. Don't use it. There is room
for all of us. In spite of your comments, Dynace does offer solutions
to some C++ issues. All languages have trade-offs. Dynace solves many
issues commonly known in the C++ world at the expense of, essentially,
causing all method calls to be virtual.
Blake McBride
In C++, if you use all virtual functions, the dispatch tables grow
geometrically. Dynace uses the same dispatching method augmented with a
method cache. You can control the tradeoff between the two. If you
grow the dispatch tables geometrically, like C++, Dynace is exactly as
fast as C++ (using virtual functions). If you fix the size of the
dispatch tables Dynace uses a method cache avoiding any dispatch table
growth but at the cost of a cache lookup.
Dynace also supports statically linked methods with no overhead, like
C++. It's just not the default. So really there is not much of a
difference.
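For readers who haven't seen one, a method cache of this kind is typically
just a small hash table keyed on the (class, selector) pair. A minimal
sketch of the general shape (illustrative names only, not Dynace's actual
implementation):

#include <cstdint>

typedef void (*Method)(void* self);

struct CacheEntry { const void* cls; int sel; Method fn; };
static CacheEntry g_cache[1024];        // fixed size: no geometric growth

// Stand-in for the real lookup that walks the class hierarchy.
static Method slow_lookup(const void* cls, int sel) { return 0; }

static Method dispatch(const void* cls, int sel)
{
    std::uintptr_t h =
        (reinterpret_cast<std::uintptr_t>(cls) ^ (unsigned)sel) & 1023u;
    CacheEntry& e = g_cache[h];
    if (e.cls == cls && e.sel == sel)
        return e.fn;                    // cache hit: one compare and a load
    Method m = slow_lookup(cls, sel);   // cache miss: full lookup, then fill
    e.cls = cls; e.sel = sel; e.fn = m;
    return m;
}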
>
> It also seems to be weasel-worded about the actual costs: "C++ also
> incurs a runtime cost" ... but how much, specifically?! C++'s method
> dispatch mechanism, of course, is particularly simple and fast (though
> of course that involves various tradeoffs).
>
> One does get the impression that the website was written by marketing
Dynace, all the code, and all the documentation were done by one person
- me. No big company, and I've never had a marketing position or class.
>
> -Miles
>
Perhaps you mean that the size of all vtables for all classes, for the
never-occurring-in-practice case of all classes being in the same single
inheritance chain, grows as the square of the number of classes.
Most classes have at most some tens of methods, so even for that case it's about
hundreds of *bytes* per class -- not per instance, but maximum per class.
A typical desktop system has between one and four billion bytes of RAM.
Cheers & hth.,
- Alf
Your description of the vtables is as I understand them. I have seen
a production system where the vtables grew to 1 MB.
It's easy to abuse any language feature, e.g. via code generation (which might
mean recursive templates).
That doesn't say anything about the language.
It's not a practical problem, rather, the opposite: something so "free" that you
should ordinarily not think about it.
If you think otherwise then someone's misinformed you, and/or you've
misunderstood something basic.
This kind of thing is easy to test, by the way.
This is probably because:
- class hierarchies are somehow monolithic (large hierarchy)
- class hierarchies abuse multiple inheritance (of abstract
classes)
- class hierarchies abuse virtual inheritance
- template classes do not share enough generic implementation
Regards,
ld.
I'm not hostile to Dynace. I'm hostile to the arguments on your
webpage badmouthing C++ for false reasons.
Exactly what do you expect when you come to a C++ group to promote
your own C extension, when your webpage basically says that "C++
sucks, C rules, and Dynace rules even more"? Your views about C vs. C++
may be shared among prejudiced C hackers, but they are not shared by me
(and many other C++ programmers, I'm sure).
> All languages have trade-offs. Dynace solves many
> issues commonly known in the C++ world at the expense of, essentially,
> causing all method calls to be virtual.
And, I assume, forcing each object to be allocated dynamically, making
the creation of objects slower and making them consume more memory.
Imagine you have something like this in C++:
class Pixel
{
unsigned char red, green, blue, alpha;
public:
// public methods here, none of which are virtual
};
Then you do something like this: std::vector<Pixel> image(10000000);
(and maybe initialize with some image data, or whatever).
In C++ that vector will consume about 40 megabytes of memory. How much
would the equivalent code in Dynace consume (using the same level of
abstraction, ie. no cheating by using a C struct as the Pixel type)?
Also, even with virtual methods, I don't see how you can have them as
fast as the ones in C++ given that, if I understood correctly, all
object pointers are completely opaque (as your webpage prominently
advertises, the entire class declaration is in the .c file rather than
in a header file, which would mean that object pointers must be
completely opaque, ie. the compiler does not see the class structure
when it makes a method call using that pointer).
In C++, since the compiler sees the entire class structure, making a
virtual function call is basically reading a function address from a
fixed offset in the virtual table, and jumping to that address. If,
however, the compiler would not see the class structure, it has no way
of knowing from the opaque pointer what this offset might be, without
doing more elaborate operations to resolve it.
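Concretely, the dispatch being described lowers to roughly the following
(an illustrative model, not any particular ABI):

typedef void (*Slot)(void* self);

struct VTableModel { Slot slots[8]; };        // one table per class, shared
struct ObjectModel { const VTableModel* vptr; /* data members follow */ };

void call_slot2(ObjectModel* obj)
{
    // obj->f() with f in slot 2: load vptr, load the slot, indirect call.
    obj->vptr->slots[2](obj);
}

With an opaque pointer the caller cannot know the slot index at compile
time, which is the extra resolution step described above.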
But these operations could be as fast as you mentioned, especially on
modern architectures. In fact, as long as you consider only single
inheritance, what you say is true: it's almost impossible to beat a C++
virtual call. But in real programs, your concrete classes will derive
from more than one abstract class (interface) to ensure better
flexibility, and multiple inheritance requires offset adjustments,
which are inherently sequential even on modern architectures:
obj->vtable->fun( obj + obj->vtable->offset )
Moreover, if you use virtual inheritance (as you should for
interfaces), more than one offset adjustment will occur. In the end,
compared to a well designed dispatcher, you will get something running
more or less at the same speed as a C++ virtual call (on modern
architectures).
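Spelled out as compilable code, the offset adjustment above looks roughly
like this (a model only; real ABIs typically use per-class thunks):

struct VTbl { void (*fn)(void* self); long offset; };  // illustrative layout
struct Obj  { const VTbl* vtable; };

void call_adjusted(Obj* obj)
{
    // The loads of vtable, offset and fn must complete before the call can
    // be made, which is why the adjustment is inherently sequential.
    obj->vtable->fn(reinterpret_cast<char*>(obj) + obj->vtable->offset);
}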
I did some measurements for COS (C Object System) vs C++ vs Objective-
C. The results are described on page 9 of
http://cos.cvs.sourceforge.net/viewvc/cos/doc/cos_draft-dls09.pdf.gz
and on page 14 of
http://cos.cvs.sourceforge.net/viewvc/cos/doc/cos_draft-oopsla09.pdf.gz
The conclusion is that dynamic dispatch (lookup) can be as fast as
vtable-based dispatch because most of the operations can be
parallelized on a modern cpu (thanks to the compiler, no special code is
required). And the special design of the COS dispatcher allows it to be
2-3x faster than Dynace's and 1.6x faster than
Objective-C's.
Now, all these remarks do not invalidate your other remark about
boxing / unboxing primitive types. It's true that it's more
complicated to implement abstractions like your Pixel class, and it
will consume more memory (3x more for your example in COS, unless you
write an Image class). So in principle, you will have to implement a
higher-level Image class to reach the same efficiency.
To conclude, there is a tradeoff in both approaches: the COS/
Objective-C/Dynace way does not allow efficient low-level abstractions
like your Pixel class, but it does allow you (in particular with COS)
to design powerful components quite hard (or impossible) to implement
in C++. It's up to the developer to choose the right tools.
a+, ld.
<snip>
> sucks, C rules, and Dynace rules even more"? Your views about C vs. C++
> may be shared among prejudiced C hackers, but they are not shared by me
> (and many other C++ programmers, I'm sure).
<snip>
Please take your prejudice against C programmers to an advocacy group
somewhere. I doubt that the attitudes and opinions of Blake are any more
representative of C programmers than they are of C++ programmers. In
fact, many of us who program in C program in many other languages as
well, so we are no more C programmers than we are Java, Perl, XSLT or
anything else programmers.
--
Flash Gordon
There is no prejudice against "C programmers" in Juha's comments.
Please don't confuse the expressions "C programmers" and "prejudiced C
hackers".
> to an advocacy group
> somewhere. I doubt that the attitudes and opinions of Blake are any more
> representative of C programmers than they are of C++ programmers. In
> fact, many of us who program in C program in many other languages as
> well, so we are no more C programmers than we are Java, Perl, XSLT or
> anything else programmers.
V
--
Please remove capital 'A's when replying by e-mail
I do not respond to top-posted replies, please don't ask
not all C programmers hold these kinds of views.
granted, I have some of my own reasons for not using C++ so much over C, but
performance is not one of them...
a simple explanation (for my case) is that C++ is much more difficult to
"tool" than C, and that the compiler output is very much more complicated
than the equivalent C-compiler output (however, this is inherent with nearly
any attempt to move beyond the core C feature-set).
granted, most devs probably don't care so much about mechanically processing
their source or mucking around at the level of assembler and machine code,
so it is not such a big deal.
>> All languages have trade-offs. Dynace solves many
>> issues commonly known in the C++ world at the expense of, essentially,
>> causing all method calls to be virtual.
>
> And, I assume, forcing each object to be allocated dynamically, making
> the creation of objects slower and making them consume more memory.
>
> Imagine you have something like this in C++:
>
> class Pixel
> {
> unsigned char red, green, blue, alpha;
>
> public:
> // public methods here, none of which are virtual
> };
>
> Then you do something like this: std::vector<Pixel> image(10000000);
> (and maybe initialize with some image data, or whatever).
>
partial solution: don't do this...
> In C++ that vector will consume about 40 megabytes of memory. How much
> would the equivalent code in Dynace consume (using the same level of
> abstraction, ie. no cheating by using a C struct as the Pixel type)?
>
can't answer for dynace, but in my framework, on x86, approx 480MB would be
used, and on x86-64, about 640MB (this ignores additional overheads, such as
linear heap inflation, ...).
so, maybe 800MB-1GB?...
there is also a risk (in my case), that doing this would cause the framework
to blow up (yeah...).
thus the answer: don't do this... (or, in my case, at least use "unmanaged
classes" and maybe an option like "__nortti"...).
as for C++:
I would have thought it would have been 80MB or more (or 160MB on x86-64),
unless one were to disable RTTI (allowing the VTable to be omitted)?...
> Also, even with virtual methods, I don't see how you can have them as
> fast as the ones in C++ given that, if I understood correctly, all
> object pointers are completely opaque (as your webpage prominently
> advertises, the entire class declaration is in the .c file rather than
> in a header file, which would mean that object pointers must be
> completely opaque, ie. the compiler does not see the class structure
> when it makes a method call using that pointer).
>
> In C++, since the compiler sees the entire class structure, making a
> virtual function call is basically reading a function address from a
> fixed offset in the virtual table, and jumping to that address. If,
> however, the compiler would not see the class structure, it has no way
> of knowing from the opaque pointer what this offset might be, without
> doing more elaborate operations to resolve it.
granted, yes...
can't say about dynace (I haven't looked into it...).
but, in my case, virtual calls are handle-based, and involve a bunch of
other internal machinery (and overheads), but thus far I have kept it
"tolerable" on benchmarks (could be optimized further later though).
on x86, some stages in the process involve "shortcuts" generally involving
automatically generated thunks.
in my framework, I split classes into 2 major camps:
managed classes;
and unmanaged classes.
managed classes are basically heap-allocated, and behave more or less like
the Java/C# model.
they are also C-accessible via APIs.
unmanaged classes are more or less hacked-over structs, and are a
simplification of the C++ model. as-is, virtual inheritance is not, and
likely may not be, supported (MI is likely to be bad enough already...).
however, they may allow generating faster code, as many of the internal
overheads can be avoided (errm... there are a few...).
likewise for accessing methods via the ASM-level ABI, where a good deal of
the machinery "could" be handled at link-time (granted, as-is, this part of
the ABI mostly just defers to the C-based API).
I may have a C++ frontend (it is "in development", along with my Java and C#
frontends), but very possibly what it would accept would only be a subset.
note that as an "arbitrary" restriction, managed and unmanaged classes may
not inherit from each other, ... (however, both may implement interfaces,
which in my case, would be transparent to the class, as "RTTI" would be used
instead, and the iface calls would be handled similarly to managed iface
calls in my case, AKA, dispatch via a big-ass hash table...).
in any case, interface calls (worst case) are about 1200ns in my case, which
I estimate as somewhere around 2940 clock-cycles on my computer (in my
Win64-based tests, it was faster on x86...).
or such...
(...)
But OOP can be done in C. Generic programming on the other hand ...
Good luck using macros for that! It's "possible", but my many
inquiries into whether there is a macros-based STL analog in the C
world turned up nothing.
Hmm, I guess that makes _you_ marketing (amongst other things)... :)
-Miles
--
Faith, n. Belief without evidence in what is told by one who speaks without
knowledge, of things without parallel.
>> Imagine you have something like this in C++:
>>
>> class Pixel
>> {
>> unsigned char red, green, blue, alpha;
>>
>> public:
>> // public methods here, none of which are virtual
>> };
>>
>> Then you do something like this: std::vector<Pixel> image(10000000);
>> (and maybe initialize with some image data, or whatever).
>>
>
> partial solution: don't do this...
>
>> In C++ that vector will consume about 40 megabytes of memory. How much
>> would the equivalent code in Dynace consume (using the same level of
>> abstraction, ie. no cheating by using a C struct as the Pixel type)?
>>
>
> can't answer for dynace, but in my framework, on x86, approx 480MB would
> be used, and on x86-64, about 640MB (this ignores additional overheads,
> such as linear heap inflation, ...).
>
> so, maybe 800MB-1GB?...
I admit I know nothing about vtables and whatever, but why would it be
necessary to store any extra data in each /instance/ of the class?
--
Bart
The problem with languages like Objective-C (and, I assume, Dynace) is
that every object must be allocated dynamically (with whatever function
the language offers for this purpose, but which is basically completely
equivalent to malloc() + initialization), and consequently each object
has at least one pointer pointing to it.
Allocating an object dynamically always has some space overhead to it
for the simple reason that the memory allocator used by the compiler has
to store some ancillary data on each allocated block of memory. For
example the C-lib memory allocator in Linux (in a 32-bit system)
requires 4-12 bytes of ancillary data per allocated block of memory (the
minimum allocation size is 16 bytes, and everything bigger than that is
aligned to an 8-byte boundary, with the size of the allocated block
being the requested size + 4 bytes).
If the object has a vtable pointer in it (in the cases where the
language needs it), that adds the size of one pointer to the object size
behind the scenes.
In this particular example (ie. the "Pixel" class consisting of 4
bytes) the object itself, when allocated dynamically, would require 16
bytes of memory in a (32-bit) Linux system, plus the pointer used to
handle it. Thus each 'Pixel' object requires 20 bytes of memory. And
this assuming the vtable pointer is not needed. If that is needed by the
language, then each object requires 24 bytes of memory.
In C++, in the optimal case (as the one I gave as example, ie. no
dynamic binding needed, a std::vector<Pixel> as data container), each
object requires only 4 bytes of memory. (And this is so even in 64-bit
systems.)
Of course lesser memory usage is not the only advantage: Allocating 10
million objects dynamically, one at a time, is very expensive, even when
using some optimized memory allocator. C++ classes allow allocating all
the 10 million objects as one single memory block (which is what the
std::vector in the example does), in other words, there is only one
allocation rather than 10 million. This is a HUGE time saver.
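The contrast is easy to show in miniature (a sketch; the second variant is
exactly what one should avoid):

#include <cstddef>
#include <vector>

struct Pixel { unsigned char red, green, blue, alpha; };

int main()
{
    // One allocation for all ten million elements:
    std::vector<Pixel> image(10000000);

    // Versus ten million separate allocations (don't do this):
    std::vector<Pixel*> slow;
    slow.reserve(10000000);
    for (int i = 0; i < 10000000; ++i)
        slow.push_back(new Pixel());
    for (std::size_t i = 0; i < slow.size(); ++i)
        delete slow[i];
    return 0;
}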
As I commented in the example, no virtual functions in the 'Pixel'
class. That means that no vtable nor vtable pointer is generated for
that class. Thus instances of that class (when in an array) require only
4 bytes of memory.
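This is easy to verify with sizeof (a minimal check; exact numbers vary by
compiler and platform):

#include <iostream>

struct PixelPlain { unsigned char r, g, b, a; };          // no virtuals
struct PixelVirt  { unsigned char r, g, b, a; virtual ~PixelVirt() {} };

int main()
{
    // Typically prints "4 16" on a 64-bit system: the virtual destructor
    // forces a hidden vtable pointer (plus padding) into every instance.
    std::cout << sizeof(PixelPlain) << " " << sizeof(PixelVirt) << std::endl;
    return 0;
}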
I like C++ because it gives you the option of not including RTTI in a
class. In many cases this can be a significant memory saving.
I think that's debatable.
You can simulate object-oriented programming in C to an extent, but
since the language has basically no support, it will inevitably be
rather "hacky" and complicated.
The gtk+ library for C is a good example of a C library which
extensively uses OO techniques. However, the resulting code is
necessarily uglier and less efficient than the equivalent C++ code would
be. (For example, when using gtk+, every single pointer cast from one
type to another, even when it's from a derived object type to a base
object type, is done dynamically at runtime, with runtime checks.)
> Generic programming on the other hand ...
>
> Good luck using macros for that! It's "possible", but my many
> inquiries into whether there is a macros-based STL analog in the C
> world turned up nothing.
There are many things doable with templates which are impossible to do
in C with precompiler macros. A very trivial example:
//--------------------------------------------------------------
template<typename T>
void foo(T value)
{
std::cout << "The value is: " << T << std::endl;
}
//--------------------------------------------------------------
A slightly more complicated example:
//--------------------------------------------------------------
template<typename T>
void foo()
{
std::cout << "The specified type is"
<< (std::numeric_limits<T>::is_integer ? "" : " not")
<< " an integral type.\nThe maximum value which can "
<< "be represented by it is: "
<< std::numeric_limits<T>::max() << std::endl;
}
//--------------------------------------------------------------
You mean << value << here?
--
bart
As I mentioned earlier, there are alternatives. In COS, your Pixel
class would take 12 bytes on 32-bit and 64-bit archs and it could use
automatic storage as well. This is even recommended for local objects
with value semantics (like Pixel).
> Of course lesser memory usage is not the only advantage: Allocating 10
> million objects dynamically, one at a time, is very expensive, even when
> using some optimized memory allocator. C++ classes allow allocating all
> the 10 million objects as one single memory block (which is what the
> std::vector in the example does), in other words, there is only one
> allocation rather than 10 million. This is a HUGE time saver.
Why do you think that alternatives cannot do the same? Or use
automatic objects as you do in C++?
a+, ld.
>
> The gtk+ library for C is a good example of a C library which
> extensively uses OO techniques. However, the resulting code is
> necessarily uglier and less efficient than the equivalent C++ code would
> be. (For example, when using gtk+, every single pointer cast from one
> type to another, even when it's from a derived object type to a base
> object type, is done dynamically at runtime, with runtime checks.)
>
"uglier" is debatable but... what's the problem with casts?
Wrong.
> The gtk+ library for C is a good example of a C library which
> extensively uses OO techniques.
This is the worst example I know. Heavy, slow, odd.
> However, the resulting code is
> necessarily uglier and less efficient than the equivalent C++ code would
> be. (For example, when using gtk+, every single pointer cast from one
> type to another, even when it's from a derived object type to a base
> object type, is done dynamically at runtime, with runtime checks.)
This is related to gtk+, not OOP in C.
> > Generic programming on the other hand ...
>
> > Good luck using macros for that! It's "possible", but my many
> > inquiries into whether there is a macros-based STL analog in the C
> > world turned up nothing.
>
> There are many things doable with templates which are impossible to do
> in C with precompiler macros.
Do you _really_ know what is doable with C macros?
> A very trivial example:
>
> //--------------------------------------------------------------
> template<typename T>
> void foo(T value)
> {
> std::cout << "The value is: " << T << std::endl;}
>
> //--------------------------------------------------------------
>
> A slightly more complicated example:
>
> //--------------------------------------------------------------
> template<typename T>
> void foo()
> {
> std::cout << "The specified type is"
> << (std::numeric_limits<T>::is_integer ? "" : " not")
> << " an integral type.\nThe maximum value which can "
> << "be represented by it is: "
> << std::numeric_limits<T>::max() << std::endl;}
>
> //--------------------------------------------------------------
Polymorphism can replace templates here and hence this can be done in
C. A C++ TMP (template metaprogramming) example would be better to
show something not possible in C at compile time. But this is not OOP.
a+, ld.
most of this overhead would be due to 2 major things:
memory allocation overhead;
object headers.
for example, we can first note that my GC allocates memory in 16-byte
chunks.
secondly, the MM/GC uses an 8-byte header.
thirdly, the C/I-OO system uses another 16-byte header on x86 (32 bytes
on x86-64).
basically, the object header holds:
2 pointers: one to the current class, and one to the current class-version;
a pointer to the payload;
a pointer to an (optional) auxiliary header (used mostly for P-OO features).
it is worth noting that these headers could be reduced some, and the payload
stored inline, but at a likely cost to performance (and, also requiring some
alteration to the current OO machinery).
note that the main reason the payload goes in its own allocation is so that
it can be reallocated as-needed (this being an issue both with dynamic
class-layout modification, as well as with Prototype-OO, which may
dynamically add more slots to an object...). note that the "class-version"
holds both the VTable, as well as the field-offset-table.
so, 8+16=24, which pads to 32-bytes.
my C/I-OO system then uses another allocation for the payload, so there goes
another 16 bytes.
the result is each object taking 48 bytes (which is a crapload bigger when
multiplied with 10000000).
it is worse on x86-64, as each object would take 64 bytes (in the current
MM/GC, its header remains fixed at 8 bytes).
this overhead is much lower (in relation) for objects which are not
trivially small...
> --
> Bart
oh, ok.
I had thought RTTI was on by default... (unless disabled by a command-line
option or otherwise...).
this would mean an object would still contain a vtable pointer, where the
vtable itself would contain a pointer to the RTTI_Info (or whatever it is
called, I forget) structure (this being emitted by the compiler).
checking online:
oh... it seems RTTI is only used in cases where one also has virtual
methods...
but, yes, no RTTI means smaller object.
my personal preference though is to just not use classes in these cases...
> (...)
> But OOP can be done in C. Generic programming on the other hand ...
Well, you can definitely create highly minimalist generic interfaces fairly
easily in C:
http://clc.pastebin.com/f52a443b1
> Good luck using macros for that! It's "possible", but my many
> inquiries into whether there is a macros-based STL analog in the C
> world turned up nothing.
Well, you can also do something crazy like:
http://h30097.www3.hp.com/cplus/6026pro_genr.html
funny:
______________________________________________________________________
#define CONCAT_RAW(mp_token1, mp_token2) \
mp_token1 ## mp_token2
#define CONCAT(mp_token1, mp_token2) \
CONCAT_RAW(mp_token1, mp_token2)
#define DECLARE_STACK(mp_name, mp_type) \
void CONCAT(mp_name, _stack_push) ( \
mp_type* const self, \
mp_type const node \
);\
mp_type \
CONCAT(mp_name, _stack_pop) ( \
mp_type* const self \
);
#define DEFINE_STACK(mp_name, mp_type, mp_pname) \
void CONCAT(mp_name, _stack_push) ( \
mp_type* const self, \
mp_type const node \
) { \
node->mp_pname = *self; \
*self = node; \
} \
mp_type \
CONCAT(mp_name, _stack_pop) ( \
mp_type* const self \
) { \
mp_type node = *self; \
if (node) *self = node->mp_pname; \
return node; \
}
#include <stdlib.h>
DECLARE_STACK(foo, struct foo*)
struct foo {
struct foo* next;
};
DEFINE_STACK(foo, struct foo*, next)
static struct foo* g_stack = NULL;
int main(void) {
foo_stack_push(&g_stack, malloc(sizeof(*g_stack)));
foo_stack_push(&g_stack, malloc(sizeof(*g_stack)));
foo_stack_push(&g_stack, malloc(sizeof(*g_stack)));
foo_stack_push(&g_stack, malloc(sizeof(*g_stack)));
free(foo_stack_pop(&g_stack));
free(foo_stack_pop(&g_stack));
free(foo_stack_pop(&g_stack));
free(foo_stack_pop(&g_stack));
return 0;
}
______________________________________________________________________
Generic type-safe intrusive stack? lol.
it depends on how one does it...
"some" options get hacky and complicated...
very often though, things just get terribly verbose...
usually the "hacky and complicated" results from people engaged in a
misguided attempt at "maximum performance", thus implementing their
whole damn object system in terms of nested structs and casting, ...
the other alternative is to force an opaque API, which can largely avoid
much of the horror, but does not have the same air of "maximum performance"
about it (typically because accessing a field involves a function call, a
switch, and maybe a few pointer-ops...).
the switch can be eliminated if one is willing to sacrifice features, or
require a separate API function for each type of field.
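a minimal sketch of the opaque-API style (illustrative names; header and
implementation shown together for brevity):

// what callers see (normally in a header): only an opaque handle.
struct Widget;
int  widget_get_x(Widget* w);
void widget_set_x(Widget* w, int x);

// what only the implementation file sees: the actual layout.
struct Widget { int x, y; };
int  widget_get_x(Widget* w)        { return w->x; }  // one call per access
void widget_set_x(Widget* w, int x) { w->x = x; }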
> The gtk+ library for C is a good example of a C library which
> extensively uses OO techniques. However, the resulting code is
> necessarily uglier and less efficient than the equivalent C++ code would
> be. (For example, when using gtk+, every single pointer cast from one
> type to another, even when it's from a derived object type to a base
> object type, is done dynamically at runtime, with runtime checks.)
>
the GTK+ library is a good example of almost pure horror...
then again, in GPL land there are much worse offenders in the land of "pure
horror"...
as for the common horror and hackiness seen when people attempt things like
this:
often, this is the result of people trying to move directly from an OOPL to
C, and just trying to (directly) force their existing practices onto C,
rather than adapting to a more "C-appropriate" approach to problems...
for example, if we take OO in the more abstract sense (AKA, in the more
'philosophical' sense promoted by 'H.S. Lahman' and friends over in
comp.object), then the problem need not turn into an ugly mess (since, hell,
there is no real reason that OOP should look anything like the approaches we
usually see in OOPL's...).
for example, see the Linux kernel, which would seem to be applying some
amount of 'OOP' as well, but has generally refrained from the obtuse
hackiness of GTK and friends, mostly because problems are abstracted and
modularized, and not because of large amounts of hacky "struct-ninjitsu"...
granted, in general the code-quality in most of open-source land is not
exactly to the highest standards...
>> Generic programming on the other hand ...
>>
>> Good luck using macros for that! It's "possible", but my many
>> inquiries into whether there is a macros-based STL analog in the C
>> world turned up nothing.
>
> There are many things doable with templates which are impossible to do
> in C with precompiler macros.
the bigger issue (in your examples) is not the lack of templates, but the
lack of iostream...
but, on the same token, I can note that there are many things one can do
in LISP macros which are impossible to do with C++ templates... (due to,
for example, LISP macros being Turing-complete and having full access to the
language...).
as well, although the C-preprocessor is limited, it does not rule out the
possibility of custom preprocessors (but, then one can debate that by the
time such a tool includes certain features, such as a full parser, it is no
longer a preprocessor, rather it is a compiler...).
>>>> class Pixel
Ok, so mostly to do with allocation then, rather than vtables and such.
My current project would use 16+16 bytes per item (320 million bytes), mainly
because each element would be a variant. I could squeeze that down to 16,
but in practice such an array would just be a linear, homogeneous list of
4-byte pixel types, total size 40 million bytes in a single allocated block.
Plus 16 bytes for the variant owner array.
That doesn't stop the pixel type/class having its own methods, although I
haven't gone too far along the oop route so not sure what other requirements
there might be.
However anything that would inflate a data structure by up to 25x (whoever
mentioned 1GB) without an easy, more efficient alternative would need
serious investigation.
--
Bart
The part about each object having at least one pointer
pointing to it is not correct at least for the way I
use the Boost Intrusive containers. The objects stored
in the containers are deleted by taking their addresses.
I agree with most of the rest of your post and advocate
using vector and deque and the Boost Intrusive
containers rather than list, (multi)set or (multi)map.
Brian Wood
Ebenezer Enterprises
www.webEbenezer.net
Me too. And for about the same reason I like the Boost Intrusive
containers as they give the option of not having a data member
that is incremented/decremented every time elements are added or
removed.
Because alternatives usually want to use a more "pure" object-oriented
paradigm, where *all* objects are always dynamically bound, and thus
*any* reference to an object can also be a reference to any other object
in the same class hierarchy (and with all methods being virtual, of course).
You can't achieve this with value semantics like the ones used so
extensively in C++ because you would end up with, among others, the
problem of slicing.
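A minimal illustration of slicing (hypothetical classes):

#include <iostream>

struct Base
{
    virtual const char* name() const { return "Base"; }
    virtual ~Base() {}
};

struct Derived : Base
{
    const char* name() const { return "Derived"; }
};

int main()
{
    Derived d;
    Base b = d;   // copied by value: the Derived part is sliced away
    Base& r = d;  // bound by reference: the dynamic type survives
    std::cout << b.name() << " " << r.name() << std::endl; // "Base Derived"
    return 0;
}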
In other words, since all objects (or, more precisely, all references
to objects, as you don't have the object itself per se, only a reference
to it) are dynamic, you can't have eg. an array of objects, only an
array of references to objects (which themselves are dynamically
allocated, one by one).
Well, it's not like alternatives couldn't do the same. It's just that
in my experience no alternative does. I don't know of any other
programming language supporting objects where they are treated with
value semantics rather than reference semantics.
You need *some* pointer to the object if you want to destroy it. If
you have dropped all pointers to it, then it's an orphan object which
you can't access anymore from anywhere (the only mechanism which can
still access it is a garbage collection engine).
Even if you use an intrusive container, you are using pointers to the
objects, and they naturally take space.
What would you use instead, a struct? In C++, that is still a class.
REH
I'm honestly a bit puzzled by that opinion.
First you learn that RTTI is not mandatory in C++, which makes
instances of the class spacewise optimal (very similar to a struct
containing the same member variables), but then you still say that you
prefer not to use a class in these cases. Why not?
The advantage of a C++ class (with no RTTI) over a C struct is that
you can have a public and a private interface, which increases
abstraction and modularity. You can even *inherit* from such a class
(and still have no RTTI overhead), which gives some design advantages.
You could achieve almost the same result by using a plain C struct and
a bunch of functions taking instances of it as parameter (as a
substitute for member functions), but in that case you are lessening the
modularity and abstraction of that construct. (C structs also lack other
beneficial properties of C++ classes/structs, such as constructors and
destructors, which make them a lot easier to use eg. in arrays, not to
talk about safety if the struct has eg. pointers to dynamically
allocated memory or such.)
A C++ class, even without RTTI, is a very powerful tool. You could
have, for example, a string class which automatically allocates and
deallocates itself (when it goes out of scope), automatically copies
itself when passed by value (using any of a number of techniques, eg.
the copy-on-write mechanism), checks for access boundaries, etc. All
this without any space overhead from RTTI. Thus the space taken by one
of these string objects can be the same as a char* (which is what you
would usually use in C).
I honestly don't understand why so many C programmers are so
prejudiced against C++. C++ is a wonderful expansion to C.
As if one data member were that much of an overhead...
Yes. Braino.
I have yet to see a clean solution.
> Do you _really_ know what is doable with C macros?
I'm pretty sure you cannot resolve the proper format string to use
with printf() to print something you got as a macro parameter
(especially if that something is eg. a struct instantiation). I'm also
pretty sure you cannot eg. resolve whether a parameter is of an integral
or a non-integral type.
> But this is not OOP.
So?
Btw, AFAIK it has been proven that the C++ template metalanguage is
Turing-complete as well... :)
You can do surprising things with template metaprogramming (all at
compile time), such as linked lists (with many operations familiar from
lisp, such as getting the first element and the list tail, etc), binary
trees, resolving whether a given integer is prime or not... I once even
wrote an ascii mandelbrot set generator using template metaprogramming,
which the compiler calculated at compile-time (the resulting program
just printed the resulting string and nothing more).
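The classic minimal example of the technique, computed entirely by the
compiler:

#include <iostream>

template<unsigned N> struct Factorial
{
    static const unsigned long value = N * Factorial<N - 1>::value;
};

template<> struct Factorial<0>
{
    static const unsigned long value = 1;
};

int main()
{
    // The recursion is unrolled at compile time; the program merely
    // prints the precomputed constant.
    std::cout << Factorial<10>::value << std::endl;  // 3628800
    return 0;
}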
right, but this does not prevent value semantics.
> You can't achieve this with value semantics like the ones used so
> extensively in C++ because you would end up with, among others, the
> problem of slicing.
In many languages, value semantics also means a monomorphic type for
well-formed classes, so slicing is generally not an issue. COS allows
more complex constructions to allow mixing value and polymorphic
semantics.
> In other words, since all objects (or, more precisely, all references
> to objects, as you don't have the object itself per se, only a reference
> to it) are dynamic, you can't have eg. an array of objects, only an
> array of references to objects (which themselves are dynamically
> allocated, one by one).
You can have an array of objects allocated in one chunk as long as all
the objects have the same type.
> Well, it's not like alternatives couldn't do the same. It's just that
> in my experience no alternative does. I don't know of any other
> programming language supporting objects where they are treated with
> value semantics rather than reference semantics.
COS mixes the two semantics: all objects are dynamically bound, but
those with value semantics can also use automatic storage. In fact,
value semantics is a requirement for automatic storage in COS because
COS does not automatically call the destructor whenever appropriate.
a+, ld.
this inflation risk is due to how the MM/GC is itself implemented.
there is a constant 6.25% linear overhead for small objects, as well as a
"chunking" overhead (the entire cell-heap is managed in terms of larger 1MB
chunks, each of which has their own headers and management structures, ...).
all of these costs add up...
so, as noted, it IS better to just allocate data of this sort as a single
larger memory object (a single 40MB object), rather than as a huge number of
tiny objects...
> --
> Bart
in C++ yes...
but, there is another overlooked possibility:
a flat linear array of bytes...
as another has noted:
we can have an "image" class, not a "pixel" class...
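for example, a minimal sketch of that idea (names are illustrative):

#include <vector>

// one flat allocation for the whole image; a "pixel" is just a computed
// offset, not an object.
class Image
{
    std::vector<unsigned char> data;  // w * h * 4 channels, a single block
    unsigned w, h;
public:
    Image(unsigned w_, unsigned h_) : data(w_ * h_ * 4), w(w_), h(h_) {}
    unsigned char& channel(unsigned x, unsigned y, unsigned c)
        { return data[(y * w + x) * 4 + c]; }
};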
> You can do surprising things with template metaprogramming (all at
> compile time), such as linked lists (with many operations familiar from
> lisp, such as getting the first element and the list tail, etc), binary
> trees, resolving whether a given integer is prime or not...
I think you've put your finger on the problem with template
metaprogramming: people tend to do surprising things with it.
Programming should not be surprising.
--
Ben Pfaff
http://benpfaff.org
Then you diminish the reusability of the "pixel" class for other
purposes than a 2D bitmap image.
Did you read the paper I mentioned?
http://cos.cvs.sourceforge.net/viewvc/cos/doc/cos_draft-dls09.pdf.gz
> > Do you _really_ know what is doable with C macros?
>
> I'm pretty sure you cannot resolve the proper format string to use
> with printf() to print something you got as a macro parameter
> (especially if that something is eg. a struct instantiation). I'm also
> pretty sure you cannot eg. resolve whether a parameter is of an integral
> or a non-integral type.
>
> > But this is not OOP.
>
> So?
this thread is about OOP in/for C in the way of Dynace, not about C++
non-OOP features like generic (generative) programming. I have never
said that C macros can do the same as C++ templates. My statement is
that they can do enough to implement clean _OOP_ in C.
a+, ld.
Sure, but there is the run time aspect as well. If an
application adds a billion elements to a container and
then goes through them and eliminates all but 200,000
and you don't need to know the size of the container,
that's a lot of incrementing and decrementing that adds
nothing of value to the application. The point is that
if you don't need to know the size, having the option
to not have a member yields a slightly better application.
I use quite a few containers so as they say, "Every little
bit helps."
Have you actually tested in practice whether that counter makes any
significant difference? There are much heavier operations involved in
creating and destroying objects dynamically than incrementing some
individual counter.
I think you are playing with words.
That was not your original statement, and you waste everyone's time by
being dishonest.
Here is the original context:
JUHA: There are many things doable with templates which are impossible
to do in C with precompiler macros.
YOU: Do you _really_ know what is doable with C macros?
it is also the "concept" of a class.
as we can see, in languages like Java and C#, classes are used purely for
heap-managed behavioral objects (and C# uses 'struct' for linear
pass-by-value objects, whereas Java lacks this concept).
the simple solution then is to not use class in this way, if anything, for
the simple reason of conceptual cleanliness...
similarly, things like pixels, ... are IMO better represented as a single
large flat array anyways, AKA: no classes or 'Vector' template, just a
single big flat glob of memory...
> The advantage of a C++ class (with no RTTI) over a C struct is that
> you can have a public and a private interface, which increases
> abstraction and modularity. You can ever *inherit* from such a class
> (and still have no RTTI overhead), which gives some design advantages.
>
or, in these cases, one can forsake even using a struct, as mentioned above,
which may have other advantages.
for one thing, a "pixel" may not be regarded as a distinct entity in the
first place, and as such, having a unique object for it, may make no real
sense.
for another thing, for many tasks the "linear flat array" approach may allow
a faster implementation (and cleaner code), mostly because the code is not
filled with bunches of individual "per-object" manipulations.
consider one is implementing something like DCT, FFT, or the DWT, does the
"class" offer any real advantage?... how about an LPC-based compressor (for
audio/video)?...
I would say no...
likewise for many types of geometric code, ...
if one writes a 3D modeller or animator, one may suddenly find that dividing
things into discrete objects is not to one's advantage, even though a
3D-modeled object may "seem" this way. given the wide variety and types of
operations performed, it is generally better to build the model from an
essentially "relational" structure (read, lots of parallel and
interconnected arrays), rather than in an object-based manner.
this greatly simplifies the implementation of operations, such as:
different view modes;
selection; deletion; translate / rotate / scale; surface subdivision;
extrusion; ...
as well as things like weighted matrix transforms / interpolation / ...
drawing the model as it is when weighted according to a set of weighted
bones and a given set of position-matrices.
...
likewise goes for many tasks in real-time physics simulation, ...
there ARE cases, IMO, where a class or struct is an ill-advised
strategy.
attempting to do these tasks using an object-based strategy is horrible, and
the code comes out scary and absurdly complicated...
even things like scene rendering are not ideal, as with a non-trivial
renderer one may soon find that an object-based approach does not scale well
(as it can lead to combinatorial complexity issues, ...).
> You could achieve almost the same result by using a plain C struct and
> a bunch of functions taking instances of it as parameter (as a
> substitute for member functions), but in that case you are lessening the
> modularity and abstraction of that construct. (C structs also lack other
> beneficial properties of C++ classes/structs, such as constructors and
> destructors, which make them a lot easier to use eg. in arrays, not to
> talk about safety if the struct has eg. pointers to dynamically
> allocated memory or such.)
>
these are more about design, not so much about language features...
as I see it though, one generally modularizes systems and operations, not
discrete objects...
(AKA: objects may be "synthetic", as in there is no singular in-memory
representation of an object, rather it exists more as a systematic
inference...).
> A C++ class, even without RTTI, is a very powerful tool. You could
> have, for example, a string class which automatically allocates and
> deallocates itself (when it goes out of scope), automatically copies
> itself when passed by value (using any of a number of techniques, eg.
> the copy-on-write mechanism), checks for access boundaries, etc. All
> this without any space overhead from RTTI. Thus the space taken by one
> of these string objects can be the same as a char* (which is what you
> would usually use in C).
>
> I honestly don't understand why so many C programmers are so
> prejudiced against C++. C++ is a wonderful expansion to C.
well, I am not opposing C++ here; rather, I am disputing that what you are
promoting doing with it here actually makes any sense...
but, it is worth noting that I typically don't individually manage strings
either...
the "object" most people regard as a string, in my case, is actually
conceptually split into several different sorts of entity ("string" is then
more a term to describe these sorts of entities, rather than a singular
concept).
for example:
string as a stream of characters;
string as an atomic/immutable datum;
...
regarding character-streams and atomic datums as distinct, rather than
insisting that both cases be handled via a "string object" actually works a
lot better in getting things done.
similarly, most "string object" implementations tend to like regarding
strings as "pass-by-reference mutable arrays", which is, as I see it, one of
the least useful strategies, vs, say:
atomic value;
character-input stream;
character-output stream.
an "output stream" object could exist, which could then be converted into a
string, which is, as I see it, best regarded as an atomic value...
I guess, LISP-style "symbols" best reflects how I usually regard strings in
this case...
I say, the "pixel" class does not need to exist in the first place.
for most typical uses of "images", as I see it, it offers no real
advantage...
similarly, how one might use a "pixel" individually will typically have no
real relation to the image it is contained within (and, in this case,
another representation, such as a float-vector, may make more sense).
for any DSP-type operations, it does not make much sense to operate on
"pixel objects", rather, most operations work on groups of pixels, typically
regarding them as parallel numerical data.
FWIW, typically we don't even need or care about discrete pixel values,
where for many tasks it may be the case that we identify a location on an
image via an ST coord or similar (namely, a floating-point coordinate), and
the value of the "pixel" at this location is synthesized via interpolation
or similar...
likewise goes for audio processing, ...
(all this is particularly true if one is structuring their tasks to be done
via the GPU and the use of shaders...)
I don't want to sound rude, but it just sounds to me that you are
basing your design on the limitations of the programming language and
then rationalizing that it's "better" that way.
I disagree with your statement. A "pixel" is a concept, and in OOP a
class is precisely what is used to describe a concept. You usually want
to abstract away the concept of "pixel" (because you don't want to eg.
fix the amount of color channels, bits-per-pixel, color channel
ordering, and so on, and instead you usually want to use an abstract
"pixel" concept where those details are hidden so that the outside code
won't depend on any single representation). There's certainly no harm in
defining a "pixel" as a class (well, not in C++ at least).
Besides, the "pixel" class was just a simple example. I'm sure that
you can think of other similar examples where humongous amounts of small
objects are needed. Things like rational or complex numbers in some
math-heavy application comes to mind as another example.
Why do you remove 70% of the context? I have myself suggested in this
thread giving more advanced TMP examples to show something not
possible in C and doable with C++ templates. Many examples exist, like
gcd, factorial or fft compile-time computation, or (advanced)
dimensional analysis. So my statement is relative to what templates
bring to OOP compared to C macros, not to all the possibilities of C++
templates. Both have pros and cons, and despite the fact that C++
templates are Turing-complete, they cannot replace the preprocessor.
And AFAIK, the printf example aforementioned relies on variadic
templates, which are not (yet) part of C++.
> JUHA: There are many things doable with templates which are impossible
> to do in C with precompiler macros.
The opposite is also true. Does it bring something? I don't think so.
So I took this statement as part of the discussion.
> YOU: Do you _really_ know what is doable with C macros?
Yes, so what? Most C/C++ programmers see the preprocessor as a
primitive tool for string substitution. My remark was just there to
check whether JUHA does too, and to see if I needed to reevaluate his
position.
If you want to contribute, be positive and less arrogant. No need for
noise here.
a+, ld.
odd...
well, then again, I have not done a whole lot with templates...
In this case, I get his point. I've done DSP programming before, and
usually you have specialized libraries of highly-optimized vector math
routines that take advantage of the DSP's special architecture. These
routines usually only operate on arrays, and you almost always need the
speed boost that they give you. In my case, using the vector library
increased the number of signals we could concurrently process by a
factor of 12.
REH
no, I would do it the same in C++...
and, C has structs, which FWIW would do the same as a class in this case.
a simple reason is this:
Pixel *p;
p->r, p->g, p->b, p->a
ok...
now, what if I want to write a filter which filters the pixels...
if the same filter is to be used on EACH component, I either have to:
A, make the filter be specialized to "pixel" values;
B, use a switch to be able to use an integer to select "which" component I
want;
C, make a specialized copy of the filter loop for each component.
or, using a class, one would potentially need a method for damn near any
kind of per-pixel filtering operation, ...
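for illustration, option B might look like this (sketch only):

struct Pixel { unsigned char r, g, b, a; };

// an integer selects the component, at the price of a switch in the
// access path of the inner loop.
unsigned char& component(Pixel& p, int which)
{
    switch (which) {
    case 0:  return p.r;
    case 1:  return p.g;
    case 2:  return p.b;
    default: return p.a;
    }
}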
> I disagree with your statement. A "pixel" is a concept, and in OOP a
> class is precisely what is used to describe a concept. You usually want
> to abstract away the concept of "pixel" (because you don't want to eg.
> fix the amount of color channels, bits-per-pixel, color channel
> ordering, and so on, and instead you usually want to use an abstract
> "pixel" concept where those details are hidden so that the outside code
> won't depend on any single representation). There's certainly no harm in
> defining a "pixel" as a class (well, not in C++ at least).
>
these are properties of the "image", not of individual pixels.
a pixel is simply a value point...
a pixel is, IMO, the wrong level of abstraction on which to use a class...
> Besides, the "pixel" class was just a simple example. I'm sure that
> you can think of other similar examples where humongous amounts of small
> objects are needed. Things like rational or complex numbers in some
> math-heavy application comes to mind as another example.
these don't really need "classes", but yes, they do need "some sort of
atomic unit" (such as a struct, or compiler built-in type).
however, this does not mean they need any of the "extended semantics"
classes offer, since for example, inheriting from a complex, ... makes
little real sense. in effect, associating a complex with a class shows an
issue of C++ (and certain mindsets of OOP), not of anything inherent in the
type itself.
well, I do know of an example:
CONS cells...
these have a bad habit of eating up ones' heap if one is not careful...
and, of the things listed, these might actually make sense as a class...
even then, not really, because the operations are typically external.
more recently, my usual implementation strategy has been to have a custom
heap for CONS cells...
I'm disagreeing with your statement that "each object
has at least one pointer pointing to it." Initially there
is a pointer pointing to the object, but then that pointer
is reused for another object. The objects are stored in
intrusive containers. So for the most part (except
initially) there aren't any pointers to the object around.
When you're ready to delete the object, you take the
address of it.
> Even if you use an intrusive container, you are using pointers to the
> objects, and they naturally take space.
Not pointers, but one pointer that points to each of the objects
for a little while.
Exactly how do you take the address of a dynamically allocated object
you don't have a pointer to? Care to show some code?
Rather obviously the intrusive container needs to store a pointer to
the object *somewhere*. It can't just drop the pointer and expect later
to be able to re-retrieve it by magic. That's just impossible.
>> Even if you use an intrusive container, you are using pointers to the
>> objects, and they naturally take space.
>
> Not pointers, but one pointer that points to each of the objects
> for a little while.
As long as the object lives, at least one pointer must point to it. If
no pointer points to the object, the object becomes unretrievable and
has thus been leaked (short of some garbage collection mechanism).
I'm not familiar with "intrusive containers", but if the object is
physically contained within the container (perhaps within an array),
the container wouldn't need to keep a pointer to it; it could
recompute the object's address at any time.
A somewhat trivial example:
#include <stddef.h>

#define MAX_COUNT 1024   /* capacity picked arbitrarily for the example */

struct array {
    size_t count;
    double data[MAX_COUNT];
};
A "struct array" object needn't maintain a pointer object that points
to the 42nd data element in order to compute its address.
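For example, given the definition above:

struct array a;
double *p = &a.data[41];   /* the 42nd element's address, recomputed
                              on demand; no pointer object stored */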
--
Keith Thompson (The_Other_Keith) ks...@mib.org <http://www.ghoti.net/~kst>
Nokia
"We must do something. This is something. Therefore, we must do this."
-- Antony Jay and Jonathan Lynn, "Yes Minister"
This is non-compiled code.

#include <boost/intrusive/list.hpp>
namespace intrusive = boost::intrusive;

// the hook (the embedded next/prev links) comes from the base class
class myClass : public intrusive::list_base_hook<> { ... };

intrusive::list<myClass> lst;

myClass* ptr = new myClass(...);
lst.push_back(*ptr);
...
ptr = new myClass(...);
lst.push_back(*ptr);
...
// erase before delete: the links live inside the object, so the
// iterator must be moved off it before the object goes away
intrusive::list<myClass>::iterator it = lst.begin();
while (it != lst.end()) {
    myClass* p = &*it;
    it = lst.erase(it);
    delete p;
}
I'm not looking at a real example and just going by
memory, but that is close I think.
> Rather obviously the intrusive container needs to store a pointer to
> the object *somewhere*. It can't just drop the pointer and expect later
> to be able to re-retrieve it by magic. That's just impossible.
>
I don't think it's possible to have something like
intrusive::list<myClass*>. That won't compile.
> >> Even if you use an intrusive container, you are using pointers to the
> >> objects, and they naturally take space.
>
> > Not pointers, but one pointer that points to each of the objects
> > for a little while.
>
> As long as the object lives, at least one pointer must point to it. If
> no pointer pointed to the object, the object becomes unretrievable and
> thus has been leaked (except by some garbage collection mechanism).
The container can still give you access to the objects, but doesn't
have pointers to them.
> You can simulate object-oriented programming in C to an extent, but
> since the language has basically no support, it will inevitably be
> rather "hacky" and complicated.
>
> The gtk+ library for C is a good example of a C library which
> extensively uses OO techniques. However, the resulting code is
> necessarily uglier and less efficient than the equivalent C++ code would
> be. (For example, when using gtk+, every single pointer cast from one
> type to another, even when it's from a derived object type to a base
> object type, is done dynamically at runtime, with runtime checks.)
It's done using macros (specifically, G_TYPE_CHECK_INSTANCE_CAST); whether
this performs a run-time check or is simply a C cast depends upon whether
the macro G_DISABLE_CAST_CHECKS is defined.
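Simplified from memory (this is not the actual GLib source, just the
shape of the pattern):

/* checked downcast unless G_DISABLE_CAST_CHECKS is defined */
#ifdef G_DISABLE_CAST_CHECKS
#  define GTK_WIDGET(obj) ((GtkWidget *) (obj))
#else
#  define GTK_WIDGET(obj) \
     ((GtkWidget *) g_type_check_instance_cast( \
         (GTypeInstance *) (obj), GTK_TYPE_WIDGET))
#endif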
Even with the checks enabled, it's more efficient than just about any real
OO language except for C++, without the penalty of C++'s slow compilation
and exceptional bug-hiding ability.
An intrusive container reduces copying by keeping the actual object
given to the container, as opposed to making a copy of the object
(i.e., ownership of the object is given to the container). I believe
(though this is just my assumption) that they are called intrusive
because the container requires that the object supply particular
members (e.g., next/previous pointers).
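Conceptually, something like this (a hand-rolled sketch, not the Boost
interface):

// the links live inside the object, so insertion copies nothing
// and allocates nothing
struct Hook {
    Hook *next;
    Hook *prev;
};

struct Widget : Hook {       // Widget supplies the hook by inheritance
    int payload;
};

struct IList {               // the container is just a head pointer
    Hook *head;
    IList() : head(0) { }
    void push_front(Hook &n) {
        n.prev = 0;
        n.next = head;
        if (head) head->prev = &n;
        head = &n;
    }
};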
REH
I would like to see the Boost Intrusive containers
added to the standard, but in cases like intrusive::list,
I'd prefer the class be renamed to something like
ilist so as to make it easy for people to know
which list is being used. Originally the Boost author
proposed names like ilist, but some reviewers asked him
to change the names to be the same as those in the STL.
I registered my opinion, but there were more people
who felt otherwise. The class names are my only
objection to the library.
I guess that last sentence isn't correct given the hooks, but
I hope the example is helpful.
No.
> There are much heavier operations involved in
> creating and destroying objects dynamically than some individual counter
> integral.
I still think it's a worthwhile feature that will get
a lot of use and may be added to STL classes eventually.
In the year 20XX: The X-GPU will have a processor per-pixel...
[...]
and probably still not regard a pixel as an "object" (in the typical OOP
sense...).
actually, this reminds me of an idea I had a while ago, mostly for the
image-transforms for a hypothetical piece of hardware, where each "pixel"
would consist of:
an origin (XYZ, local space);
a normal vector (also local space);
an RGBA value for each pixel (or maybe RGBA-IUV, or a hexachromatic system).
the naive form of the transform would run in O(n^2), but an O(n log2 n)
version could also be imagined.
basically, the naive transform would do a big-ass calculation for the entire
input to calculate each output pixel. the optimization for the latter would
be to use a BSP to prune away most of the input when calculating each
output.
now, how did I imagine this transform?...
with 'pixel objects'?...
no, rather with big flat arrays...
in this case, both the input and output would be a sort of holographic
image, but wrapped around a dynamically reshapable "manifold".
in this case, the "manifold" would be a type of flexible display (probably
OLED) and with the surface using microlensing (each lens applying to a small
cluster of pixels, allowing for a number of angles to be managed per
"virtual pixel").
similarly, the display would periodically reverse the polarity of the
pixels, so that it could capture an image as well.
it was also considered that there would be purely optical ways to measure
the flex within the display (using a variant of the fiber-optic trick used
in many VR gloves...).
however, I will not state the likely "use" for such a device...
(if anyone can figure out the use of a flexible bi-directional holographic
display which transforms its input into its output...).
The pointers are there. They are part of 'myClass'. 'myClass' must have
a 'next' and 'previous' member.
>> Rather obviously the intrusive container needs to store a pointer to
>> the object *somewhere*. It can't just drop the pointer and expect later
>> to be able to re-retrieve it by magic. That's just impossible.
That's exactly true.
I disagree. He has absolutely identified one of the
core issues. Else-thread, you express wonderment that
many C-programmers are "prejudiced against C++", and here
you ignore one of the fundamental answers to the reason
that many people prefer C to C++. It has nothing
to do with "prejudice". It has to do with simplicity.
C is simpler than C++. Often, that is the overriding
factor in selecting C over C++ for a project.
May I express my opinion that C's "simplicity" is often a hindrance
rather than an aid in programming and, rather ironically, C's
"simplicity" actually makes the programs written with it more complex
than a (well-written) equivalent C++ program would be (even though C++
is a more complex language).
While simpler languages are generally preferred over more complex
languages, IMO C has the wrong type of "simplicity". Rather than making
programming easier, it makes it harder. The "simplicity" of C doesn't
help you express things in shorter and simpler ways, like is the case
with other, truly simple languages.
IMO the complexity of the language definition and its standard is
*not* a valid measure of how well that language can be used for large
projects (or, for that matters, projects of any size). In many cases the
complexity of C++ actually makes it easier to write simple and
straightforward programs.
In particular, making this functionality available with
vector and deque is important since there are no Boost
Intrusive counterparts available for those classes.
Someone asked about intrusive containers earlier...
Just go to boost.org and look for the Intrusive
library. There's also an intrusive_ptr library, but
that's not the same thing.
this is the wrong sense of "simplicity".
for many tasks, it is more important what goes on in the compiler and
linker, than what the user has to type...
C is "simpler" primarily in that, since it has a whole lot less "magic"
going on, compilers are easier to write, the compiler output is easier to
work with, ...
more so, C's simplicity allows projects to more easily write specialized
source-processing tools, where although a C parser is a hassle, a C++ parser
is far worse...
> While simpler languages are generally preferred over more complex
> languages, IMO C has the wrong type of "simplicity". Rather than making
> programming easier, it makes it harder. The "simplicity" of C doesn't
> help you express things in shorter and simpler ways, like is the case
> with other, truly simple languages.
>
and, in many cases, why should we care?...
the simplicity that C does offer, may well be an overriding factor in a
project, especially if this project involves programmatic handling of source
code, machine code, or both...
> IMO the complexity of the language definition and its standard is
> *not* a valid measure of how well that language can be used for large
> projects (or, for that matters, projects of any size). In many cases the
> complexity of C++ actually makes it easier to write simple and
> straightforward programs.
but, at what cost?...
we can easily see above the sorts of costs C++ has for tooling and
processing...
this is while ignoring many other issues:
integration with VMs and scripting languages;
integration between code developed and compiled independently (such as
system libraries);
...
as noted, most VMs do not integrate well with C++, but they do much better
with C...
similarly, the use of many of C++'s features creates horrible binary-level
dependency issues, making C++ not really a good choice for, for example, the
public interface of a DLL...
"trivial" internal changes to a class or its implementation, ... could break
binary compatibility with code compiled against a different version of the
DLL, thus leading to far worse versioning issues than would otherwise be the
case... however the types of API-design practices usually used in DLLs
(which prohibit use of even many C features) can largely mitigate these
issues (allowing multiple versions of a DLL exporting the same interface, or
"equivalent" DLLs to be developed and maintained by different people).
take, for example, 'opengl32.dll', which, as a dll, has a number of versions
by a number of companies, yet GL is relatively free of DLL versioning
issues...
...
in many projects, these sorts of matters may well be far more important than
how nice the code looks or how many nifty features the runtime may
include...
granted, C++ may well be a good choice for frontend apps, but may not be as
ideal in many cases for systems-programming and library-backend tasks...
...
so, these being the main factors, C++ is my 3rd ranked language, after C and
ASM...
I may use C++ when I can be sure doing so will not have adverse
consequences...
(I have written a few of my libraries in C++, but it is a decided minority,
and in general I don't let it anywhere near the public API...).
or such...
Ah, but you don't have such a struct. You have the 42nd data element.
How, I don't know - if it's a copy, you can't 'delete' the 42nd data
element; if it's the element itself then the original question has to
be asked again - care to show some code? I.e. code where I have the
42nd element of data, not a copy, and not the address of it. Such that
the 42nd element of data can be deleted.
It does not make sense.
Phil
--
If GML was an infant, SGML is the bright youngster who far exceeds
expectations and made its parents too proud, but XML is the
drug-addicted gang member who had committed his first murder
before he had sex, which was rape. -- Erik Naggum (1965-2009)
I may have missed the point by using an array of double. You can't
sensibly delete an element of an array.
The context is some kind of object-oriented framework based on C, so
presumably there are types that have something like destructors. If,
rather than array of double, you have an array of some destructible
type, then you can invoke the 42nd element's destructor by computing
its address, without having to have saved that address in a pointer
object.
Now Brian Wood did say "When you're ready to delete the object" rather
than "When you're ready to destroy the object", so, again, I may well
have missed the point.
And this whole thing is probably getting a bit far afield for
comp.lang.c, especially given the cross-post to comp.lang.c++.
As stated, the question asked was probably beyond the remit of
comp.lang.anything.real.
You are talking about pretty low-level programming, as if C were just a
thin wrapper around assembly.
Most people don't need nor want to write programs at such a low level,
especially with bigger projects. This is both when programming for hobby
and professionally. It may mostly be relevant only when programming
things like device drivers and OS kernels (and certain types of programs
for embedded devices), but little else.
C does have its uses, but it's a rather niche market, really.
Efficiency in itself is not a good reason to choose C over C++, as
it's perfectly possible to achieve the same level of efficiency in the
latter, all while using a higher level of abstraction.
>> While simpler languages are generally preferred over more complex
>> languages, IMO C has the wrong type of "simplicity". Rather than making
>> programming easier, it makes it harder. The "simplicity" of C doesn't
>> help you express things in shorter and simpler ways, like is the case
>> with other, truly simple languages.
>>
>
> and, in many cases, why should we care?...
Because in most cases you want simple, clear, easy-to-understand and
efficient programs which have a minimal amount of coding conventions and
metaparadigms which exist solely because of the limitations of the
programming language. (Coding conventions in general are ok, of course,
but if a convention exists solely to get around a limitation of the
language and nothing else, then it's only a hindrance and makes it
harder to learn how the program works and how it should be developed
further.)
Granted, C++ might not be the *best* language out there in order to
achieve this goal, but IMO it at least offers much better tools to get
closer to that goal than C does.
> the simplicity that C does offer, may well be an overriding factor in a
> project, especially if this project involves programmatic handling of source
> code, machine code, or both...
You'll have to admit that those types of projects are quite rare. If
you personally are involved in lots of such projects, then of course you
should choose the best tools for that purpose. However, you shouldn't
assume that all projects are like that nor that C would be the best tool
for those.
> in many projects, these sorts of matters may well be far more important than
> how nice the code looks or how many nifty features the runtime may
> include...
I have never been involved in such projects (even though I have been
writing C++ professionally for a decade). C is just not the tool for me.
> I did some measurements for COS (C Object System) vs C++ vs Objective-
> C. The results are described in the following papers in page 9 for
>
> http://cos.cvs.sourceforge.net/viewvc/cos/doc/cos_draft-dls09.pdf.gz
>
> and in page 14 for
>
> http://cos.cvs.sourceforge.net/viewvc/cos/doc/cos_draft-oopsla09.pdf.gz
most folks have decent bandwidth these days, so online papers should
just be PDF and not zipped - gzip or otherwise.
It is nice to see that Dynace is being upgraded and maintained.
< not sure why you cross-posted to comp.lang.c++ >
or when trying to work through deep-seated problems that exist within
computing...
one has to go down fairly far before they can effectively build up,
otherwise, one ends up with the tall monolithic towers of crap many VMs and
HLLs are known for...
> C does have its uses, but it's a rather niche market, really.
>
> Efficiency in itself is not a good reason to choose C over C++, as
> it's perfectly possible to achieve the same level of efficiency in the
> latter, all while using a higher level of abstraction.
>
whoever says I am talking about performance?...
going down to the level of C and ASM allows one to very effectively tap the
well of "turingness" and "von-neumanness" which exists at the lower levels,
but which has to a large degree faded by the time one gets to C and C++.
many VMs (and apps) then attempt to build a new layer of "turingness" over
the already faded backdrop, rather than going right to the source:
the CPU...
and then building on the same foundations as the rest of the technologies we
have come to take for granted...
>>> While simpler languages are generally preferred over more complex
>>> languages, IMO C has the wrong type of "simplicity". Rather than making
>>> programming easier, it makes it harder. The "simplicity" of C doesn't
>>> help you express things in shorter and simpler ways, like is the case
>>> with other, truly simple languages.
>>>
>>
>> and, in many cases, why should we care?...
>
> Because in most cases you want simple, clear, easy-to-understand and
> efficient programs which have a minimal amount of coding conventions and
> metaparadigms which exist solely because of the limitations of the
> programming language. (Coding conventions in general are ok, of course,
> but if a convention exists solely to get around a limitation of the
> language and nothing else, then it's only a hindrance and makes it
> harder to learn how the program works and how it should be developed
> further.)
>
why should we avoid having and following conventions?...
anyways, I typically restrict my conventions far more than what is allowed
by the language...
good rules make good code...
(much like, as they say, "good fences make good neighbors"...).
> Granted, C++ might not be the *best* language out there in order to
> achieve this goal, but IMO it at least offers much better tools to get
> closer to that goal than C does.
>
why is there this goal?...
AFAICT, the main advantage of programming languages is to reduce the amount
of typing and to provide some semblance of error checking. however, these
are not mandatory, and if a better way can be found, this may well be worth
trying...
>> the simplicity that C does offer, may well be an overriding factor in a
>> project, especially if this project involves programmatic handling of
>> source
>> code, machine code, or both...
>
> You'll have to admit that those types of projects are quite rare. If
> you personally are involved in lots of such projects, then of course you
> should choose the best tools for that purpose. However, you shouldn't
> assume that all projects are like that nor that C would be the best tool
> for those.
>
yes, "those" people can just go and use Java...
>> in many projects, these sorts of matters may well be far more important
>> than
>> how nice the code looks or how many nifty features the runtime may
>> include...
>
> I have never been involved in such projects (even though I have been
> writing C++ professionally for a decade). C is just not the tool for me.
I do a whole lot of things like compilation and custom code generation at
runtime...
there are many things which can be done very well at low levels of
abstraction, but which can only be crudely and inefficiently faked at higher
levels.
consider you were told you could no longer directly write in a language for
native code, but could only target a poorly written interpreter, itself
written in native code?...
capabilities are poor, and performance is poor, whereas operating at a level
at or below that of C, one can operate at the "full power" available to a
language like C, and integrate with C "on equal terms", meanwhile still
gaining new capabilities...
for example, the JVM and .NET try to compete with C and C++ and native
code...
do they do this by using pure interpreters written in C or C++?...
no, both frameworks derive much of their power from directly targeting the
HW (AKA: Just-In-Time compilation, or JIT).
however, IMO, neither has gone far enough, as neither has really "primed the
well to the watertable of power...".
or such...
not that it helps much anyway, as most PDFs internally use deflate...
it is much like the RAR files within RAR files, containing a self-extracting
EXE, ... which one sometimes finds online...
> I disagree. He has absolutely identified one of the core issues.
> Else-thread, you express wonderment that many C-programmers are
> "prejudiced against C++", and here you ignore one of the fundamental
> answers to the reason that many people prefer C to C++. It has
> nothing to do with "prejudice". It has to do with simplicity. C is
> simpler than C++.
That's certainly true, but it may not be as relevant as you suggest.
The _library implementors_ surely need to be well acquainted with C++,
and writing good C++ template libraries can be complicated -- but for
the _average programmer_ writing an app, C++ can be very simple indeed,
maybe even _simpler_ to use than C because the library authors' hard
work gives them more robust abstractions that allow better error
checking and more concise code.
Moreover, C++ has the nice property that you can use practically any
subset you want, down to the point where it becomes simply C.
-Miles
--
Youth, n. The Period of Possibility, when Archimedes finds a fulcrum,
Cassandra has a following and seven cities compete for the honor of endowing a
living Homer.
like Mr. Juha there, you keep thinking of the "wrong" simplicity...
it is much the same as saying that English is simpler than programming...
after all, humans can much more easily understand English than most
programming languages.
yet, it does not take long to discover that this "simplicity" of English is
of almost no use to machines.
there is simplicity as in "easier", and there is simplicity as in "less
complexity" (AKA: minimalism).
be careful not to confuse them...
the advantages and disadvantages of each are not to be confused, nor are the
reasons for choosing one over another.
some of us may prefer the simplicity of notepad and command shells as
well...
There are many different scenarios, but I suspect for many of them, I'm
thinking of the "right" simplicity.
I trust my instincts far more than I trust yours, of course.
-Miles
--
Insurrection, n. An unsuccessful revolution.
this is a world of polynomial complexities, where one thing is abstracted at
the cost of another.
often one needs to go down well before they can go back up...
as can be said, the path to destruction is wide and it is easy...
and, not everything is just as it may seem...
much as a fly becomes trapped in a jar by flying towards the light.
but, yes, for "most" apps, the difference between C and C++ will not so much
matter, but for some apps, it does matter, and the choice of language may be
dictated by the standing situation.
as can be noted, there are different situations and different tradeoffs, and
what may be correct in one place need not be correct in another.
or such...
Exactly why should the average programmer care how difficult it is to
write a compiler for a certain programming language? That's the headache
of the compiler writers, not the average programmer. The compiler exists
to assist the programmer and make his life easier, not the other way around.
There may be certain situations where the complexity of the compiler
and the machine code it creates can be a burden, but you'll have to
admit that's a really small niche market. To the majority of programmers
that's completely irrelevant.
This point of view abstracts away from how things are in the real world.
Ideally yes, compilers are bug free.
Practically no, compilers are NOT bug free.
Look at the thread "Books for advanced C++ debugging". There, I ask for
literature about debugging C++ code bases. The unanimous answer was that
there isn't any, actually. A deep search both in Google and in Amazon.com
yields only beginners' books.
So, you are on your own. You can be maintaining code that has worked for
years in different environments and suddenly breaks. Why? Because some
random compiler optimization decision decided that code that assumed
that you could treat two 32 bit pointers as a single 64 bit number
is no longer supported.
The pattern of failure is completely random, and only happens in a
special circumstance in only one version: 32 bit linux.
User code in complex languages is EXTREMELY fragile/brittle.
Of course you can tell me that I should compile with several compilers
to get a handle on the problem, which I did. I compiled with Open64, AMD's
new compiler, and it crashed... I filed a bug report with them.
I compiled too with the Intel C++ compiler and it crashed. Yes, I filed
a bug report. And those are compilers supported by huge companies with
dozens of man-years of work behind them.
Then, I realized that we were stuck with gcc forever. Of course gcc is
not the best but it is the only one. There is NO other compiler for
C++ under a widely used operating system like Linux!
This means that C++ has grown so incredibly complex that you are lucky
if you find a compiler that compiles your code without crashing, not to
mention one that compiles your code without generating rubbish.
Ours is a very complex C++ code base, full of templates, overloaded
functions and what have you... We even use the STL sometimes :-)
And we are lucky that a compiler exists for Linux. If not we would have
to drop that platform.
Then we have the problem that most of the C++ compilers we use never
implement all of the huge language but a fairly big subset of it. We
have to avoid using parts of the language that aren't universally
implemented.
Etc etc... You know these problems as well as I do but you ignore them,
because in most "language" discussions there is no rational discussion
but an emotional throwing of "facts" from one side to the other.
That is why I try to avoid them. I hope you will answer in a rational
way and not interpret this as a personal attack.
Thanks in advance for your attention.
jacob
The problem here is that it never was supported. Hardly a language
problem.
Bo Persson
> fft1976 wrote:
> > But OOP can be done in C.
>
> I think that's debatable.
Only by someone who thinks OOP means "OOP as done in C++, or other
well-known OOP languages". FYI, OOP is older than OOP languages, and in
fact much of what OOP formalises was considered good practice well
before someone coined the term.
> The gtk+ library for C is a good example of a C library
GTK+ is mainly a good example of a badly designed library.
Richard
> William Pursell wrote:
> > It has to do with simplicity.
> > C is simpler than C++. Often, that is the overriding
> > factor in selecting C over C++ for a project.
>
> May I express my opinion that C's "simplicity" is often a hindrance
> rather than an aid in programming and, rather ironically, C's
> "simplicity" actually makes the programs written with it more complex
> than a (well-written) equivalent C++ program would be (even though C++
> is a more complex language).
You can, but you cannot do that _and_ talk about "prejudiced C
programmers" in the same thread.
Richard
Are you saying that I'm just being prejudiced when I say that C has
the wrong kind of "simplicity" which causes programs written in C to be
overly complicated and verbose?
Well, if that's what you are saying, that's your prerogative. I trust
my own experience more.
Putting your post (your whole post, not just the quote above; I just
chose some representative paragraph to represent your whole post so that
the quoted part wouldn't be excessively large) in the context of this
thread:
I don't think you are seriously suggesting that I drop over a decade
of experience in programming in C++, both for hobby and professionally,
having participated in a multitude of projects and basically never
encountered any of the problems you mention, and switch back to C, just
because C compilers are easier to create and thus ostensibly more bug-free?
I'm sorry, but "some C++ compilers out there may be buggy" is
certainly not a very convincing argument for me to switch from C++ to C.
Yes, I know that was not your point, but as said, I'm just putting your
post in the context of the thread.
If someone really wants to use C, then by all means go ahead and make
your life harder than it has to be. That's just not for me. I'd even
prefer Java (heaven forbid) over C anytime.
I apologize for the rant.
Nope. By someone who thinks that yes, OOP can be done even in
assembly... if you stretch the definition enough.
I still prefer doing OOP in a language which has at least *some*
native support for it.
> William Pursell <bill.p...@gmail.com> writes:
>
> > It has nothing to do with "prejudice". It has to do with simplicity.
> > C is simpler than C++.
>
> That's certainly true, but it may not be as relevant as you suggest.
>
> The _library implementors_ surely need to be well acquainted with C++,
> and writing good C++ template libraries can be complicated -- but for
> the _average programmer_ writing an app, C++ can be very simple indeed,
> maybe even _simpler_ to use than C because the library authors' hard
> work gives them more robust abstractions that allow better error
> checking and more concise code.
That's only true if you never read anyone else's code, which uses the
80% you haven't.
Richard
> >> yet, it does not take long to discover that this "simplicity" of English is
> >> of almost no use to machines.
>
> > Exactly why should the average programmer care how difficult it is to
> > write a compiler for a certain programming language? That's the headache
> > of the compiler writers, not the average programmer. The compiler exists
> > to assist the programmer and make his life easier, not the other way around.
>
> > There may be certain situations where the complexity of the compiler
> > and the machine code it creates can be a burden, but you'll have to
> > admit that's a really small niche market. To the majority of programmers
> > that's completely irrelevant.
>
> This point of view abstracts from the reality as it is in the real world.
>
> Ideally yes, compilers are bug free.
>
> Practically no, compilers are NOT bug free.
>
> Look at the thread "Books for advanced C++ debugging". There, I ask for
> literature about debugging C++ code bases. The unanimous answer was that
> there isn't any actually. A deep search both in google and in Amazon.com
> yields only beginners books.
I've never seen a book entitled "Advanced <any language> Debugging".
Or reasonable variation thereof. That's not to say such things don't
exist but I do own a fair number of programming books and I've read
the back of a lot more. I have seen Windows Debugging books so they
may help you. Generally speaking debugging books are thin on the
ground.
Maybe because it varies so much (language/platform/tools).
<snip>
The irony of that statement is that the problem you had in the other
thread was purely a C problem; nothing in your problem was C++
related.
/Peter
too true. Too scarily true. You have to know most of the language,
including the pitfalls, in order to do maintenance on a large C++
code base. I might like to confine my usage of templates to just the
STL (at least to start with) but I don't have this choice if my
predecessor was crazy about templates.
With C I can get by with K&R. With C++ I need Stroustrup (or
equivalent), Josuttis and about three of the "more effective" books.
I pray no-one lays their hands on Alexandrescu or finds Boost.
At least you *can* know the entire C++ language because it has been
standardized and tons of books have been written about it.
In a large C project there will inevitably be many coding conventions
and metaparadigms which exist for the sole reason of getting around the
limitations of the C programming language (eg. related to memory
management and safety, or to object-oriented programming), and which in
many cases would be unnecessary in C++ (if the C++ project has been well
designed).
Thus when you start reading C code written by someone else, you will
have to guess which coding conventions he was using (how many people
document their coding conventions, even in larger projects?), or else
you won't understand half of what's going on.
In the worst cases there will be "clever" tricks all over the place,
eg. related to using preprocessor macros to generate code, using opaque
pointers to simulate modularity (and this "opaqueness" can extend to the
person reading the code trying to understand what's going on), etc. In
many large C projects there will be a kind of "metalanguage" built on
top of C in order to get around the limitations of the language. In
order to understand the program you will need to understand this
metalanguage. This might not always be very easy because of how this
"metalanguage" has been constructed (ie. by abusing obscure preprocessor
macros, etc).
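A classic instance of such a "metalanguage" device is the X-macro
trick (a sketch, names invented):

// one list macro expands into both an enum and a matching name
// table, keeping the two in sync by preprocessor force
#define COLOR_LIST(X) \
    X(RED)   \
    X(GREEN) \
    X(BLUE)

#define AS_ENUM(name)   COLOR_##name,
#define AS_STRING(name) #name,

enum Color { COLOR_LIST(AS_ENUM) COLOR_COUNT };
static const char *color_names[] = { COLOR_LIST(AS_STRING) };

Readable once you know the trick, but it is exactly the kind of
per-project notation an outsider first has to decode.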
At the other, worse extreme, there will be no coding conventions, no
attempt at modularity, no "metalanguage", but just straightforward
spaghetti code which will be next to incomprehensible to anyone else
than the writer himself (and even to him for just a few months).
It's not like large C++ projects would not have coding conventions and
"metaparadigms", but the language helps keeping their amount at minimum,
especially when dealing with basic things like modularity, memory
management and safety. Thus there's less "extra stuff" to learn on a
per-project basis when you start studying some (well-written) C++ program.
As an example, if I need inside a function some fast ordered data
container for a few thousands of elements (and the container must be
local eg. because the function must be thread-safe), in C there will be
approximately as many solutions as there are programmers, each one
creating their own special data container written in C, and each one
using their own personal coding conventions in order to avoid memory
leaks and other memory-related problems. An outsider reading the code
will have to acquaint himself with that person's data structure and how
that person uses it.
In C++, however, I would simply use a std::set or std::map in a
completely straightforward and simple way, with nothing special. Anybody
who knows how to use those data containers will be able to easily
understand the code. There are no hidden coding conventions or
metaparadigms in order to get around limitations in the language. It's
just plain and straightforward code, easy to understand, using a data
container which any experienced C++ programmer will be familiar with. No
such luck in C.
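For instance (a minimal sketch):

#include <set>
#include <cstddef>

// a plain, local, ordered container; thread-safe simply because it
// lives on this function's stack (names invented for the example)
bool has_duplicates(const int *values, size_t n)
{
    std::set<int> seen;
    for (size_t i = 0; i < n; ++i)
        if (!seen.insert(values[i]).second)  // false => already there
            return true;
    return false;
}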
Thus the "simplicity" of C is mostly an illusion. This "simplicity"
only makes the code more complicated. C offers no tools which everyone
would be familiar with.
in "theory" yes, in practice no...
compiler writing is not a task which can be completely avoided.
after all, "someone" has to write and maintain the compiler.
no, maybe not "you", but none the less there are people who do so, and these
issues matter to "them"...
especially considering that the C++ compiler doesn't do a whole lot of
effort-saving things (for example, automatically writing headers...). it does
not take too much thinking that an auto-header tool which works for
general-purpose C++ is FAR more effort than an auto-header tool which works
for C.
likewise goes for many other tasks which may involve processing source code:
building a metadata database (for example, for, info describing every
visible declaration, and the layout of every struct); ...
it would be nice if the compilers did every task we might want them to, but
in reality, they don't...
this may be a deciding factor for at least SOME efforts...
> There may be certain situations where the complexity of the compiler
> and the machine code it creates can be a burden, but you'll have to
> admit that's a really small niche market. To the majority of programmers
> that's completely irrelevant.
not all programmers, and not all cases, but none the less there are cases
where all this does matter...
for example, there are more than a few programmers who write code that goes
in DLL's...
(and for many programmers, a vast majority of their code may end up in
DLL's, with the frontend being a relatively small piece of machinery...).
in many cases, it may well just be that the simpler option is much less
prone to error (consider, back to DLLs and versioning issues...).
if we have C++ across a DLL boundary, the versioning issues are going to be
terrible, as nearly any change to the library, or to the compiler, risks
breaking binary compatibility (as we find, for example, that versions 5.0 and
7.1 of "some compiler" produce code with slightly different name-mangling
rules...). or, even worse, that the DLL writer and client used different
compilers, facing themselves with a horrible issue:
almost no 2 C++ compilers can exactly agree upon the ABI...
MSVC C++ DLL + MinGW C++ app... no, the code will break...
and this is still ignoring changes to the library, where the implementor
might decide, of all things, to add a few methods to a class (this being a
problem as it will often change the in-memory layout of the vtable, ...).
none the less, there is a simple option, and a C++ API can still be
provided.
how? because in this case, the C++ portion of the API exists purely on the
client end.
so, the API itself is C-based, but on the client, there is a C++ header,
which wraps the API calls...
similarly, the library "can" be in C++, but provide an API via 'extern
"C"'...
now, if your argument WERE valid, there would be little reason to use
'extern "C"' on the API; none the less, this much is almost mandatory to
avoid the problems of fragile code...
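the general shape, as a sketch (all names made up):

// C ABI across the DLL boundary; C++ sugar purely on the client side
extern "C" {
    typedef struct FooCtx FooCtx;        // opaque handle
    FooCtx *foo_create(void);
    int     foo_run(FooCtx *ctx, int arg);
    void    foo_destroy(FooCtx *ctx);
}

// client-side C++ wrapper header; never crosses the DLL boundary
class Foo {
    FooCtx *ctx;
public:
    Foo() : ctx(foo_create()) { }
    ~Foo() { foo_destroy(ctx); }
    int run(int arg) { return foo_run(ctx, arg); }
};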
actually, avoiding a fragile API requires far more restrictions than this...
in my case, if it is any consolation, I am NOT advocating dropping C++,
only that there ARE cases where using it may not be the best possible
option...
as noted, I do include C++ in my list of used languages.
it is not the top language, but it is there...
and the main reason:
as noted, much of my code is very "low level", and also the majority of it
goes in DLL's...
I have just not really done much of the sort of coding where the features it
offers would outweigh its likely costs in these situations.
however, I do have several libraries which are, internally, written in C++,
but none the less, export a C-based API...
and, as it so happens, the client of this library, is written in C...
it makes sense to keep these sort of options open, so that one is free to
choose "the best tool for the job"...
C is standardized as well...
> In a large C project there will inevitably be many coding conventions
> and metaparadigms which exist for the sole reason of getting around the
> limitations of the C programming language (eg. related to memory
> management and safety, or to object-oriented programming), and which in
> many cases would be unnecessary in C++ (if the C++ project has been well
> designed).
>
C++ has essentially most of the same MM issues as C...
> Thus when you start reading C code written by someone else, you will
> have to guess which coding conventions he was using (how many people
> document their coding conventions, even in larger projects?), or else
> you won't understand half of what's going on.
>
in "most" projects, one can simply skim over the code and utilize the powers
of intuition to have an understanding of most of the codebase (doesn't scale
well, for example, GCC remains as still a terrible and incomprehensible
beast, especially those parts written in C++...).
after all, people can understand other people's English well enough, why not
code?...
after all, the complexity of English is typically far higher than that of
C...
> In the worst cases there will be "clever" tricks all over the place,
> eg. related to using preprocessor macros to generate code, using opaque
> pointers to simulate modularity (and this "opaqueness" can extend to the
> person reading the code trying to understand what's going on), etc. In
> many large C projects there will be a kind of "metalanguage" built on
> top of C in order to get around the limitations of the language. In
> order to understand the program you will need to understand this
> metalanguage. This might not always be very easy because of how this
> "metalanguage" has been constructed (ie. by abusing obscure preprocessor
> macros, etc).
>
granted, "clever tricks" are evil...
IMO, it is a good idea to specify and document nearly every relevant API,
and to structure the project in terms of a number of interconnecting
API's...
one can then write their code according to these API's...
programmers can then be reprimanded for any "unjustified" variation from
these API's and conventions. non-obvious macros should also be avoided in
most cases, and, if used, macros should always have their names in all caps,
...
so, it is about rules and conventions.
much like how we also have things like morality and laws...
after all, there are rules governing things like morals, and should a person
violate them, then we can regard them as depraved... (I will not be
specific, not intending to get into an argument over morals right now).
but, yeah, this is the general sense of the matter, similar sorts of
issues...
> On the other, worse extreme, there will be no coding conventions, no
> attempt at modularity, no "metalanguage", but just straightforward
> spaghetti code which will be next to incomprehensible to anyone else
> than the writer himself (and even to him for just a few months).
>
yes, granted, this is worse...
> It's not like large C++ projects would not have coding conventions and
> "metaparadigms", but the language helps keeping their amount at minimum,
> especially when dealing with basic things like modularity, memory
> management and safety. Thus there's less "extra stuff" to learn on a
> per-project basis when you start studying some (well-written) C++ program.
>
to be "well written" itself requires adherence to rules, otherwise nothing
stops C++ code from being just as horrid, or worse, than most C code...
the existence of rules is not the violation, and C++ is not a validation for
coding anarchy...
> As an example, if I need inside a function some fast ordered data
> container for a few thousands of elements (and the container must be
> local eg. because the function must be thread-safe), in C there will be
> approximately as many solutions as there are programmers, each one
> creating their own special data container written in C, and each one
> using their own personal coding conventions in order to avoid memory
> leaks and other memory-related problems. An outsider reading the code
> will have to acquaint himself with that person's data structure and how
> it's used and how that person uses it.
>
that, or, one sticks with simple options (such as flat arrays), and then
formally documents their decision (off in a text file somewhere).
nevermind that one may soon find that their project has 10s of MB of such
documentation...
> In C++, however, I would simply use a std::set or std::map in a
> completely straightforward and simple way, with nothing special. Anybody
> who knows how to use those data containers will be able to easily
> understand the code. There are no hidden coding conventions or
> metaparadigms in order to get around limitations in the language. It's
> just plain and straightforward code, easy to understand, using a data
> container which any experienced C++ programmer will be familiar with. No
> such luck in C.
>
it is not nearly so bad as portrayed...
most of us have better things to do than get caught up in matters of
container management...
> Thus the "simplicity" of C is mostly an illusion. This "simplicity"
> only makes the code more complicated. C offers no tools which everyone
> would be familiar with.
no, the simplicity is not an illusion, just not the "simplicity" you want it
to be...
the simplicity of C:
a small and finite number of syntactic forms;
well defined rules for how these forms are parsed, what various
constructions and functions do, ...
what happens beyond this, well this is an entirely different matter...
after all, it is the same sort of simplicity which would lead MS to make a
version of Windows with nothing but the CMD shell and Notepad...
after all, you want to do something?
well, there are shell commands for it...
DOS is a simpler OS than Windows...
similarly, MinGW+makefiles is simpler than using Visual Studio and its
complicated functionality, ...