It's been proven in the field that C can be used to do object oriented
programming (e.g. by the GTK+ library).
But can C be used to follow a functional programming paradigm?
Cheers,
PB
Up to a point. Higher-order functions can be supported by using
function pointers, and I think it's possible to hack something that
acts like a closure. It's not the most natural of fits, however.
(bleh... GTK+ is horrid...).
maybe one can define OO more in terms of the philosophy, rather than in
terms of faking class/instance via structs and pointer casting, and this
would be better...
there are some ways to do OO in C which are a bit cleaner, even if they have
a little less directly in common with mainstream "OOP" languages.
> But can C be used to follow a functional programming paradigm?
>
partly...
some aspects of FP style map over fairly well.
taking this too far may risk a performance hit, since some compilers turn
out to be poorly adapted to optimizing this sort of code.
functional languages also have some features (such as lexical scoping and
closures) which are difficult to approximate well in standard C.
of note, GCC has both some compiler optimizations as well as some extensions
which may be helpful in using a functional style.
> Cheers,
> PB
Do you mean strictly conforming ISO C, or with some liberties taken to do
things like garbage collection (which is highly feasible in practice)?
You can write some C without assignments, passing around scalars
and even tuples, in the form of aggregates, by value.
The paradigm doesn't extend well to dynamically allocated objects
with reference semantics, where you run into the storage reclamation
problem.
I make use of the functional paradigm in this program:
http://savannah.nongnu.org/projects/txr
But in creating the special de-facto dialect of C for this program,
there are techniques used that are outside of the ISO C standard,
like scanning the stack for root references to reachable objects,
and encoding tags in spare bits in a pointer.
I.e. C /compilers/ can give us a functional paradigm, which is
a reasonable facsimile for C giving us one.
This is one of those things where if you try to doggedly adhere to
strictly conforming C, you will simply not get anywhere.
An example, from the above, of a little functional computation.
This maximizes the value of robust_length(cdr(e)) over elements
e of the input list bind_cp, where zero is injected as an initial
element into the fold:
/* match.c, line 760-something */
val max_depth = reduce_left(func_n2(max2),
                            bind_cp, zero,
                            chain(list(func_n1(cdr),
                                       func_n1(robust_length),
                                       nao)));
In C you end up implementing OO by emulating the machinery of vtables,
inheritance, polymorphism, etc. with coding conventions and invented
headers and libraries. Similarly you can implement functional
programming machinery. A lispish framework is a reasonable place to
start: garbage collected strings, numbers, and tuples. The rest
follows naturally. For a readable example, see
http://www.hs-augsburg.de/~hun/www.t3x.org/bits/s9fes/. Though this
is a full Scheme implementation and not highly optimized, it's rife
with wonderful ideas. You can forget the Scheme parser and program
lisp-like in C to the framework that's left. For example, the author
does this to implement the bootstrap library of the core interpreter,
providing bignums and bigfloats for good measure.
> It's been proven in the field that C can be used to do object oriented
> programming (e.g. by the GTK+ library).
as is X-Window and Win32
It can do neither, for functional and OO programming refer to the use
of notations based on awareness of functional and OO programming. The
only way that C can "do" functional and OO programming is the way in
which a Turing machine simulates Windows.
It's not the best that C can do...
> But can C be used to follow a functional programming paradigm?
Yes, I use it every day. You may be interested by
http://cos.cvs.sourceforge.net/viewvc/cos/doc/cos-draft-dls09.pdf
The paragraph on Closures at the end could be simplified:
Counter example:
// create the closure
OBJ cnt = gaddTo(aCounter(seed), __1);   // __1 is a placeholder

// evaluate the closure, i.e. geval == Lisp funcall
for (int i = 0; i < 25000000; i++)
  geval(cnt, aInt(2));                   // __1 is matching aInt(2)
Derivative example:
OBJ gradient(OBJ f)
{
  OBJ f_x   = geval(f, __1);
  OBJ f_xdx = geval(f, gadd(__1, __2));
  return gdiv(gsub(f_xdx, f_x), __2);
}

OBJ derivative(OBJ f, OBJ dx)
{
  return geval(gradient(f), __1, dx);
}
cheers,
ld.
>
> (bleh... GTK+ is horrid...).
>
>
> maybe one can define OO more in terms of the philosophy, rather than
> in terms of faking class/instance via structs and pointer casting,
> and this would be better...
>
> there are some ways to do OO in C which are a bit cleaner, even if
> they have a little less directly in common with mainstream "OOP"
> languages.
>
Why do you think Gtk+ is horrid? Please provide some example of "a bit
cleaner" :)
> Why do you think Gtk+ is horrid? Please provide some example of "a bit
> cleaner" :)
You've used it, yeah? The maze of macros to do type casting and
inheritance makes dealing with GTK one of the more miserable aspects of
my life.
B.
Ok, I thought he was referring to GObject in general but I see your
point.
What do you mean?
--
I was trying to get a reaction, but have you looked at Win32? You
register window classes (they call 'em classes) then instantiate
instances of the class. A windows dispatch function handles messages
sent to the windows class instance (object). Sounds pretty OO to me. X-
Window does similar stuff.
> I was trying to get a reaction, but have you looked at Win32? You
> register window classes (they call 'em classes) then instantiate
> instances of the class. A windows dispatch function handles messages
> sent to the windows class instance (object). Sounds pretty OO to me. X-
> Window does similar stuff.
Xlib doesn't do any OO; that's left to GUI toolkits (Xt, GTK, Qt).
OTOH, Win32 window classes aren't particularly OO, e.g. there's no
inheritance.
yep.
dunno about GObject "in general" (not much experience with glib beyond
its general refusal to work well on Windows IME...), but did have some past
experience with GTK.
I personally like to keep any such hackery behind API lines.
one could argue I guess that the GTK design allows a slightly more efficient
interface than doing basic checks/casts within the API or within the method
handlers, since the frontend presumably knows the types "more exactly" (and
so may avoid any checks), but then again, I have a difficult time seeing how
in this case the slight gain in efficiency can justify the terribleness of
the API.
another detail:
in my designs (typically when doing "OO-ish" stuff in plain C), usually the
root vtable will contain all of the methods for pretty much all of the
object sub-types. in this case, usually any unused methods are NULL in
sub-classes for which they are unused (the API wrappers will usually check
that the method is not NULL before calling it, and then return an error code
of some sort for missing methods).
this is not usually the case for data, where usually there will be some
shared data (part of the toplevel "class"), and specific "sub-classes" will
usually have a region of data which is specific to themselves.
in either case, usually getters/setters are used for accessing any fields,
...
as for more "generic" OO stuff (as in a generic system to manage an object
system), well then, this gets a bit more complicated (and, in general, for C
code I am not sure it is necessarily a net good tradeoff, at least IME). the
main good point is that the system can be a little more flexible and is
easier to make reflective.
the downside is that often a system like this is more effort than doing so
manually...
Actually, I think you could easily construct a reasonable (and reasoned)
argument that there are OO constructs within the design and implementation
of X itself. If you can't see objects in Xlib, I suspect you aren't
looking at it closely enough.
What I think you mean is that it doesn't easily map to include all of
the features of what is now accepted as OO within the mainstream (in
terms of (full) data/method encapsulation, inheritance, overloading,
etc.). Then again, a lot has changed in the world of OO between X's
design and now; OO languages from the 60s and 70s would struggle to be
recognised as such if measured against the same (strict) yardstick.
I'll also say that one of the professors at work (who worked with Dahl
and Nygaard in the late '60s/early '70s) didn't have too much time for
mainstream OO languages - he felt they all copied the same, limited and
brain-damaged implementation of OO leaving it not only inflexible, but
also unnecessarily complex. C++ took a lot of the criticism for leading
OO down a dead-end.
> dunno about GObject "in general" (not much experience with glib much beyond
> its general refusal to work well on Windows IME...), but did have some past
> experience with GTK.
> I personally like to keep any such hackery behind API lines.
>
> one could argue I guess that the GTK design allows a slightly more efficient
> interface than doing basic checks/casts within the API or within the method
> handlers, since the frontend presumably knows the types "more exactly" (and
> so may avoid any checks), but then again, I have a difficult time seeing how
> in this case the slight gain in efficiency can justify the terribleness of
> the API.
Perhaps you should explain exactly what's so terrible about the API?
Personally, I don't consider having to perform explicit up-casts to be an
unbearable burden (down-casts should be explicit anyhow).
> another detail:
> in my designs (typically when doing "OO-ish" stuff in plain C), usually the
> root vtable will contain all of the methods for pretty much all of the
> object sub-types. in this case, usually any unused methods are NULL in
> sub-classes for which they are unused
There's no need for that. Neither GTK+ nor Xt work that way; each class
has class and instance structures which extend the corresponding parent
structures.
>>> I was trying to get a reaction, but have you looked at Win32? You
>>> register window classes (they call 'em classes) then instantiate
>>> instances of the class. A windows dispatch function handles messages
>>> sent to the windows class instance (object). Sounds pretty OO to me. X-
>>> Window does similar stuff.
>>
>> Xlib doesn't do any OO; that's left to GUI toolkits (Xt, GTK, Qt).
>
> Actually, I think you could easily construct a reasonable (and reasoned)
> argument that there are OO constructs within the design and implementation
> of X itself. If you can't see objects in Xlib, I suspect you aren't
> looking at it closely enough.
If you define OO that loosely, you would be hard pressed to find something
which *doesn't* fit the definition.
Unfortunately, it's trivial to find huge amounts of code that couldn't
possibly fit that definition.
having to use cast-macros itself is ugly, FWIW...
it also makes it a little harder to mechanically generate code targeting
GTK IME (the code-writing logic has to be specialized for GTK and possibly
the task at hand).
...
>> another detail:
>> in my designs (typically when doing "OO-ish" stuff in plain C), usually
>> the
>> root vtable will contain all of the methods for pretty much all of the
>> object sub-types. in this case, usually any unused methods are NULL in
>> sub-classes for which they are unused
>
> There's no need for that. Neither GTK+ nor Xt work that way; each class
> has class and instance structures which extend the corresponding parent
> structures.
>
yes, that is how they work.
this works, granted...
I, OTOH, tend to use a very different strategy, both for object
representation as well as for API design.
my design strategy doesn't tend to be particularly C++/Java style
class/instance, but has also been influenced by things like CLOS and Self.
but, anyways, often partly flattening the class-tree actually saves some
effort (IOW: it is a fairly convenient strategy), as really common stuff can
be much more readily shared. this way, the particular subclasses can instead
focus on providing different behavior, rather than worrying lots about
providing a different structural representation.
I have actually used one (domain-specific) language that itself took this to
an extreme:
pretty much all object types (all their data, methods, ...) used a single
massive structure-type (produced by the compiler), typically differing only
in which fields or methods were actually used by a given object or "class"
(unused methods were NULL).
so, it is not unworkable (actually, in a C-based implementation, it may
be more convenient IME), but granted, if done poorly, it can waste
additional memory.
the Linux kernel tends to use kind of a hybrid strategy in a few places (the
"subclasses" could be regarded as having been union'ed into the
"superclass"), which is sort of the reverse of the parent-inclusion route
(it leads also to the root object having the size of the largest
"subclass").
...
but, as I see it, "OO" is not really about how ones' "classes" are laid out
in memory...
Therefore, any paradigm can be followed in pure machine code. It's just
a matter of how easy it is to implement, which is what differentiates the
languages.
> All code in all languages end up as machine code in the end.
>
> Therefore, any paradigm can be followed in pure machine code. It's just
> a matter of how easy it is to implement which is what differentiates the
> languages.
It's not just a matter of how easy, some paradigms simply can't be
implemented in standard C.
--
Ian Collins
Not necessarily, although I suppose it depends on how you look at it.
Consider an interpreter (in what one might call the classic sense of the
word): it never actually translates the statements it reads into "new"
machine code, but instead uses its existing machine code to perform the
required actions. For example:
if(strcmp(listing[currline].code, "LIST") == 0)
{
    for(i = 0; i < maxline; i++)
    {
        printf("%d: %s\n", listing[i].lineno, listing[i].code);
    }
}
This doesn't generate any new machine code for the LIST statement.
Instead, it takes advantage of existing machine code to perform the
necessary functionality.
On the other hand, it is machine code that executes the actual
requirement, so in a sense you are right.
<snip>
--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
"Usenet is a strange place" - dmr 29 July 1999
Sig line vacant - apply within
>> It's been proven in the field that C can be used to do object oriented
>> programming (e.g. by the GTK+ library).
>>
>> But can C be used to follow a functional programming paradigm?
>
> All code in all languages end up as machine code in the end.
>
> Therefore, any paradigm can be followed in pure machine code.
Compilation typically preserves semantics, but not necessarily the
"paradigm".
Just because the source code follows an OO/functional/whatever paradigm,
it doesn't mean that the same can be said of the resulting object code.
E.g. the functional paradigm doesn't admit the concept of mutable state,
but the resulting object code probably will use mutable state (unless it's
a very naive compiler).
All code eventually ends up as machine code, even if it goes through
multiple passes to get there. In this transition to a lower level
state, the code loses information, namely the structure and information
that was 'encoded' in the programming language's syntax and grammar.
Meta-code so to speak, information which doesn't get captured in the
machine code, but is used by the programmer. So the 'object' exists in
C++, but not in the machine code.
To reverse the process, you have to re-encode what was lost during the
transition to a lower level language, or at least approximate it. Not
just disassemble and approximate that code in a higher level language,
but arrange that code in a manner which mirrors the concepts and
structures originally used. Because that information is lost, that may
not be possible to do. That information may be irretrievably lost, just
like a pool of water has irretrievably lost the shape of the ice
sculpture it once was.
> All code eventually ends up as machine code, even if it goes through
> multiple passes to get there.
I'm with others who don't really agree with this for interpreted code.
Sure, all code leads to machine code being executed, but does that
really mean anything of any importance? After all, it all ends up as
micro code and as values on pins on ICs.
Is JVM code machine code? Would it be so if I built a machine that
executed it as the native machine code? Would it still be so if that
machine internally and in a non-accessible way converted it to Z80
assembler as a micro code? What about LISP machines?
--
Online waterways route planner | http://canalplan.eu
Plan trips, see photos, check facilities | http://canalplan.org.uk
Just because you repeat it doesn't make it true.
Consider an interpreter given different input programs A and B. After
they've done any pre-processing, be that just parsing, or p-coding,
then for one of them you have the interpreter and data representing A,
and for the other you have the interpreter and data representing B.
The machine code in the first example is that of the interpreter, and in
the second example it is that of the interpreter. The representations
of A and B are not machine code. Just because in order to complete the
execution you require the machine code of the interpreter does not
mean that the source programs have become machine code.
Phil
--
Any true emperor never needs to wear clothes. -- Devany on r.a.s.f1
You use an interesting expression; "...have become...". What does it
mean exactly, for a program to "become" something else? Is it
transformed by means of its own workings? Is it manipulated by forces
beyond its limited rules of comprehension; the Programmer, or the
Compiler, or the Interpreter? Surely, for the Interpreter to transmute
the Programmer's word into actions, certain "machine code" has to be
executed, does it not? One could say, with much certainty, that the
original program has "become" a much different series of instructions,
mandated now not only by the Programmer, but also the Interpreter (or
Virtual Machine, or whatever we decide to call It, at this point), and
they would be right, because the loose definition of the expression
"...have become..." with regard to programs certainly leaves a lot of
room for bullshit. Also note, that one could posit the following:
given a program in an interpreted language, A, and its interpreter, B,
which when executing A will produce a series of instructions, C, it is
practically possible to duplicate the semantics of program A run on
interpreter B by executing the series C and that alone - I am not
trying to advocate this position (at first sight it looks a little
problematic too, but I haven't had coffee yet), but I could see how
someone would say that A "has become" C.
> > >>> All code in all languages end up as machine code in the end.
> > Consider an interpreter [...] Just because in order to complete the
> > execution you require the machine code of the interpreter does not
> > mean that the source programs have become machine code.
>
> You use an interesting expression; "...have become...". What does it
> mean exactly, for a program to "become" something else? Is it
> transformed by means of its own workings? Is it manipulated by forces
> beyond its limited rules of comprehension; the Programmer, or the
> Compiler, or the Interpreter? Surely, for the Interpreter to transmute
> the Programmer's word into actions, certain "machine code" has to be
> executed, does it not? One could say, with much certainty, that the
> original program has "become" a much different series of instructions,
> mandated now not only by the Programmer, but also the Interpreter (or
> Virtual Machine, or whatever we decide to call It, at this point), and
> they would be right, because the loose definition of the expression
> "...have become..." with regard to programs certainly leaves a lot of
> room for bullshit. Also note, that one could posit the following:
> given a program in an interpreted language, A, and its interpreter, B,
> which when executing A will produce a series of instructions, C, it is
> practically possible to duplicate the semantics of program A run on
> interpreter B by executing the series C and that alone - I am not
> trying to advocate this position (at first sight it looks a little
> problematic too, but I haven't had coffee yet), but I could see how
> someone would say that A "has become" C.
what if I never compile or interpret the program but "execute" it
using pencil and paper?
What about the code I never compile, does that end up as machine code?
Obviously not; what is your point?
Sidenote: I thought this discussion involved computers, not pencils or
drums (revolving or whatnot). As I said, I won't advocate the
aforementioned statement, but it seems rather childish of you to
invoke reductio ad absurdum when it's obvious that Man is referring to
the process of writing, compiling, and executing code on a computer,
or at least similar constructs.. (embedded devices blah blah blah)
The following program:
---
#include <cos/cpp/hanoi.h>
COS_PP_HANOI(10)
---
where hanoi.h can be taken there
http://cos.cvs.sourceforge.net/viewvc/cos/CosBase/include/cos/cpp/hanoi.h?view=markup
running
gcc -std=c99 -I. -E hanoi.c
will display the sequence of moves solving the Tower of Hanoi for 10
disks. No compilation, no byte code, just an interpreter (cpp) and a
result displayed ;-) Meta programming (which is code!) does not
necessarily end in machine code; part of the code is just there to
provide constraints, ensure correctness, make sanity checks, make
compile-time computations, ... and vanish after the compilation phases.
regards,
ld.
Maybe you missed the part where I wrote this: "Surely, for the
Interpreter to transmute the Programmer's word into actions, certain
"machine code" has to be executed, does it not?" In this case, cpp's
code will
interpret, evaluate, optimize, etc. the above program. Correct? Didn't
your program cause the execution of (some) code? Sure, the code you
wrote "COS_PP_HANOI(10)" didn't appear anywhere, in any directly
observable form. But you saw results. What caused those? Magic?
> Meta programming (which is code!) does not necessarily
> ends in machine code, part of the code is just there to provide
> constraints, ensure correctness, make sanity checks, make compile-time
> computation, ... and vanish after the compilation phases.
OK, right now I'm actually led to think you guys are being
intentionally dense just to mock me, so I'll stop.
<snip>
> what if I never compile or interpret the program but "execute" it
> using pencil and paper?
You are interpreting it in your head, and it is executed via your own
unique and individual dialect of machine code, the machine in this case
being your brain. (The pencil and paper are merely low-power-consumption
peripheral devices.)
> What about the code I never compile, does that end up as machine code?
Yes, because someone else will compile it - eventually. (Possibly an
archaeologist, in the year 20000010.)
That's a good point. Even if two programs consist of different machine code,
the processor machinery that executes them will be the same. This seems to
be the case at any level, where different programs are just different
bunches of data for the 'hardware' that operates on them:
'Program':                'Data':
application               user-input
interpreter               bytecode (etc)
processor                 machine-code
register-transfer-logic   microcode
And of course pretty much any HLL can emulate the workings of hardware (just
to confuse matters a little)
> What about LISP machines?
Apart from LISP.
--
Bartc
> And of course pretty much any HLL can emulate the workings of hardware (just
> to confuse matters a little)
>
> > What about LISP machines?
>
> Apart from LISP.
http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-30.html#%_chap_5