Recently, I researched using C++ for game programming and here is what
I found:
C++ game developers spend a lot of their time debugging corrupted
memory. Few, if any, compilers offer completely safe modes.
Unsurprisingly, there is a very high failure rate among projects using
C++ for modern game development.
You can not even change function definitions while the program is
running and see the effects live (the ultimate debugging tool).
Alternatively, you can't execute a small portion of the program
without compiling and linking the whole thing, then bringing your game
into a specific state where your portion of the code is being executed.
The static type system locks you into a certain design, and you can't
*test* new ideas, when they come to you, without redesigning your
whole class hierarchy.
C++ is so inflexible, even those who do use it for games, have to
write their game logic in some other language (usually very slow,
inexpressive and still garbage collected). They also have to interface
the two languages.
C++ lacks higher-order functions. Function objects emulate them
poorly, are slow and a pain to use. Additionally, C++ type system does
not work well with function objects.
C++ programs can not "think" of new code at run-time, and plug that
new code into themselves in compiled form. Not easily, anyway.
C++ coding feels very repetitive, for example, when writing class
accessors, you often have to write const and non-const methods with
completely identical function bodies. Just look at STL.
When programming in C++ you feel like a blind person trying to draw
something. You don't _see_ the data structures that your procedures
will operate on. Lisp programming is much more visual.
Constructors and smart pointers make it hard to tell cheap operations
from expensive ones.
C++ lacks automatic memory management and so it encourages copying
objects around to make manual memory management manageable.
Reference-counting schemes are usually slower than modern garbage
collectors and also less general.
Most important, C++ syntax is irregular, and you often find yourself
typing repetitive patterns again and again - a task easily automated
in languages with simpler syntax. There are even books on C++
patterns, and some C++ experts take pride in being able to execute
those patterns with computer-like precision - something a computer
should be doing to begin with.
C++ programs are slow: even though the compilers are good at
micro-optimizing the code, programmers waste their time writing
repetitive patterns in C++ and debugging memory corruption instead of
looking for better algorithms that are far more important for speed
than silly micro-optimizations.
It's hard to find good programmers for C++ projects, because most of
the good programmers graduated to languages like Lisp or avoided C++
altogether. C++ attracts unimaginative fellows with herd mentality.
For creative projects, you want to avoid them like a plague.
It is my opinion that all of the above makes C++ a very bad choice for
commercial game development.
After that, no-one ever used lisp again...
..except the Jak & Daxter developers. Their game engine runs on an
interpreted lisp platform (I believe) and has spawned some of the most
impressive platformers I've ever seen...
So the moral is....
I don't know, but I won't be switching to Lisp any time soon...
Maybe it's good once you get the hang of it...
But I think it may be too recursive & bottom-up programming for most brains
to want to deal with...
"Neo-LISPer" <neo_l...@yahoo.com> wrote in message
news:87k6tf9...@yahoo.com...
> Some senseless stuff
Besides your poor trolling attempts (most of your arguments just show that
you have no knowledge of C++) you don't even tell us what the alternative
to C++ in game programming could be.
--
To get my real email address, remove the two onkas
--
Hendrik Belitz
- Abort, Retry, Fthagn? -
"Hendrik Belitz" <honkaonk...@fz-juelich.de> wrote in message
news:2u49l9F...@uni-berlin.de...
AKA Retarded mode.
> Unsurprisingly, there is a very high failure rate among projects using
> C++ for modern game development.
There's a 90% failure rate for lions when hunting. They still eat.
I would presume that that "very high failure rate" becomes a bit lower when
you're dealing with proficient C++ programmers.
> You can not even change function definitions while the program is
> running and see the effects live (the ultimate debugging tool).
Nothing to do with the language. Such a debugging tool could be developed,
why not develop it?
I myself wouldn't use it.
> Alternatively, you can't execute a small portion of the program
> without compiling and linking the whole thing, then bringing your game
> into a specific state where your portion of the code is being executed.
That's because there's no such thing as "half a program". If you really want
this, copy-paste it to another file and just append:
int main(){}
to the end of it.
> The static type system locks you into a certain design, and you can't
> *test* new ideas, when they come to you, without redesigning your
> whole class hierarchy.
Bullshit. Vague bullshit.
> C++ is so inflexible, even those who do use it for games, have to
> write their game logic in some other language (usually very slow,
> inexpressive and still garbage collected). They also have to interface
> the two languages.
Provide an example. I myself foresee no reason or motive to do or have to do
this.
> C++ lacks higher-order functions. Function objects emulate them
> poorly, are slow and a pain to use. Additionally, C++ type system does
> not work well with function objects.
"function objects". Get over it! It's just syntatic sugar!
> C++ programs can not "think" of new code at run-time, and plug that
> new code into themselves in compiled form. Not easily, anyway.
"think of new code at run-time". That's because it takes intelligence to
write code, something which computers lack. As for the code coming from
somewhere else, well it's done extremely easily actually - we call it
dynamic linkage.
> C++ coding feels very repetitive, for example, when writing class
> accessors, you often have to write const and non-const methods with
> completely identical function bodies. Just look at STL.
Incorrect.
If both function bodies are identical, then there's no need to write a non-
const version.
If there exists both a const version and a non-const version, then this
indicates that one version alters the object, while the other doesn't.
Conclusion: different code.
You could also make the non-const version call the const version, and then
just do something extra.
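For example (a minimal sketch, with a made-up Widget class, of the "non-const
calls const" idiom being described):

#include <string>

class Widget {
public:
    // the const version does the real work
    const std::string& name() const { return name_; }

    // the non-const version calls the const version and casts away the
    // constness of the result, so the body is written only once
    std::string& name() {
        return const_cast<std::string&>(
            static_cast<const Widget&>(*this).name() );
    }

private:
    std::string name_;
};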
> When programming in C++ you feel like a blind person trying to draw
> something. You don't _see_ the data structures that your procedures
> will operate on. Lisp programming is much more visual.
"procedures"? Never heard of them. I've heard of "functions" alright. I must
say I don't... see... your argument, no pun intended.
If you have a function which takes in an object of a certain class, or as
you call it "data structure", then... (actually, it's so simple I'm not even
going to finish this paragraph).
> Constructors and smart pointers make it hard to tell cheap operations
> from expensive ones.
Bullshit. Vague bullshit.
> C++ lacks automatic memory management and so it encourages copying
> objects around to make manual memory management manageable.
int auto k = 4;
int* auto p_w = new int(4);
> Reference-counting schemes are usually slower than modern garbage
> collectors and also less general.
Which "garbage collector"? "less general" = vague bullshit.
> Most important, C++ syntax is irregular, and you often find yourself
> typing repetitive patterns again and again - a task easily automated
> in languages with simpler syntax.
I don't see your argument. I've never encountered such.
> There are even books on C++
> patterns, and some C++ experts take pride in being able to execute
> those patterns with computer-like precision - something a computer
> should be doing to begin with.
There are books on a lot of things.
> C++ programs are slow: even though the compilers are good at
> micro-optimizing the code, programmers waste their time writing
> repetitive patterns in C++ and debugging memory corruption instead of
> looking for better algorithms that are far more important for speed
> than silly micro-optimizations.
Define "programmers". I myself don't fit into the inuendo of a definition in
the above.
> It's hard to find good programmers for C++ projects, because most of
> the good programmers graduated to languages like Lisp or avoided C++
> altogether. C++ attracts unimaginative fellows with herd mentality.
> For creative projects, you want to avoid them like a plague.
MS-DOS was written in C++. Windows XP was written in C++. Linux was written
in C++.
Come to think of it, what *wasn't* written in C++?
> It is my opinion that all of the above makes C++ a very bad choice for
> commercial game development.
My opinion differs.
-JKop
Linux comes to mind.
Really? What was it written in?
-JKop
Also MSDOS and MS Windows were developed in C, as far as I know.
Catalin
HHHHHHaaaaaaaaaaaaaaaaa ha ha haaaaaaaaaaaaaaaaaaaaaaaa
HHaaaaaaaaaaaaaaaaa HHHHHAaaaaaaaaaa
ha ha ha
OOOHhhhhhhhhh, it's too much.
Didn't we switch from coal to oil yyeeaarrss ago?
-JKop
It seems not :D
Catalin
You're totally correct in this. But most higher-order toolkits are written
in C++.
BTW: I don't know a single piece of "real" software that was written in LISP
(AFAIK even Emacs only uses LISP as an extension and scripting language:
Something that is really bad behaviour according to the original troll ..
eerrh ... poster).
I am also awaiting good examples for LISP 3D-Engines, LISP OS kernels, LISP
device drivers, LISP text processors or LISP numerical toolkits. Feel free
to copy your whole project source code for these topics to your
news-transfer-daemon /dev/null...
> MS-DOS was written in C++. Window XP was written in C++. Linux was written
> in C++.
You're funny! :-)
> BTW: I don't know a single piece of "real" software that was written in LISP
What's the color of the sky in your world?
>> C++ game developers spend a lot of their time debugging corrupted
>> memory. Few, if any, compilers offer completely safe modes.
> AKA Retarded mode.
> (inspired response to obvious troll)
What part of "Do not feed the trolls" was hard to understand?
--
Christopher Benson-Manica | I *should* know what I'm talking about - if I
ataru(at)cyberspace.org | don't, I need to know. Flames welcome.
As other industries using C++ - even for highly graphical, rich-content
physics simulations - report fewer of these problems, the game programming
culture itself might be to blame.
> C++ game developers spend a lot of their time debugging corrupted
> memory. Few, if any, compilers offer completely safe modes.
The alternative, garbage collection, tends to corrupt memory too. Have you
heard of a high-availability Visual Basic program?
Game programmers need efficient and deterministic garbage collection. If
they don't code it themselves, following healthy styles, they will corrupt
memory.
> Unsurprisingly, there is a very high failure rate among projects using
> C++ for modern game development.
That's because there's a high failure rate period, and most games use C++.
> You can not even change function definitions while the program is
> running and see the effects live (the ultimate debugging tool).
There are those who don't need to debug. The game programming industry has
only begun to adopt unit testing in a very few shops.
> Alternatively, you can't execute a small portion of the program
> without compiling and linking the whole thing, then bringing your game
> into a specific state where your portion of the code is being executed.
Test isolation would help that. If objects are decoupled, you can write a
test that plays with only one of them.
Playing with unit test cases, and adding them very easily, is a great way to
preserve all those little experiments, and convert them into constraints.
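For example (a sketch only, with a made-up Inventory class, not code from any
actual game):

#include <cassert>
#include "inventory.h"   // hypothetical header for the one object under test

int main() {
    Inventory inv;                        // no engine, no renderer, no game state
    inv.add("potion", 3);
    assert(inv.count("potion") == 3);
    inv.remove("potion", 1);
    assert(inv.count("potion") == 2);
    return 0;                             // exercised one object in isolation
}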
> The static type system locks you into a certain design, and you can't
> *test* new ideas, when they come to you, without redesigning your
> whole class hierarchy.
Then don't use the static type system.
> C++ is so inflexible, even those who do use it for games, have to
> write their game logic in some other language (usually very slow,
> inexpressive and still garbage collected). They also have to interface
> the two languages.
You make that sound like a bad thing. Most programs have two languages
(consider the glorious union of VB and SQL). Games need a scripting layer to
decouple designing the game play from its engine. Most other applications
with an engine use this model, too.
> C++ lacks higher-order functions. Function objects emulate them
> poorly, are slow and a pain to use. Additionally, C++ type system does
> not work well with function objects.
So what? It also makes the Prototype Pattern a pain in the nuts. These
issues are not in the domain, they are just implementation alternatives.
> C++ programs can not "think" of new code at run-time, and plug that
> new code into themselves in compiled form. Not easily, anyway.
So, uh, use the scripting layer?
> C++ coding feels very repetitive, for example, when writing class
> accessors, you often have to write const and non-const methods with
> completely identical function bodies. Just look at STL.
It sounds like you need to tell us you are less than perfectly adept at C++.
Have you used it for games?
> When programming in C++ you feel like a blind person trying to draw
> something. You don't _see_ the data structures that your procedures
> will operate on. Lisp programming is much more visual.
That's because you are familiar with Lisp.
> Constructors and smart pointers make it hard to tell cheap operations
> from expensive ones.
All operations, cheap or expensive, are hard to predict and hard to
tell apart. Profile.
> C++ lacks automatic memory management and so it encourages copying
> objects around to make manual memory management manageable.
> Reference-counting schemes are usually slower than modern garbage
> collectors and also less general.
Prefer pass-by-reference above all other kinds, because it's cognitively
efficient and usually execution efficient.
> Most important, C++ syntax is irregular, and you often find yourself
> typing repetitive patterns again and again - a task easily automated
> in languages with simpler syntax. There are even books on C++
> patterns, and some C++ experts take pride in being able to execute
> those patterns with computer-like precision - something a computer
> should be doing to begin with.
C++ syntax is somewhat irregular. But its lack of a 'read_mind' keyword
disturbs me most.
> C++ programs are slow: even though the compilers are good at
> micro-optimizing the code, programmers waste their time writing
> repetitive patterns in C++ and debugging memory corruption instead of
> looking for better algorithms that are far more important for speed
> than silly micro-optimizations.
How could that complaint be specific to C++?
> It's hard to find good programmers for C++ projects, because most of
> the good programmers graduated to languages like Lisp or avoided C++
> altogether. C++ attracts unimaginative fellows with herd mentality.
> For creative projects, you want to avoid them like a plague.
That's because educating someone to write low-risk C++ is difficult. Vendors
have clogged our markets with low-quality languages that purport to allow
inept programmers to write code at a lower risk than C++ provides.
Games must have high performance, so C++ is the leading language for now.
--
Phlip
http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces
>> (inspired response to obvious troll)
> What part of "Do not feed the trolls" was hard to understand?
> Christopher Benson-Manica | I *should* know what I'm talking about - if I
> ataru(at)cyberspace.org | don't, I need to know. Flames welcome.
[I wouldn't normally participate in a flame-infested thread like
this, but since you're so literally asking for things you need
to know..]
While I agree that the OP was trolling by posting his article to
c.l.c++, it is still my opinion that he was in many parts correct and
the "inspired" response was nothing but a display of complete
ignorance. Many of the issues raised by the OP are truly something you
need to know about (if you don't already), regardless of which
programming language you prefer, IMHO.
--
Frode Vatvedt Fjeld
I don't want to feed the troll, but you're showing some pretty incredible
ignorance. Just go to the "success stories" section of any lisp vendor's
website if you want to find some examples of real applications written in
lisp.
LISP 3D-Engines: None that I know of.
LISP kernels: Plenty. Google for "lisp machine".
LISP device drivers: Same as for kernels.
LISP numerical toolkits: Probably they exist; don't know of any. This is
just a library issue.
Alex
Ignorance? Arrogance maybe? (no pun intended)
As for the issues raised being truly something you need to know about... do
you need to tell a child not to eat its own excrement? No. Why? It figures
that out for itself. If you're writing code and you have "new" all over the
place and you've no "delete"'s, then you'll figure out the aim of the whole
"Garbage Collection" ideal. I myself am not retarded, so I've no need for
"Garbage Collection". If, hypothetically speaking, I forsaw that I would
temporarily become retarded (a golfclub to the head maybe), then I would
make use of auto_ptr, but that has yet to happen.
Isn't it great how we're all entitled to our own opinions! ;-P
-JKop
any other issues are fairly minor (there is a lack of convenient syntax for
many things, but these things are not that hard to pull off through other
means). some things would be nice (syntactic closures and lexical scoping,
...), but do not justify many other costs.
using languages other than c or c++ tends to end up being more expensive.
this post can be generally be referred to as trolling, however.
"Neo-LISPer" <neo_l...@yahoo.com> wrote in message
news:87k6tf9...@yahoo.com...
> Catalin Pitis wrote:
>>
>> "JKop" <NU...@NULL.NULL> wrote in message
>> news:o97fd.40000$Z14....@news.indigo.ie...
>>>
>>>>> Come to think of it, what *wasn't* written in C++?
Pretty much everything worthwhile.
>>>> Linux comes to mind.
>>>
>>>
>>> Really? What was it written in?
>>>
>> C
>>
>> Also MSDOS and MS WIndows were developed in C, as far as I know.
I doubt it. I'd bet MSDOS was written in assembler.
> You're totally correct in this. But most higher-order toolkits are written
> in C++.
> BTW: I don't know a single piece of "real" software that was written in LISP
I.e., you're ignorant. [By the way, it's spelled "Lisp", not "LISP"]
> (AFAIK even Emacs only uses LISP as an extension and scripting language:
> Something that is really bad bevhaviour according to the original troll ..
> eerrh ... poster).
Which Emacs? Emacs was originally TECO macros (hence the name), then
Lisp. (Some) Unix versions are now written in C with (a crufty
ancient) Lisp as "extension language", yes (but there's hardly any
call for your "only": a fair amount of the core functionality is in
Lisp, and there's rather more Lisp than C there)
> I am also awaiting good examples for LISP 3D-Engines, LISP- OS kernels, LISP
> device drivers, LISP text processors or LISP numerical toolkits. Feel free
Google for "Mirai" and "Genera", for starters.
[You C++ types still have a way to go to catch up to 1980's Lisp :-)]
--
Malum est consilium quod mutari non potest -- Publilius Syrus
(setq reply-to
(concatenate 'string "Paul Foley " "<mycroft" '(#\@) "actrix.gen.nz>"))
> The alternative, garbage collection, tends to corrupt memory too. Have you
> heard of a high-availability Visual Basic program?
>
> Game programmers need efficient and deterministic garbage collection. If
> they don't code it themselves, following healthy styles, they will corrupt
> memory.
Erm no, garbage collection does not corrupt memory unless the garbage
collector is buggy. Have you ever seen a VB program segfault? If you have
special requirements for garbage collection, you just need a garbage
collector tuned to your requirements (e.g. the real-time garbage collectors
available in some implementations of various languages).
>> Alternatively, you can't execute a small portion of the program
>> without compiling and linking the whole thing, then bringing your game
>> into a specific state where your portion of the code is being executed.
>
> Test isolation would help that. If objects are decoupled, you can write a
> test that plays with only one of them.
>
> Playing with unit test cases, and adding them very easily, is a great way
> to preserve all those little experiments, and convert them into
> constraints.
Unit tests help you to /test/ code, but not to debug it. When you've found a
bug, you still need to do the work of actually fixing the code. This is
much easier if you have an environment which supports dynamic redefinition
and incremental compilation.
>> The static type system locks you into a certain design, and you can't
>> *test* new ideas, when they come to you, without redesigning your
>> whole class hierarchy.
>
> Then don't use the static type system.
Not using the static type system would entail not using C++ (unless you
intend to represent all your data as void * and fill your program with
casts).
>> C++ is so inflexible, even those who do use it for games, have to
>> write their game logic in some other language (usually very slow,
>> inexpressive and still garbage collected). They also have to interface
>> the two languages.
>
> You make that sound like a bad thing. Most programs have two languages
> (consider the glorious union of VB and SQL). Games need a scripting layer
> to decouple designing the game play from its engine. Most other
> applications with an engine use this model, too.
Clearly a separate scripting language is not desirable if you can avoid it
(performance and code overhead from interfacing the two languages, being
constrained by an enforced separation between interrelated parts of the
program, etc.) It is difficult to avoid having a separate scripting
language in C++ because it doesn't support incremental compilation. It's
certainly not impossible to avoid it (Half-Life doesn't have one, for
example), but you pay the price: make a trivial mistake in your Half-Life
mod code and the fix-reload-test cycle can be a couple of minutes long.
>> When programming in C++ you feel like a blind person trying to draw
>> something. You don't _see_ the data structures that your procedures
>> will operate on. Lisp programming is much more visual.
>
> That's because you are familiar with Lisp.
Indeed, it's much easier to express complex data structures in Lisp.
>> C++ lacks higher-order functions. Function objects emulate them
>> poorly, are slow and a pain to use. Additionally, C++ type system does
>> not work well with function objects.
>
> So what? It also makes the Prototype Pattern a pain in the nuts. These
> issues are not in the domain, they are just implementation alternatives.
This makes no sense. Higher order functions are a win, period. Function
objects are just less powerful than higher order functions (assuming that
these functions are closures). If you've been paying attention to R&D in
programming languages for the past 20-30 years, you'll notice that higher
order functions are one abstraction mechanism that just about every new
language has adopted (often prior to the development of C++). They're so
useful that some people have invested a lot of time into hacking some kind
of HOF facility into C++ (c.f. the Boost Lambda Library).
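To make the comparison concrete, here is a minimal sketch (names invented) of
the emulation in question: a hand-written function object standing in for a
closure, passed to a standard algorithm playing the role of the higher order
function:

#include <algorithm>
#include <vector>

// emulates "a closure over offset"
struct AddOffset {
    int offset;
    explicit AddOffset(int o) : offset(o) {}
    void operator()(int& x) const { x += offset; }
};

int main() {
    std::vector<int> scores(10, 1);
    std::for_each(scores.begin(), scores.end(), AddOffset(5));
    return 0;
}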
>> C++ lacks automatic memory management and so it encourages copying
>> objects around to make manual memory management manageable.
>> Reference-counting schemes are usually slower than modern garbage
>> collectors and also less general.
>
> Prefer pass-by-reference above all other kinds, because its cognitively
> efficient and usually execution efficient.
"Usually" being the operative word. You're also missing the OP's point
completely. If you pass by reference, you enormously increase the
complexity of your memory management code. Sure, C++ has lots of ways to
encapsulate this complexity, but you still have to deal with it, and the
common methods of doing this amount to using a buggy and rather inefficient
GC.
Alex
Actually, it tends to indicate that one version returns a non-const
pointer/reference and the other returns a const pointer/reference.
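Something like this, say (illustrative names only):

class Buffer {
public:
    char&       at(int i)       { return data_[i]; }  // callers may write through it
    const char& at(int i) const { return data_[i]; }  // read-only access
private:
    char data_[256];
};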
[Followups trimmed]
Stewart.
> Hey
>
> Recently, I researched using C++ for game programming and here is what
> I found:
>
> C++ game developers spend a lot of their time debugging corrupted
> memory. Few, if any, compilers offer completely safe modes.
<snip>
> Alternatively, you can't execute a small portion of the program
> without compiling and linking the whole thing, then bringing your game
> into a specific state where your portion of the code is being executed.
Yes I can. I can write a module that runs one or two functions from the
project as the whole program.
<snip>
> It's hard to find good programmers for C++ projects, because most of
> the good programmers graduated to languages like Lisp or avoided C++
> altogether. C++ attracts unimaginative fellows with herd mentality.
> For creative projects, you want to avoid them like a plague.
<snip>
I guess the answer is: if you can't find anyone to join your C++ team,
start your project in a language that everyone likes.
Have you checked out D? It addresses a handful of your issues....
Stewart.
Yes, it does suck -- however it sucks a lot less than a majority of the other
available languages.
If you ever decide to come out of your cave and re-join society in a beneficial
manner, you should probably consider thoroughly studying the topic of languages
and posting a comprehensive comparison of them w/ respect to a particular
task/topic (such as game programming).
my only experience with lisp is autolisp in autocad. it was ok, but
personally, i lean more to python. to each his own.
Neo-LISPer wrote:
> C++ game developers spend a lot of their time debugging corrupted
> memory. Few, if any, compilers offer completely safe modes.
*NON-STATEMENT*
i seriously doubt that. i would bet c++ programmers spend the bulk of
their time designing and testing - this is what separates professionals
from hobbyists. i have no numbers to back my claim up, but neither does
the op.
does anyone have any data on how c++ or lisp programmers divide their time?
> Unsurprisingly, there is a very high failure rate among projects using
> C++ for modern game development.
*NON-STATEMENT*
again, no numbers. but the fact that there are a lot of failed c++-based
projects is most likely due to the fact that there are a lot of
c++-based projects.
> You can not even change function definitions while the program is
> running and see the effects live (the ultimate debugging tool).
*BULLSHIT*
actually, i'm sure you can with some tools (it wouldn't be hard to do),
but i don't really see that as the ultimate debugging tool. more likely
a crutch for programmers who don't know what they're doing and have to
program by trial and error. but that's just opinion.
> Alternatively, you can't execute a small portion of the program
> without compiling and linking the whole thing, then bringing your game
> into a specific state where your portion of the code is being executed.
*BULLSHIT*
the very nature of the c++ compile-link model is designed to allow you
to work in modules. poor design could force you to recompile the whole
thing for a trivial change, but that's not the language's fault, it's
the programmer's.
as for specifically executing a small portion, look up unit testing.
> The static type system locks you into a certain design, and you can't
> *test* new ideas, when they come to you, without redesigning your
> whole class hierarchy.
*SURREAL BULLSHIT*
there is so little sense in this statement that i don't even know how to
argue it. suffice it to say that every statement in the above is at
least mostly false.
> C++ is so inflexible, even those who do use it for games, have to
> write their game logic in some other language (usually very slow,
> inexpressive and still garbage collected). They also have to interface
> the two languages.
*SOMEWHAT TRUE, BUT MISSES POINT*
firstly, the line: "those who do use it for games, have to write their
game logic in some other language" is bullshit. no-one "has" to do
anything in c++, or lisp i imagine. still many choose to voluntarily.
this statement also undermines the op's point (assuming he had any). if
c++ is so awful, why do people use it anyway, and go through the effort
of interfacing it with other languages? and are you saying that c++ is
fast and expressive, cause that seems contradictory to the rest of your
"argument"?
one of the fundamental tenets of design is to separate data from logic.
"game" logic is not the same as "program" logic. simply put, program
logic describes how the program works, and game logic describes how the
game works. good design would suggest that you could make a game engine
then use that same engine to run many different types of games. the game
engine is the program logic, and the game logic is part of the actual game.
here's an example. say i write a wicked fps game engine. the program
logic is how the engine interacts with the sound, graphics and input
hardware. now i make a game for that engine. some of the stuff i'll have
to add includes things like sound and graphics data, but i'll also be
adding game logic, like the flow of the game (ex. once you get into the
hangar, you then start the second chapter, the search for the
laboratory), descriptions of sub-quests (ex. to get the medal of honour,
you must destroy the missle launch computers before any of them fire)
and even the ai (ex. search for cover if any is available, otherwise
just bum rush the player). those things can change from game to game, so
they should be separate from the main engine.
the other important thing to consider is that proper programming
practice requires that if you change any part of a module, you have to
retest the whole thing. so if i were to include the game logic in the
game engine, and i wanted to make the ninjas more aggressive, i'd have to
retest the whole game engine. that's just idiotic. so i make the game
logic separate.
so that's why the game logic is separate, but why make it in another
language? because c++ is a very complicated language, and using it to
write a ninja's ai is like using a backhoe to plant a tulip. also, like
any powerful tool, if used incorrectly, you can take a limb off. the
power of c++ is not required when making game logic (most of the time,
there are exceptions). so instead, make your game logic in a simple
language so that the artists and game designers can understand it. that
way, valuable programmer time isn't diverted from optimizing and testing
the engine, and the game designers and artists get more control over the
game, which is what they want i imagine.
in summary: c++ is *so* flexible that it allows you to interface with
other languages that are less powerful, but simpler. doing this allows
you to delegate non-critical sections of non-engine code to be written
in languages more easily learned and used by non-programmers.
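a rough sketch of that split, with invented names (in practice the concrete
rules would sit behind a script binding rather than a hard-coded class):

// "program" logic: the engine only knows an abstract game-logic interface
class GameLogic {
public:
    virtual ~GameLogic() {}
    virtual void onPlayerEnters(const char* area) = 0;  // "game" rules live here
};

class Engine {
public:
    explicit Engine(GameLogic* logic) : logic_(logic) {}
    void playerEntered(const char* area) {
        // ...sound, graphics and input handling here...
        logic_->onPlayerEnters(area);                   // defer to the game rules
    }
private:
    GameLogic* logic_;
};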
> C++ lacks higher-order functions. Function objects emulate them
> poorly, are slow and a pain to use. Additionally, C++ type system does
> not work well with function objects.
*SURREAL BULLSHIT*
wha? the only thing i can partly understand there is that function
objects are a pain to use. fair enough. but that's a matter of opinion.
and as c++ compilers become more standards compliant, we'll be able to
take more advantage of things like Boost.Lambda, which mostly negates
the issue.
every other statement above is nonsense.
> C++ programs can not "think" of new code at run-time, and plug that
> new code into themselves in compiled form. Not easily, anyway.
*BULLSHIT*
of course they can. and as easily as any other language's program too.
they can even be jit compiled, if you want, but they're so fast by
default that there isn't much interest in it.
> C++ coding feels very repetitive, for example, when writing class
> accessors, you often have to write const and non-const methods with
> completely identical function bodies. Just look at STL.
*TRUE*
power comes at a price.
> When programming in C++ you feel like a blind person trying to draw
> something. You don't _see_ the data structures that your procedures
> will operate on. Lisp programming is much more visual.
*OPINIONATED BULLSHIT*
i see my code and data perfectly well, thank you.
> Constructors and smart pointers make it hard to tell cheap operations
> from expensive ones.
*IRRELEVANT BULLSHIT*
then rtfm to find out which operations are expensive. furthermore, you
can't accidentally call a constructor (unless you really don't know what
you're doing) - constructor calls are blatantly obvious.
as for smart pointers, if your smart pointer is expensive to use, get a
better one. making an expensive smart pointer is not smart at all - in
fact, it's dumber than making an expensive 3d vector class (which is
remarkably stupid).
> C++ lacks automatic memory management and so it encourages copying
> objects around to make manual memory management manageable.
> Reference-counting schemes are usually slower than modern garbage
> collectors and also less general.
*SURREAL BULLSHIT*
c++ has automatic memory management. it's called the stack. as for
whether c++ "encourages" unnecessary object copying, considering the
many mechanisms c++ includes for avoiding it i'd have to say that's a
bit of a stretch. of course, if you *want* to manually manage memory,
you're free to do so.
also... garbage collection requires reference counting, einstein. and
even if it didn't you can implement garbage collection in c++.
> Most important, C++ syntax is irregular, and you often find yourself
> typing repetitive patterns again and again - a task easily automated
> in languages with simpler syntax. There are even books on C++
> patterns, and some C++ experts take pride in being able to execute
> those patterns with computer-like precision - something a computer
> should be doing to begin with.
*UNCLEAR BULLSHIT*
i'm not 100% sure what you're talking about here. if you're talking
about repetitive syntax, i already conceded that above. but if you're
talking about *design patterns*, then you don't have a clue what you're
talking about.
design patterns are constructs used to model the behaviour of code. they
are not literal patterns of code.
> C++ programs are slow: even though the compilers are good at
> micro-optimizing the code, programmers waste their time writing
> repetitive patterns in C++ and debugging memory corruption instead of
> looking for better algorithms that are far more important for speed
> than silly micro-optimizations.
*BULLSHIT*
complete crap.
> It's hard to find good programmers for C++ projects, because most of
> the good programmers graduated to languages like Lisp or avoided C++
> altogether. C++ attracts unimaginative fellows with herd mentality.
> For creative projects, you want to avoid them like a plague.
*FUNNY BULLSHIT*
moo.
this is the first time i've been compared to a plague.
> It is my opinion that all of the above makes C++ a very bad choice for
> commercial game development.
i'm sure john carmack would love your input.
my own opinion is that any given game could be done in c++ alone with
not much problem. the same is true for lisp. however, combining the
strengths of the two would lend more power to the game programmer to
make better games with less work in less time. can't we all just get along?
indi
> C++ syntax is somewhat irregular. But it's lack of a 'read_mind' keyword
> disturbs me most.
stroustrup probably meant to put that in, but there would have been way
too many issues with null mind pointers.
indi
Retard.
-JKop
>>Prefer pass-by-reference above all other kinds, because its cognitively
>>efficient and usually execution efficient.
>
>
> "Usually" being the operative word. You're also missing the OP's point
> completely. If you pass by reference, you enormously increase the
> complexity of your memory management code. Sure, C++ has lots of ways to
> encapsulate this complexity, but you still have to deal with it, and the
> common methods of doing this amount to using a buggy and rather inefficient
> GC.
i'm sorry, you don't know what a c++ reference is.
you do not increase the difficulty of memory management with c++
references, unless you're straddling two different threads. you don't
have to manage the memory of references at all - they are, after all,
references. references have no overhead either.
there is no complexity, and there is no encapsulation of c++ references
that I have ever heard of. nor can i see any valid reasons for it.
you may be thinking of some other kind of reference, as in
reference-counted references, aka smart pointers.
indi
> hmm. I remember vividly doing lisp at uni.
> I think the assignment was a simple long division problem. I remember that
> only a few people in the entire class managed to work out a way of achieving
> it... A problem that a newbie would do in C without breaking a sweat.
Post the C version (or just give a fuller spec) and I'll try it in Lisp.
>
> After that, no-one ever used lisp again...
>
> ..except the Jax & Daxter developers. Their game engine runs on an
> interpretted lisp platform (I believe) and has spawned some of the most
> impressive platformers I've ever seen...
>
> So the moral is....
> I don't know, but I won't be switching to Lisp any time soon...
> Maybe its good once you get the hang of it...
> But I think it may be too recursive & bottom-up programming for most brains
> to want to deal with...
I wonder which Lisp you were using. The modern Common Lisp has all kinds
of ways to iterate, some simple, some complex, and one (LOOP) which is
effectively a built-in iteration mini-language. So no one is forced to
do recursion, and many Lispniks frown on it style-wise if an iterative
approach would work.
Recursion takes a while to get the hang of, but not long and then it can
be quite elegant.
kenny
> Hey
>
> Recently, I researched using C++ for game programming and here is what
> I found:
[Snip -- This section already commented on by others elsethread.]
>It is my opinion that all of the above makes C++ a very bad choice for
> commercial game development.
It is your opinion and you are welcome to it.
My observation from watching shows on video games is that there
are only a handful of different types. My guess is that the
engine is written in some language, Pascal, C++, Lisp, and
each "level" is written in a higher level language. Once the
engine is working, they don't change it. Most of the changes
are made using the higher level language. A video game project
doesn't want its time wasted in coding up each level in C++,
C, Pascal, Ada or whatever. A less time-consuming method is
to write each level using a higher level language. Many
game shops have specialized languages for their engines.
Perhaps you need to learn that the choice of the language
is not the issue. The issue is the quality of the product
that one can produce using the given language. If the
shop dictates that assembly is the language, then the
company must produce the best quality product using
assembly. In many shops, there is no choice on which
language can be used. You use their language and live
with it.
So if you are independently developing games, then by
all means, use the language you are most comfortable
with. However, do the rest of us a favor and keep
your opinions of other languages to yourself. The
issue of the "best" language for a given project is
and will always be a religious issue.
--
Thomas Matthews
C++ newsgroup welcome message:
http://www.slack.net/~shiva/welcome.txt
C++ Faq: http://www.parashift.com/c++-faq-lite
C Faq: http://www.eskimo.com/~scs/c-faq/top.html
alt.comp.lang.learn.c-c++ faq:
http://www.comeaucomputing.com/learn/faq/
Other sites:
http://www.josuttis.com -- C++ STL Library book
> c++ has automatic memory management. it's called the stack.
And won't manage your memory unless you fit a certain usage pattern, but
hey.
> also... garbage collection requires reference counting, einstein. and
> even if it didn't you can implement garbage collection in c++.
>
It doesn't require reference counting, and reference counting is severely
flawed as GC systems go - the only real advantage being the predictable
performance. You can at best implement conservative GC for a C++ program.
A proper garbage collector needs to know more about the structures it's
working on than a C++ implementation can hope to discover about arbitrary
data.
While I personally prefer Lisp for my side projects, C++ is not an
absolutely horrid language for game development. It is definitely much
better than C; overloading operators and templates allow a "weak man's
macro system" and virtual functions provide some much-needed dynamicness.
> C++ game developers spend a lot of their time debugging corrupted
> memory. Few, if any, compilers offer completely safe modes.
Why must both sides of the garbage collection debate go to such extremes?
Pro-garbage collectors say that garbage collection prevents all resource
leaks. I know that this is not true as I have seen many resource
"leaks" occurring because the programmer forgot that some part of the
code would hold a reference to some data.
Anti-garbage collectors say that garbage collection, while fast on average,
comes with unpredictable slowdowns. While this is true, I have found that in
application development, the slowdown is not noticeable at all. I view
arena allocation and deallocation as a more game-centric garbage
collector, which works quite well in many games.
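Roughly what I mean by that, as a toy sketch (ignoring alignment, not
production code):

#include <cstddef>
#include <vector>

class FrameArena {
public:
    explicit FrameArena(std::size_t bytes) : buffer_(bytes), used_(0) {}

    // bump-pointer allocation: hand out the next slice of the buffer
    void* allocate(std::size_t n) {
        if (used_ + n > buffer_.size()) return 0;   // arena exhausted
        void* p = &buffer_[0] + used_;
        used_ += n;
        return p;
    }

    // "collect" everything allocated this frame in one shot
    void reset() { used_ = 0; }

private:
    std::vector<char> buffer_;
    std::size_t used_;
};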
On the whole, I find that I rarely need garbage collection in my
programs. But when I do need garbage collection, I *really* need it. I
think Lisp is right here to default to the more general solution
(garbage collection) unless you explicitly tell it otherwise.
> Unsurprisingly, there is a very high failure rate among projects using
> C++ for modern game development.
There is a very high failure rate among modern game development
projects, period. I don't think this has as much to do with the
programming language as it does with insane schedules/no real
direction/increased expectations. This is improving. Supposedly.
That doesn't mean that using a more high level language wouldn't speed
up development -- it would.
> You can not even change function definitions while the program is
> running and see the effects live (the ultimate debugging tool).
>
> Alternatively, you can't execute a small portion of the program
> without compiling and linking the whole thing, then bringing your game
> into a specific state where your portion of the code is being executed.
A REPL debugging mechanism allowing function redefinition is extremely
cool. If it had an integrated unit-test tester, that would be super
good. It's so good that I find myself implementing a half-assed REPL
test loop for any language which doesn't have a built-in one. In C++ it
looks like this:
// BKPT() stands for whatever drops into your debugger (a breakpoint
// macro, a debug trap, etc.) -- it is not a standard call.
#include <string>
using std::string;

void functionToTest( int a, string& b );   // the function under test

int main() {
    int a = 0;
    string b;
    while( true ) {
        BKPT();                    // stop here, poke new values into a and b
        functionToTest( a, b );    // then step through the call
    }
}
Every reasonable debugger will let me change the values of a and b.
It's a Read, Eval, Print Loop without the Reading. :P
> The static type system locks you into a certain design, and you can't
> *test* new ideas, when they come to you, without redesigning your
> whole class hierarchy.
I can't comment on this one way or the other, as I only have a few
months experience with Lisp. My guess is that the implicit interface
will prove similarly difficult to modify with Lisp as the static type
system does with C++.
Any eXtreme Programming Lispers or C++-ers care to comment?
> C++ is so inflexible, even those who do use it for games, have to
> write their game logic in some other language (usually very slow,
> inexpressive and still garbage collected). They also have to interface
> the two languages.
This is a very valid point. Game logic is one of the most obvious
applications of a REPL and function redefinition at runtime, which is
why many games use a language such as Python that supports such
features. As stated above, this is a great feature for *any* part of
development.
> C++ lacks higher-order functions. Function objects emulate them
> poorly, are slow and a pain to use. Additionally, C++ type system does
> not work well with function objects.
Really? I find that function objects emulate higher order functions
extremely well. It would be nice if the dispatch on them could be
either static (using templates and overloading) or dynamic (using
virtual functions), but it's simple to create such a library.
The main problem I have with function objects is that writing a separate
class is distributing logic in a way that's both confusing and annoying.
I should not have to write a separate class for this one function when
all the logic for that class is available to the compiler.
I already know about Boost.Lambda and I have found it to be impossible
to use. Every time I make a little change in my lambda function, I get
tons upon tons of errors. I last tried to use Boost.Lambda about a year
ago; has it improved its usability?
> C++ programs can not "think" of new code at run-time, and plug that
> new code into themselves in compiled form. Not easily, anyway.
I assume you are thinking about closures like cl-ppcre uses. While this
is doable in C++ using template meta-programming, it's so convoluted as to
not be usable. C++ really needs a macro system like Lisp's.
> C++ coding feels very repetitive, for example, when writing class
> accessors, you often have to write const and non-const methods with
> completely identical function bodies. Just look at STL.
A better macro system would help here again. I know that I'd much
rather write:
DEF_ACCESSOR( int foo(), { /* code */ } );
than
int foo_impl() const { /* code */ };
int foo() const { return foo_impl(); }
int foo() { return foo_impl(); }
The first is just more to the point.
> When programming in C++ you feel like a blind person trying to draw
> something. You don't _see_ the data structures that your procedures
> will operate on. Lisp programming is much more visual.
I have no idea what you are saying here. Can you give an example of a
problem where the solution in Lisp is much more visual than in C++?
> Constructors and smart pointers make it hard to tell cheap operations
> from expensive ones.
Any language with flexible abstractions is going to have this problem.
Can you tell me if foo() is faster or slower to execute than bar(a,b)?
> C++ lacks automatic memory management and so it encourages copying
> objects around to make manual memory management manageable.
> Reference-counting schemes are usually slower than modern garbage
> collectors and also less general.
See above for my feelings on garbage collection.
> Most important, C++ syntax is irregular, and you often find yourself
> typing repetitive patterns again and again - a task easily automated
> in languages with simpler syntax. There are even books on C++
> patterns, and some C++ experts take pride in being able to execute
> those patterns with computer-like precision - something a computer
> should be doing to begin with.
I agree here as well. C++'s syntax is awful, and its deduction rules
are even worse. This is what made Boost.Lambda such a pain to use. A
good macro system would help here as well.
> C++ programs are slow: even though the compilers are good at
> micro-optimizing the code, programmers waste their time writing
> repetitive patterns in C++ and debugging memory corruption instead of
> looking for better algorithms that are far more important for speed
> than silly micro-optimizations.
As stated before, macros would be nice. Unlike template meta
programming, I find that quasi quotation macros make sense.
> It's hard to find good programmers for C++ projects, because most of
> the good programmers graduated to languages like Lisp or avoided C++
> altogether. C++ attracts unimaginative fellows with herd mentality.
> For creative projects, you want to avoid them like a plague.
It seems to be quite easy to find good programmers for C++ projects.
It's hard to find great C++ programmers; the kind that think outside the
proverbial box. Lisp seems to encourage such thinking by its very
nature of being a programmable programming language. They're there, but
they are also much more rare.
Whether this is a good or a bad thing is left to the company.
> It is my opinion that all of the above makes C++ a very bad choice for
> commercial game development.
Since you stated many strengths of Lisp, it is natural that on most of
these statements, Lisp is better. On the whole, Lisp and C++ are very
similar (except with regards to macros), Lisp and Python/Java/C#/your
favorite dynamic language even more so. Lisp and C++ just take
different approaches to what the default assumption should be:
Lisp tries to assume the most general, and let the programmer specify
specifics when the generic solution is too slow. Lisp prematurely
pessimizes.
C++ tries to assume the fastest possible thing, and let the programmer
specify generics when needed. C++ prematurely optimizes.
So which is worse? Premature optimization is the root of all evil, but
premature pessimization is the leaf of no good. I prefer to start with
the general case and then optimize the hell out of it if needed, but
other programmers differ.
-- MJF
>> Most important, C++ syntax is irregular, and you often find yourself
>> typing repetitive patterns again and again - a task easily automated
>> in languages with simpler syntax. There are even books on C++
>> patterns, and some C++ experts take pride in being able to execute
>> those patterns with computer-like precision - something a computer
>> should be doing to begin with.
>
>
> *UNCLEAR BULLSHIT*
> i'm not 100% sure what you're talking about here. if you're talking
> about repetetive syntax, i already conceded that above. but if you're
> talking about *design patterns*, then you don't have a clue what you're
> talking about.
>
> design patterns are constructs used to model the behaviour of code. they
> are not literal patterns of code.
But there is no reason they can't be. After all, structures are a pattern
with all the following properties:
* (Speed)
You can access all the members of a structure in constant time.
* (Atomicness)
Given a pointer to a structure, you can access all of the members.
* (Heterogeneous)
A structure's members can hold different types
Given these requirements, it's trivial to write a quasi quotation macro
that has all these properties; just place the members down in sequential
positions in memory. Why not automate that?
This is where Lisp macros shine; they allow you to express design
patterns *as code*, making describing them much more explicit, and using
them much easier. Templates somewhat serve this purpose, but I find
template meta programming to be way too confusing compared to the simpler
and more direct quasi quotation system. Compare the template meta
programming examples in Alexandrescu's "Modern C++ Design" to the quasi
quotation examples in Graham's "On Lisp".
-- MJF
To move away from the technicalities, you have basically three memory
management options in C++:
1) Pass by value. Sometimes inefficient, easy to understand.
2) Pass by pointer/reference. Usually efficient, memory management becomes
complex and scattered throughout the code.
3) Pass by smart pointer. Easy to understand, but usually more overhead than
a GC.
(This is not to say that one of these methods must be used exclusively
throughout a program).
Alex
> the very nature of the c++ compile-link model is designed to allow you
> to work in modules. poor design could force you to recompile the whole
> thing for a trivial change, but that's not the language's fault, it's
> the programmer's.
>
> as for specificially executing a small portion, look up unit testing.
What if you make a change to a module that every other module depends on?
Then you have to recompile the whole program, which you wouldn't always
have to do in Lisp. Unit testing is not a practical replacement for a REPL.
If you just want to (say) play around with some standard library functions
to prototype some new code, you cannot do that with a unit testing
framework. The Lisp philosophy is that if you want to execute some bit of
code, you should just be able to say: "execute this bit of code". There
should not be a unit testing framework complicating such a simple action.
>> The static type system locks you into a certain design, and you can't
>> test new ideas, when they come to you, without redesigning your
>> whole class hierarchy.
>*SURREAL BULLSHIT*
>there is so little sense in this statement that i don't even know how to
>argue it. suffice it to say that every statement in the above is at
>least mostly false.
It's semi-bullshit. Static type systems can make refactoring more tricky.
Occasionally you might have to redesign your class hierarchy. The OP is just
exaggerating here.
> [discussion of game engine/logic separation]
Writing different modules of a program in a different language is not an
optimal modularisation technique. Ideally you would be able to write all
modules of your program in the same language (while still keeping them
entirely separate, if you wished). C++ makes this difficult in some cases.
The idea that high-level scripting languages are less powerful than C++ is
pretty absurd, given that they tend to have much better abstraction
features. The reason they are needed is that they are more powerful (at
least in the sense of fewer lines of code for the same bang).
Alex
With C++ you can't vary safe/fast mode on a per-function or per-line
basis.
> > You can not even change function definitions while the program is
> > running and see the effects live (the ultimate debugging tool).
>
> Nothing to do with the language. Such a debugging tool could be developed,
> why not develop it?
> I myself wouldn't use it.
Actually, you can change the program during execution (at least in VC++
7.1); however, in half of the cases you have to stop and recompile it.
> > Alternatively, you can't execute a small portion of the program
> > without compiling and linking the whole thing, then bringing your game
> > into a specific state where your portion of the code is being executed.
>
> That's because there's no such thing as "half a program". If you really want this, copy-paste it to another file and just append:
>
> int main(){}
> to the end of it.
So cut the code into another file, add "main()", and stop/start the program each
time??? In Lisp you can share intermediate data between small
portions of the program.
> > The static type system locks you into a certain design, and you can't
> > *test* new ideas, when they come to you, without redesigning your
> > whole class hierarchy.
>
> Bullshit. Vague bullshit.
You are forced to think of types all the time. It's a kind of unnecessary
foresight in many cases.
> > C++ is so inflexible, even those who do use it for games, have to
> > write their game logic in some other language (usually very slow,
> > inexpressive and still garbage collected). They also have to interface
> > the two languages.
>
> Provide an example. I myself foresee no reason or motive to do or have to do
> this.
You have to use an interpreter for scripting. It would hardly be an
interpreter of C++.
> > C++ lacks higher-order functions. Function objects emulate them
> > poorly, are slow and a pain to use. Additionally, C++ type system does
> > not work well with function objects.
>
> "function objects". Get over it! It's just syntatic sugar!
Really? IMHO they are a very limited imitation of closures.
> > C++ programs can not "think" of new code at run-time, and plug that
> > new code into themselves in compiled form. Not easily, anyway.
>
> "think of new code at run-time". That's because it takes intelligence to
> write code, something which computers lack. As for the code coming from
> somewhere else, well it's done extremely easily actually - we call it
> dynamic linkage.
I.e. C++ self-reflection sucks.
> > C++ coding feels very repetitive, for example, when writing class
> > accessors, you often have to write const and non-const methods with
> > completely identical function bodies. Just look at STL.
>
> Incorrect.
>
> If both function bodies are identical, then there's no need to write a non-
> const version.
>
> If there exists both a const version and a non-const version, then this
> indicates that one version alters the object, while the other doesn't.
> Conclusion: different code.
So "cut and paste" technology is used all time ;-))
> You could also make the non-const version call the const version, and then
> just do something extra.
>
> > When programming in C++ you feel like a blind person trying to draw
> > something. You don't _see_ the data structures that your procedures
> > will operate on. Lisp programming is much more visual.
>
> "procedures"? Never heard of them. I've heard of "functions" alright. I must
> say I don't... see... your argument, no pun intended.
> If you have a function which takes in an object of a certain class, or as
> you call it "data structure", then... (actually, it's so simple I'm not even
> going to finish this paragraph).
It means that in Lisp you can easily try the current piece of code, see that
it works, and then write another small piece. In C++ you are forced to write
as much as possible before you start to visualize results (maybe because
"edit and continue" doesn't work properly?).
> > C++ lacks automatic memory management and so it encourages copying
> > objects around to make manual memory management manageable.
>
> int auto k = 4;
>
> int* auto p_w = new int(4);
From MSDN:
"The auto storage-class specifier declares an automatic variable, a
variable with a local lifetime. It is the default storage-class
specifier for block-scoped variable declarations
An auto variable is visible only in the block in which it is declared.
Few programmers use the auto keyword in declarations because all
block-scoped objects not explicitly declared with another storage
class are implicitly automatic. Therefore, the following two
declarations are equivalent:
// auto_keyword.cpp
int main()
{
auto int i = 0; // Explicitly declared as auto.
int j = 0; // Implicitly auto.
}
"
Is that automatic memory management ???
> > Reference-counting schemes are usually slower than modern garbage
> > collectors and also less general.
>
> Which "garbage collector"? "less general" = vague bullshit.
What's the difference ??? Anyway, reference-counting is slower than
modern generational GCs.
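(To make the claimed overhead concrete, here is a hypothetical minimal counted
pointer: every copy, assignment and destruction has to touch the shared count,
bookkeeping that a tracing collector never does on a plain pointer copy.)

#include <cstddef>

template <class T>
class Counted {
    T*           obj_;
    std::size_t* count_;
    void release() { if (--*count_ == 0) { delete obj_; delete count_; } }
public:
    explicit Counted(T* p) : obj_(p), count_(new std::size_t(1)) {}
    Counted(const Counted& o) : obj_(o.obj_), count_(o.count_) { ++*count_; } // copy: bump
    Counted& operator=(const Counted& o) {                                    // assign: bump + drop
        if (this != &o) { ++*o.count_; release(); obj_ = o.obj_; count_ = o.count_; }
        return *this;
    }
    ~Counted() { release(); }                                                 // destroy: drop
    T& operator*() const { return *obj_; }
};

// Counted<int> q = p; bumps the count on every such copy; a generational GC
// pays nothing here and only scans live objects when it actually collects.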
> > Most important, C++ syntax is irregular, and you often find yourself
> > typing repetitive patterns again and again - a task easily automated
> > in languages with simpler syntax.
>
> I don't see your argument. I've never encountered such.
It means no macros. The C preprocessor and templates are hardly comparable
with Lisp macros.
> > C++ programs are slow: even though the compilers are good at
> MS-DOS was written in C++. Window XP was written in C++. Linux was written
> in C++.
>
> Come to think of it, what *wasn't* written in C++?
You are mistaken: MS-DOS and Linux were not written in C++ (maybe
because C++ compilers at that time were very buggy). It is also very
doubtful that the Windows XP core was written in C++.
Many, many things were not written in C++.
Lisptracker
> I do know what a C++ reference is. I was using the term rather ambiguously
> between a reference in general (i.e. a C++ pointer or a C++ reference) and
> an actual C++ reference, because it doesn't make much difference.
ah, but it does, because there is a difference between general
references and c++ references, and that difference invalidates your
entire argument.
> References increase memory management complexity just as much as pointers
> do; the only difference is that you can't really do any memory management
> with a reference, so it's only /correctly/ useable in bits of code which
> can take a reference/pointer to a data structure without worrying how it
> has been allocated. This does not in any way simplify the management of
> references (general concept) in C++ -- you still have to think about who
> "owns" the memory that is referenced (general concept), and either keep all
> the details in your head, or use C++ smart pointers along with an
> inefficient reference-counting method or some such.
c++ references are non-owning references ("weak" references if you
will). therefore, no management required, or even really logical.
you are correct when you say that you do have to be concerned that the
underlying data does not go out of scope before the reference. however,
in practical code, all references are always valid. unless you perform
some acrobatics (ie, bad code design) or are dealing with threading
issues, references will naturally go out of scope before the "real"
object. there is no "management". the only things you do to references
is make them and use them. you can't copy them around really.
you really never need to know how data was allocated or where it is in
memory (and even which memory it is in) to work with it in c++, unless
you are allocating or freeing it, and that responsibility should be in
the same place. but by convention, when data is passed or returned by
reference, it is not the responsibility of any client code.
> If it was possible to
> write real-world C++ programs using only references and not pointers, I
> would take your point,
of course it's possible. but it's also possible to build an airplane
from scratch using only a screwdriver. it's just not really smart, or easy.
using references only is actually easy to do, it just unnecessarily
limits your design options.
> but the fact remains that passing by
> reference/pointer invloves more thinking about memory management than
> passing by value, which cannot possibly cause a memory leak (unless you
> have something funky in a copy constructor).
references cannot cause memory leaks.
when passing bare pointers around (which i personally frown on), you
have to make clear in documentation who owns the pointer. otherwise,
yes, you can end up with memory leaks. or, you can do it the right way
and pass smart pointers around.
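(a sketch of that, using std::shared_ptr so it compiles today --
std::tr1::shared_ptr / boost::shared_ptr at the time; Texture and load_texture
are made-up names:)

#include <memory>
#include <string>

struct Texture { std::string name; };   // made-up type

// Whoever holds the last copy frees the Texture; the interface needs no
// "caller must delete" comment.
std::shared_ptr<Texture> load_texture(const std::string& name) {
    std::shared_ptr<Texture> t(new Texture);
    t->name = name;
    return t;
}

int main() {
    std::shared_ptr<Texture> t = load_texture("grass");
    std::shared_ptr<Texture> backup = t;   // both share ownership; count is 2
    return 0;                              // freed exactly once, when the last copy dies
}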
> To move away from the technicalities, you have basically three memory
> management options in C++:
>
> 1) Pass by value. Sometimes inefficient, easy to understand.
> 2) Pass by pointer/reference. Usually efficient, memory management becomes
> complex and scattered throughout the code.
false. memory management becomes complex and scattered if you make it
complex and scatter it. the use or non-use of pointers or references
does not cause or solve that.
in good design, memory is freed by the same entity that allocated it.
the use of pointers or references does not invalidate good design. it
allows you to if you so choose, and sometimes that's a valid design
decision. but if your memory management is scattered and out of control,
that's your fault, not any language's.
> 3) Pass by smart pointer. Easy to understand, but usually more overhead than
> a GC.
i honestly have never heard that before in my life. please show me
numbers. here are mine, off a paper advocating garbage collection
(http://www.lisp-p.org/wgc/) no less:
  manual   gc      type
  0.0      0.0     static allocation
  0.0      0.9     individual blocks
  0.9      16.3    individual lists
  0.3      15.3    blocks in random order (general case)
50 times slower. hm.
now, this page (http://www.boost.org/libs/smart_ptr/smarttests.htm)
shows that the worst case performance for smart pointers is about 3
times slower than naked pointers. therefore, smart pointers are still
about 17 times faster than garbage collection. again, please show me
your numbers.
the smart pointer included with my compiler std::auto_ptr<T> takes up
exactly the same amount of memory as a bare pointer. copying it costs one
4-byte stack push, one function call, one 4-byte memory access and one
4-byte memory write. interleaved on a modern processor, those operations
(according to my estimate) would cost the same as a simple bare-pointer
copy in practical usage (taking things like return-value optimization
into account). in other words, there should be no measurable difference.
now, smart pointer is a category, not a specific type, so it's quite
possible that there are smart pointers out there that are more expensive
than a given garbage collector. but show me some numbers. i don't see
either std::auto_ptr<T> or std::tr1::shared_ptr<T> (aka
boost::shared_ptr<T>) being one of them.
one thing that most gc advocates i've talked to don't seem to get is
that when using garbage collection, *all* pointers are smart pointers.
the only difference between gc smart pointers and traditional c++ smart
pointers is that when they go out of scope, c++ smart pointers free the
memory, whereas gc smart pointers just indicate (doesn't matter how they
do it) to the garbage collector that the memory can be freed at its
convenience. of course, smart pointers in c++ could do the same thing,
and i have one that does. garbage collection is just a special case of
the smart pointer design pattern.
besides, with a garbage collected-only language, you can't practically
implement raii, and i think that's a serious flaw.
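(for anyone who hasn't met the term, a minimal raii sketch -- the destructor
releases the resource on every exit path; the file name is made up:)

#include <cstdio>

// raii in one class: the constructor acquires the resource, the destructor
// releases it on every exit path, including exceptions.
class File {
    std::FILE* f_;
public:
    explicit File(const char* path) : f_(std::fopen(path, "r")) {}
    ~File() { if (f_) std::fclose(f_); }
private:
    File(const File&);              // non-copyable (pre-C++11 style)
    File& operator=(const File&);
};

int main() {
    File cfg("settings.ini");       // made-up file name
    // ... use cfg here; no explicit close needed, even if this block throws
    return 0;
}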
there are no perfect solutions, but by cobbling together a collection of
imperfect solutions, you can still do pretty well. i don't consider
garbage collection "new, untested, & questionable", as this page
(http://www.lisp-p.org/wgc/) bizarrely claims "C & C++ programmers" do.
if it is the best solution, i will use it. if it's not, i'll use
something else. but not having options sucks.
indi
C and assembly. Look at the source for it.
And I don't think MS-DOS could've been written in C++, wasn't that before
C++ was around?
Personally, I couldn't care less! What're you trying to accomplish, change
someone's mind? No, you're just trolling. So get lost, and let us do some
real work.
-Howard
>>the very nature of the c++ compile-link model is designed to allow you
>>to work in modules. poor design could force you to recompile the whole
>>thing for a trivial change, but that's not the language's fault, it's
>>the programmer's.
>>
>>as for specifically executing a small portion, look up unit testing.
>
>
> What if you make a change to a module that every other module depends on?
> Then you have to recompile the whole program, which you wouldn't always
false. interface and implementation are separated by a bright line in c++.
if you change the *interface*, then yes, you have to recompile
everything. i don't know lisp, but i cannot believe that the same is not
true there.
if you change the *implementation* then you only have to recompile that
module and relink.
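(a concrete illustration of that bright line, with made-up file names --
client code includes only the header:)

// physics.h -- the *interface*; client modules include only this
#ifndef PHYSICS_H
#define PHYSICS_H
double fall_distance(double accel, double time);
#endif

// physics.cpp -- the *implementation*
#include "physics.h"
double fall_distance(double accel, double time) {
    // editing this body means recompiling physics.cpp and relinking;
    // nothing that merely #includes physics.h needs to be rebuilt
    return 0.5 * accel * time * time;
}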
> If you just want to (say) play around with some standard library functions
> to prototype some new code, you cannot do that with a unit testing
> framework.
why not?
> The Lisp philosophy is that if you want to execute some bit of
> code, you should just be able to say: "execute this bit of code". There
> should not be a unit testing framework complicating such a simple action.
fair enough, but this is just a difference in philosophy. i don't see how
executing isolated "bits" of code is useful for any purpose but testing.
hence, unit testing.
>>>The static type system locks you into a certain design, and you can't
>>>test new ideas, when they come to you, without redesigning your
>>>whole class hierarchy.
>
>
>>*SURREAL BULLSHIT*
>>there is so little sense in this statement that i don't even know how to
>>argue it. suffice it to say that every statement in the above is at
>>least mostly false.
>
>
> It's semi-bullshit. Static type systems can make refactoring more tricky.
> Occasionally you might have to redesign your class hierarchy. The OP is just
> exaggerating here.
and making a pointless argument. enough redesign will lead to complete
rewrites in any language. refactoring can always be tricky. about the
only point of contention is how much redesign and refactoring cause
problems in a given language, compared to another language. how would
you measure such a nebulous concept? if it can't be measured, it
shouldn't be argued.
besides, the argument can easily be turned around. if static typing means
design lock-in, then looser typing makes code ambiguous, because changes
may have unintended side-effects that stricter typing could avoid. there
is no point in making a comparison on a point like that.
>>[discussion of game engine/logic separation]
>
> Writing different modules of a program in a different language is not an
> optimal modularisation technique.
i never claimed it was optimal. i stated that every language has its
strengths and weaknesses, and if you are going to be modularizing
anyway, and one module would be better done in another language,
go for it.
> Ideally you would be able to write all
> modules of your program in the same language (while still keeping them
> entirely separate, if you wished). C++ makes this difficult in some cases.
false. of course it's true that in a perfect world it would be best to
write a program all in one perfect language that does everything well.
nothing i have said suggests that that's not a good idea. however, as i
explained, game logic is not program logic, it is actually program data.
therefore, it is not a *module* of the program, it is *input* to that
program.
given that the game logic is so conceptually separate from the game
program itself, it's not really such a big step to do it in another
language that is simpler to interpret at runtime. as a bonus, you can
also choose a language that is easier on non-programmers so that more
people can make content for your game.
in fact - if your game is *really* kick-ass, it might be able to handle
game logic written in several languages, so that content creators can
choose what they like best.
as for your claim that c++ makes interfacing with other languages
difficult in some cases - c++ has been interfaced with thousands of
other languages. explain which cases you mean.
> The idea that high-level scripting languages are less powerful than C++ is
> pretty absurd, given that they tend to have much better abstraction
> features. The reason they are needed is that they are more powerful (at
> least in the sense of fewer lines of code for the same bang).
i usually avoid using the word "powerful" in this context, because it
leads to idiotic discussion. if i slipped it in this time, mea culpa.
"power" is nonsense in terms of a programming language. you claim that
power means better abstraction facilities. i could just as rightly claim
that it means performance.
in the general case, good abstraction negates good performance. why? you
said it yourself: "fewer lines of code for the same bang (ie, it does
more, i guess)". simplifying you get: "each line of code does more".
considering that "doing" anything "costs" in terms of speed and/or
memory, i could say: "each line of code costs more". therefore higher
abstraction is more expensive. qed.
now, if you need better abstraction and you are willing to sacrifice
performance, then by all means, use a high-level scripting language.
such is the case for game logic - it's not necessarily time critical,
and the more people that can contribute with greater ease, the better.
but don't ever waste my time by trying to argue that high-level
scripting languages are perfect for every job. no language is.
indi
> And I don't think MS-DOS could've been written in C++, wasn't that before
> C++ was around?
The first native C++ compiler I used under DOS was Zortech C++. This
must have been around 1988/1989.
Petter
--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?
Note that it may be more innocent (even if it is Mike Cox!); it could be just
the overenthusiastic lisp newbie effect (no one more zealous than the
recently converted) - if so, then, Neo-LISPer, please don't do that.
Two wrongs don't make a right [2].
I urge people in both language communities to write games in whatever
languages they bloody well feel like, and to not feed the trolls. Like
I probably just did by posting this. Gaah.
[2] See recent silly "Why Lisp Sucks for Games" thread on comp.lang.lisp
caused by an article by someone self-identifying as a microsoft c++
programmer and apparently feeling some compulsion to attack (somewhat
poorly) the use of lisp in games. Or, better, don't...
No it doesn't.
> c++ references are non-owning references ("weak" references if you
> will). therefore, no management required, or even really logical.
No management is /possible/. If you use a reference incorrectly, management
might be /required/, but not actually performed. Say you have a function
which returns a reference to a newly allocated array (this is obviously a
dreadful thing to do, but you could do it) -- you would get a memory leak.
C++ itself does not stop you writing code like this. Hence, you still have
to think about memory management when you write code using references, if
only to say: "right, it's ok to use a reference instead of a pointer here".
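(The dreadful function in question might look something like this; nothing
below ever deletes the vector:)

#include <vector>

// Returns a reference to heap memory that has no owner.
std::vector<int>& make_table() {
    return *(new std::vector<int>(1024));
}

int main() {
    std::vector<int>& table = make_table();   // compiles, looks innocent...
    table[0] = 1;
    return 0;                                 // ...and leaks: nobody can sensibly
}                                             // write `delete &table` in real code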
> however, in practical code, all references are always valid. unless you
> perform some acrobatics (ie, bad code design) or are dealing with threading
> issues, references will naturally go out of scope before the "real"
> object. there is no "management". the only things you do to references
> is make them and use them. you can't copy them around really.
I agree. But it's not trivial to use references correctly. You have to be
able to prove to yourself that what you actually want is a weak reference
to some data before you can be sure that you are not introducing memory
allocation bugs. It's not hard to do this, but you're still thinking about
memory management.
All this discussion is really missing the point anyway. Let's say for the
sake of argument that everything you've said re references is correct. You
still have exactly the difficulties I described when you're using pointers
in C++ -- you can either have complex memory management code, or you can
use inefficient reference counting (broadly speaking, there is obviously a
middle ground, and in simple programs memory management isn't too hard).
>> If it was possible to
>> write real-world C++ programs using only references and not pointers, I
>> would take your point,
>
> of course it's possible. but it's also possible to build an airplane
> from scratch using only a screwdriver. it's just not really smart, or
> easy.
That's exactly my point. Pointers (if not references) make memory management
hard. Writing C++ programs without using pointers is painful.
> when passing bare pointers around (which i personally frown on), you
> have to make clear in documentation who owns the pointer. otherwise,
> yes, you can end up with memory leaks. or, you can do it the right way
> and pass smart pointers around.
Yes, and to get back to what I was saying, smart pointers are not as
efficient as a good GC. This is, I emphasise, different from saying that a
program which uses smart pointers in places is less efficient than the same
program using a GC.
> in good design, memory is freed by the same entity that allocated it.
> the use of pointers or references does not invalidate good design.
In a simple program this is possible. You can't seriously be suggesting that
this little nugget of design wisdom (wise as it is) is going to make
programming in C++ almost as easy as programming in a language with a GC.
In my personal experience this just isn't the case (and I'm perfectly
capable of using smart pointers, etc., when it makes sense).
>> 3) Pass by smart pointer. Easy to understand, but usually more overhead
>> than a GC.
>
> i honestly have never heard that before in my life. please show me
> numbers. here are mine, off a paper advocating garbage collection
Reference counting is a really poor garbage collection method (both in terms
of performance and because of the problem of circular references). If a
large portion of your program uses reference counting, it will be less
efficient than the same program in a comparable GC'd language (unless the
comparable GC'd language is slower than C++ for other reasons, of course --
often the case).
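(The circular-reference case in a few lines, sketched with std::shared_ptr --
boost::shared_ptr at the time -- and a made-up Node type:)

#include <memory>

struct Node {
    std::shared_ptr<Node> next;   // strong (counted) reference
};

int main() {
    std::shared_ptr<Node> a(new Node);
    std::shared_ptr<Node> b(new Node);
    a->next = b;
    b->next = a;   // cycle: each Node keeps the other's count above zero
    return 0;      // a and b go out of scope, yet neither Node is ever freed;
}                  // a tracing collector would reclaim both (weak_ptr is the manual fix)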
> now, this page (http://www.boost.org/libs/smart_ptr/smarttests.htm)
> shows that the worst case performance for smart pointers is about 3
> times slower than naked pointers. therefore, smart pointers are still
> about 17 times faster than garbage collection. again, please show me
> your numbers.
If this statement were remotely true, GC'd languages would use smart
pointers (which usually means reference counting) and not generational GC.
Most don't -- certainly not serious compilers like javac or cmucl. I didn't
read the paper very carefully, but it appeared to be comparing GC to manual
memory management in a limited variety of test cases. Smart pointers are
not really manual memory management; they're a poorly implemented GC
extension to the language. If you don't use them very often, you may get
better overall performance since you'd have 90% manual memory management
and 10% inefficient automatic GC (or something like that). Also, consider
space efficiency, in which reference counting quite obviously loses (like
everything here, this can be a minor or major issue depending on the
program you're writing). The way you compute the 17x figure is not
completely unreasonable, but given that it's a pretty crude combination of
results from two different tests both chosen to support a particular
argument, we shouldn't be too surprised if it's completely wrong. Once
again: if smart pointers are so fast, GC'd languages would use reference
counting and get roughly the same performance; in reality most don't.
IMO there's really no point in throwing around figures in a discussion about
GC performance. There are too many factors. I accept that manual memory
management is faster than GC in most programs, but when memory management
gets hairy, reference counting smart pointers are a pretty inefficient way
of getting automatic memory management. In these cases, a real,
non-reference-counting GC is likely to perform better, if only because it
will be more thoroughly tested and integrated with the language.
Alex
Depending on the amount of memory and the size of each allocated block
garbage collection can be faster, or manual handling can be faster. If
you have many small to medium sized objects then garbage collection is
faster, otherwise not so. Copious amounts of research exist. Start
somewhere around here: http://www.hpl.hp.com/personal/Hans_Boehm/gc/.
Retarded? It has nothing to do with being retarded. Sometimes the
requirements of a program are such that you cannot predict when a
resource can be freed without designing this into your program
(I can't really think of many examples where this is not the case)
and that can be hard. When GC can sometimes even make your program
faster... No brainer. (And for the reading impaired, I am not
saying... Nah, figure it out yourself)
--
Thomas.
This paragraph demonstrates a serious misunderstanding of the
difference between scope, extent, and the way GC works. I suppose
that it is understandable that a C++ programmer might get scope and
extent confused (I certainly did when all I knew was C and C++). I
will try to make my explanation simple:
The scope of a variable, as you probably already know, is the portions
of source code from which it is accessible. The extent of a variable
is the period of time during execution of the program when the
variable is accessible. A Garbage Collector is concerned with the
extent of variables (and other places where objects can be referenced
from).
Simple distinction: scope is a source-code concept; extent is a
run-time concept.
In a C or C++ program, when you declare a local variable, what you get
is called a "lexically scoped, dynamic extent" variable. This means
that the scope of the variable is determined by lexical markers ("{"
and "}" here) and the variable only lasts as long as program control
is within this same scope. This happens to correspond to the
implementation of variables by storing them on the stack.
In a Common Lisp, Scheme, Smalltalk, Ruby, ML, or Haskell (among
others) program, when you declare a local variable, what you get is a
"lexically scoped, indefinite extent" variable. Once again, the scope
of the variable is determined by some lexical marker, but now the
extent is not known. It is no longer a simple matter to determine
when the variable is not needed any more. Sometimes it can be proven
that a variable is dynamic extent, or it may be declared that way, but
otherwise it must be assumed to be indefinite extent.
In C there is a concept of a "static" variable local to a function. This
variable also has "indefinite extent."
In addition, Common Lisp also has the notion of an "indefinite scope,
dynamic extent" variable. This is often referred to as a "dynamically
scoped" variable, but this is a misnomer. Common Lisp calls these
"special variables" to distinguish them from normal lexically-scoped
variables, and they must be declared special before use.
High-level languages use lexically scoped, indefinite extent variables
because of a useful concept known as higher-order functions.
Higher-order functions are functions that can treat functions as
normal objects. When combined with lexical scope, this allows
variables to be "closed over" by inner functions which may be passed
to other functions or returned. If a variable is "closed over" by a
function that is returned from a function, then that variable cannot
go away, because that function may still need it later on. Hence,
"indefinite extent."
The simplest way to manage these indefinite extent variables is to use
Garbage Collection. When the GC is invoked, it locates all live,
accessible objects and treats the rest of the heap as free space.
There are a number of ways of doing this, such as marking live objects
somehow as you trace them, or copying them over into a new space. The
primary difference between GC and reference-counting is that
reference-counting must do book-keeping every time a variable or a
reference is mutated, and GC does not.
There is no such thing as a "GC smart pointer," as evidenced by the
Boehm conservative collector which works on C programs. The closest
concept to a smart pointer would be the use of a bit which categorizes
a machine word as either a pointer or an integer, when you use a
precise collector (as opposed to a conservative collector). But this
occurs at a level below the actual language semantics; it is not
visible to the programmer.
--
;; Matthew Danish -- user: mrd domain: cmu.edu
;; OpenPGP public key: C24B6010 on keyring.debian.org
OK, mostly true. There are some exceptions though, for example relating to
the dynamic nature of CLOS and the types of sequences (e.g. replace a list
with a vector at runtime, and if your code only uses generic sequence
operations it won't break). Also, there need not be any recompiling in lisp
as such. You can recompile individual functions, or change the values of
individual variables. This can be a real help.
> fair enough, but this is just a difference in philosophy. i don't see how
> executing isolated "bits" of code is useful for any purpose but testing.
> hence, unit testing.
I really don't know what to say. I thought everyone found REPLs useful (but
perhaps not worth the trouble of implementing for a particular language, of
course). You can do the same stuff with unit testing, sure. It's like the
difference between the following two scenarios.
1) I want some bread. I go to the shop, buy some bread.
2) I want some bread. I create a shopping list containing the single item
"bread", the name of the shop and the expected result of the expedition
(i.e. I will have obtained some bread). I then go to the shop to buy the
bread, afterwards checking against the shopping list that I have indeed
bought bread.
It's very useful to be able to get the value of an expression in a few
seconds without having to edit and/or compile a file. I do it all the time
when I'm writing Lisp programs, and unit tests would be the Wrong Thing.
You should not be writing tests for prototypical code -- only
classes/functions/whatever which have stable interfaces. The last thing you
want is to have lots of throwaway legacy test code hanging around.
> besides, the argument can easily be turned around. static typing means
> design locks, therefore looser typing ambiguates code because changes
> may have unintended side-effects that stricter typing could avoid. there
> is no point in making a comparison on a point like that.
Yes, fair point. Although it should be pointed out that C++'s static type
system is pretty gross compared to the state of the art (e.g. ML, Haskell,
etc.).
> i never claimed it was optimal. i stated that every language has it's
> strengths and weaknesses, and if you are going to be modularizing
> anyway, and one module would be better to be done in another language,
> go for it.
Yes I agree.
> false. of course it's true that in a perfect world it would be best to
> write a program all in one perfect language that does everything well.
> nothing i have said suggests that that's not a good idea. however, as i
> explained, game logic is not program logic, it is actually program data.
> therefore, it is not a *module* of the program, it is *input* to that
> program.
>
> given that the game logic is so conceptually separate from the game
> program itself, it's not really such a big step to do it in another
> language that is simpler to interpret at runtime. as a bonus, you can
> also choose a language that is easier on non-programmers so that more
> people can make content for your game.
Mostly fair points. However in Lisp you could actually compile the game
logic code at runtime, and use it as an input to the game engine program
without having to write an interpreter. All I'm saying is that this is a
very attractive alternative to having a separate script language, and in
Lisp you would still have the possibility of implementing a separate script
language, just as in C++.
> in the general case, good abstraction negates good performance. why? you
> said it yourself: "fewer lines of code for the same bang (ie, it does
> more, i guess)". simplifying you get: "each line of code does more".
> considering that "doing" anything "costs" in terms of speed and/or
> memory, i could say: "each line of code costs more". therefore higher
> abstraction is more expensive. qed.
Rather depends on how efficient each line of code is, and whether you end up
with the same number of lines, but I don't want to pick an argument on this
topic; I think we basically agree.
> now, if you need better abstraction and you are willing to sacrifice
> performance, then by all means, use a high-level scripting language.
> such is the case for game logic - it's not necessarily time critical,
> and the more people that can contribute with greater ease, the better.
> but don't ever waste my time by trying to argue that high-level
> scripting languages are perfect for every job. no language is.
I really wouldn't argue that. I'd argue that in Lisp you could have the
convenience of loading and compiling game logic code at runtime /and/ the
performance of a mature compiled language. No sacrifices necessary ;)
Alex
don't feed the troll!
By any reasonable standards, C++'s type system is a brain-damaged
throwback to the 1960s. Much better static type systems can be found
in languages like Haskell and ML. Hell, there's even an assembler with
a better type system:
http://www.cs.cornell.edu/talc/
> > C++ lacks higher-order functions. Function objects emulate them
> > poorly, are slow and a pain to use. Additionally, C++ type system does
> > not work well with function objects.
>
> "function objects". Get over it! It's just syntatic sugar!
Yeah, and with just a bit more typing, you've got yourself a Turing
machine. You can do anything you want in C++, so why would you ever
want to switch to a better language?
> > When programming in C++ you feel like a blind person trying to draw
> > something. You don't _see_ the data structures that your procedures
> > will operate on. Lisp programming is much more visual.
>
> "procedures"? Never heard of them. I've heard of "functions" alright. I must
> say I don't... see... your argument, no pun intended.
Where do you think the word "function" comes from? It comes from
mathematics, where it has a very well-defined meaning and certain very
well defined properties. One of these properties is that for any
input, it always returns the same output. This particular meaning has
been universally adopted by the computer science community. In almost
all languages (Miranda and Haskell are the only two exceptions I
know), what you call "functions" are actually procedures or
subroutines, in that they can return different outputs for the same
input. This includes pseudo "functional" languages like ML too, since
it has assignment, references and mutable arrays.
Vladimir
And MS-DOS was first written in the early 80's.
> I really don't know what to say. I thought everyone found REPLs useful (but
> perhaps not worth the trouble of implementing for a particular language, of
> course). You can do the same stuff with unit testing, sure. It's like the
> difference between the following two scenarios.
>
> 1) I want some bread. I go to the shop, buy some bread.
> 2) I want some bread. I create a shopping list containing the single item
> "bread", the name of the shop and the expected result of the expedition
> (i.e. I will have obtained some bread). I then go to the shop to buy the
> bread, afterwards checking against the shopping list that I have indeed
> bought bread.
>
> It's very useful to be able to get the value of an expression in a few
> seconds without having to edit and/or compile a file. I do it all the time
> when I'm writing Lisp programs, and unit tests would be the Wrong Thing.
> You should not be writing tests for prototypical code -- only
> classes/functions/whatever which have stable interfaces. The last thing you
> want is to have lots of throwaway legacy test code hanging around.
This is interesting. I thoroughly agree with you of course, the REPL is
invaluable.
The interesting thing is, I see my unit tests as a serialisation of my
REPL sessions... because that's literally what they are. I wrote an ELisp
function that takes a buffer full of my interactions with beanshell (a
Java interpreter), and creates a new buffer containing source for a
collection of unit tests that JUnit can execute. Each test simply
executes the code that I typed in the REPL, line-by-line, and checks that
any values returned are the same as they were when I was using the REPL.
The only caveat is, of course, dealing with unprintable values. That's
usually pretty easy to fix with just a little manual editing of the
resulting tests though.
It's been a long while since I wrote a unit test by hand - these days I do
them all this way. I don't think it's a coincidence that I have much
higher test coverage than any of my colleagues who write all their tests
by hand.
Cheers,
Bill.
--
"If you give someone Fortran, he has Fortran. If you give someone Lisp,
he has any language he pleases." -- Guy Steele
LOL, you're hilarious. Stop researching and write a game, then tell us what
you should write it in.
WTH
> Petter Gustad wrote:
> > "Joe Laughlin" <Joseph.V...@boeing.com> writes:
> >
> >> And I don't think MS-DOS could've been written in C++,
> >> wasn't that before C++ was around?
> >
> > The first native C++ compiler I used under DOS was
> > Zortech C++. This must have been around 1988/1989.
> And MS-DOS was first written in the early 80's.
That was my point...
Predictable? Not so!
You can't always be sure how much the next Release() call is going to
cost!
Sometimes you drop the last reference on some object which is holding
on to a bunch of last references to other objects which are holding on
to more last references to yet more objects ...
>> And MS-DOS was first written in the early 80's.
>
> That was my point...
I don't know what Tim Paterson used when he wrote the original QDOS in
6 weeks in 1980, but I very much doubt that it was C, which I think
was still a rather experimental thing used by strange cult of unixists
at that point.
--
(espen)
Well, you could always have release() stop working after a specific number
of objects have been released, and save the list of current workees so it
can continue the next time it's called...
That's incremental collection, and it has a cost. It does show the
possibility, though, even if I wouldn't want to use reference-counting for
it.
(Not by itself, anyway - it can be useful when the references are to data on
a disk. But I digress.)
> Recursion takes a while to get the hang of, but not long and then it can
> be quite elegant.
In my C/C++ programming daze, I used to use recursion when
appropriate. Coming from Basic and Fortran 77, C's ability to do
recursion was actually quite nice.
As for that (trimmed) long division problem... I thought the /
operator handled that.
--
An ideal world is left as an exercise to the reader.
--- Paul Graham, On Lisp 8.1
[snip]
What is it about Lisp that makes novices fall in love with it? They
don't actually *produce* anything with it, mind you, but they insist on
telling the world how superior it is to every other language.
Dude, write a good game in Lisp and that will impress us a lot more than
a load of blather about how C++ is unsuitable for game programming. The
thing is, we use C++ for game programming, so we know whether it is
suitable or not. We've even seen a few games in our time that are
written in Java, Delphi, Flash or VB. But from the Lispers, all we see
is the same old hot air...
- Gerry Quinn
> Predictable? Not so!
>
> You can't always be sure how much the next Release() call is going to
> cost!
>
> Sometimes you drop the last reference on some object which is holding
> on to a bunch of last references to other objects which are holding on
> to more last references to yet more objects ...
>
Sure, but at least you can predict things like total runtime. It's just
about possible to use reference counting in a realtime situation; it's not
possible for a large number of other collection algorithms.
>
>>
>> After that, no-one ever used lisp again...
>>
>> ..except the Jax & Daxter developers. Their game engine runs on an
>> interpretted lisp platform (I believe) and has spawned some of the most
>> impressive platformers I've ever seen...
>>
>> So the moral is....
>> I don't know, but I won't be switching to Lisp any time soon...
>> Maybe its good once you get the hang of it...
>> But I think it may be too recursive & bottom-up programming for most
>> brains
>> to want to deal with...
>
> I wonder which Lisp you were using. The modern Common Lisp has all kinds
> of ways to iterate, some simple, some complex, and one (LOOP) which is
> effectively a built-in iteration mini-language. So no one is forced to
> do recursion, and many Lispniks frown on it style-wise if an iterative
> approach would work.
>
Well, sounds like it's probably very different to the lisp we used 15 years
ago. All we could seem to do is have lots of brackets with calls to functions
embedded...
2*(5+Speed)+Velocity
looked something like:
(+ (* 2 (5 Speed +)) Velocity)
> Recursion takes a while to get the hang of, but not long and then it can
> be quite elegant.
>
>
> kenny
>
> -JKop
Many run on natural gas.
Note followup.
Paul
I say WE use it. It's not mandatory, but if you don't use it beware, coz
fragmentation will crash your program (out of memory) way before you are
really out of memory.
In this system, you have a set of chunks (blocks of contiguous memory). Game
objects reside in a particular chunk and allocations come out of that chunk.
Frees do nothing or are not used.
When a game is finished with a chunk (i.e. you exit the current level) you
throw out the chunk completely and all memory is lost.
Thus there is NO fragmentation.
This system also allows us to stream chunks of memory in and out, so we
might have two chunks:
A, B
A is used for the game level.
B is used for the game character.
We may switch character by throwing out B and loading up a new character
into B. The level stays in A and is unaffected by the operation.
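(A minimal sketch of one such chunk, as described above: bump allocation, no
per-object frees, and the whole thing thrown out at once. Alignment and the
placement-new of game objects are left out for brevity.)

#include <cstddef>
#include <cstdlib>

class Chunk {
    char*       base_;
    std::size_t size_;
    std::size_t used_;
public:
    explicit Chunk(std::size_t bytes)
        : base_(static_cast<char*>(std::malloc(bytes))), size_(bytes), used_(0) {}
    ~Chunk() { std::free(base_); }

    // Allocation is a pointer bump: O(1) and incapable of fragmenting.
    void* alloc(std::size_t n) {
        if (used_ + n > size_) return 0;   // chunk budget exceeded
        void* p = base_ + used_;
        used_ += n;
        return p;
    }

    // "Throwing out the chunk" at level exit: everything in it is gone at once.
    void reset() { used_ = 0; }

private:
    Chunk(const Chunk&);            // non-copyable
    Chunk& operator=(const Chunk&);
};

Game objects would be constructed into chunk.alloc(sizeof(Object)) with
placement new and simply never destructed, which matches "frees do nothing".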
As I don't know modern LISP, and I've never heard of a LISP game on a
CONSOLE, I can't argue why you can't do this in LISP.
If anyone else has failures, I doubt it's because of C++. I have noticed
that some C++ coders have the worst habits of overcomplicating simple
problems that I've ever seen. Again, not C++'s fault, though templates,
references, constructors and inheritance make it easy to write code that is
unreadable without a debugger.
To avoid failures, we restrict our usage of C++ to as simple a subset as possible. We
have our own memory management systems that are far better than garbage
collectors because they don't fragment memory at all, which is critical on a
console because you can't afford to run out of memory due to unforeseen
fragmentation.
Hi,
Gerry Quinn wrote:
|
| What is it about Lisp that makes novices fall in love with it? They
| don't actually *produce* anything with it, mind you, but they insist on
| telling the world how superior it is to every other language.
|
| Dude, write a good game in Lisp and that will impress us a lot more than
| a load of blather about how C++ is unsuitable for game programming. The
| thing is, we use C++ for game programming, so we know whether it is
| suitable or not. We've even seen a few games in our time that are
| written in Java, Delphi, Flash or VB. But from the Lispers, all we see
| is the same old hot air...
Please see:
for games written with Lisp,
http://www.franz.com/success/customer_apps/animation_graphics/naughtydog.lhtml
what is it about Lisp,
http://alu.cliki.net/RtL%20Highlight%20Film
Regards,
Jorge Tavares
> All this GC collection talk is not relevant to Console games programming.
> On consoles, (PS2, XBox, Gamecube), we use non-fragmenting memory
> architectures.
>
> I say WE use it. Its not mandatory, but if you don't use it beware, coz
> fragmentation will crash your program (out of memory) way before you are
> really out of memory.
>
> In this system, you have a set of chunks (blocks of contiguous memory). Game
> objects reside in a particular chunk and allocations come out of that chunk.
> Frees do nothing or are not used.
> When a game is finished with a chunk (i.e. you exit the current level) you
> throw out the chunk completely and all memory is lost.
>
Sure, variants on pooling seem pretty common in console coding from what
I've heard. If you know all the blocks of memory in a chunk are the same
size, would you consider running a GC anyway? This'd give you a
combination of no fragmentation (owing to how the memory's allocated) and
not having to wait 'til the end of the chunk's lifetime to reclaim memory.
> We have our own memory management systems that are far better than garbage
> collectors because they don't fragment memory at all, which is critical on
> a console because you can't afford to run out of memory due to unforeseen
> fragmentation.
I'll have to address this.
While I understand that you don't want to use GC on a console, where you
presumably want every scrap of performance you could possibly get, it isn't
true that GC fragments memory.
There is an entire class of copying collectors that, as a necessary
consequence of copying, defragment memory on the fly. You don't need to
lose half your memory to the GC, either; while it's true that you can only
use N-1 segments of memory for actual data, N need not be 2.
Of course, if N *isn't* 2 then you need to combine the copying collector
with something like a mark-and-sweep collector, which is probably slow.
I'm mostly interested in this from the viewpoint of OS design, where the
slow collector can be run in the idle loop while a fast generational
collector takes care of low-memory situations if they should occur, so this
probably doesn't apply to your situation.
That said, there's nothing fundamental preventing you from doing your own
memory management (Arena allocation, say) in Lisp - you just need a Lisp
version that has a lot more GC/memory-controlling declarations than current
editions do. See Naughty Dog.
End note:
GC-ed applications are probably no more a victim of fragmentation than
applications using malloc/free, and can be far less so.
You don't use malloc/free very much on a console, though, do you?
(loop for i from 0 to 10 collect i) ; Create a list of numbers from 0 to 10.
As are the less exotic macros:
(let ((lst '()))
  (dotimes (i 11)
    (push i lst))
  (reverse lst)) ; Create a list of numbers from 0 to 10.
(Not intended to be sensible examples, Lisp gurus...) This syntax does not
in any way prevent you from doing long division ;) Sounds like you were
probably just not /taught/ the imperative features of Lisp, and got the
erroneous impression that it was all lists and recursive functions. Lisp
doesn't force you to use recursion any more than C++ does.
Alex
> one thing that most gc advocates i've talked to don't seem to get is
> that when using garbage collection, *all* pointers are smart
> pointers. the only difference between gc smart pointers and
> traditional c++ smart pointers is that when they go out of scope, c++
> smart pointers free the memory, whereas gc smart pointers just
> indicate (doesn't matter how they do it) to the garbage collector that
> the memory can be freed at its convenience. of course, smart pointers
> in c++ could do the same thing, and i have one that does. garbage
> collection is just a special case of the smart pointer design pattern.
This is pretty much totally incorrect and shows a stunning lack of
understanding of how gc works (even obsolete designs, let alone state
of the art generational collectors). It also is a good indication of
why much of what else you say is confused and otherwise misguided.
If you want to spend cycles in a discussion of this topic, I strongly
suggest you get a book[1] on gc and read it first. At the very least
you will have a much better chance of not being dismissed as a kook.
/Jon
1. Suggestion: http://www.cs.kent.ac.uk/people/staff/rej/gcbook/gcbook.html
--
'j' - a n t h o n y at romeo/charley/november com
Maahes wrote:
>>>hmm. I remember vividly doing lisp at uni.
>>>I think the assignment was a simple long division problem. I remember that
>>>only a few people in the entire class managed to work out a way of achieving
>>>it... A problem that a newbie would do in C without breaking a sweat.
>>
>>Post the C version (or just give a fuller spec) and I'll try it in Lisp.
>>
>
> :) It was 15 years ago so I only have the vague memories and impressions
> left...
Well I saw "newbie would do in C without breaking a sweat" so I thought
you could just toss off the solution in your mail editor. :)
What is the idea? Write an algorithm that works the way we work when we
do long division, complete with remainders? How do you want the answer?
Just the integer result and remainder as, in C, a second result returned
in a writable parameter?
>
>
>>>After that, no-one ever used lisp again...
>>>
>>>..except the Jax & Daxter developers. Their game engine runs on an
>>>interpretted lisp platform (I believe) and has spawned some of the most
>>>impressive platformers I've ever seen...
>>>
>>>So the moral is....
>>>I don't know, but I won't be switching to Lisp any time soon...
>>>Maybe its good once you get the hang of it...
>>>But I think it may be too recursive & bottom-up programming for most
>>>brains
>>>to want to deal with...
>>
>>I wonder which Lisp you were using. The modern Common Lisp has all kinds
>>of ways to iterate, some simple, some complex, and one (LOOP) which is
>>effectively a built-in iteration mini-language. So no one is forced to
>>do recursion, and many Lispniks frown on it style-wise if an iterative
>>approach would work.
>>
>
> Well, sounds like it's probably very different to the lisp we used 15 years
> ago. All we could seem to do is have lots of brackets with calls to functions
> embedded...
>
> 2*(5+Speed)+Velocity
>
> looked something like:
>
> (+ (* 2 (5 Speed +)) Velocity)
You mean (+ 5 Speed), of course. And I guess that is just a made-up
example, because I do not recognize the physics.
How about S = 1/2at^2 + vi*t?
(defun distance (accel time initial-velocity)
  (+ (* initial-velocity time)
     (/ (* accel (expt time 2)) 2)))
Note that I just typed that in and may have unbalanced parens (or other
gaffes), but when writing Lisp my editor helps me with parens so much
that I never think about them (and did not after the first month of Lisp).
And one of the biggest ways the editor helps is by /automatically/
indenting expressions based on how I have nested my expressions. So if I
get careless, I hit carriage return and the autoindentation goes
someplace conspicuously whacky. But maybe the bigger win here is during
refactoring, where I can totally trash the indentation of a fat 50-line
function and re-indent the whole thing in one command.
I know VC6 handles auto-indenting nicely, too. I never tried it on a
large region of refactored code, but I imagine it handles that, too. My
point is that "lots of brackets" is not a minus for Lisp. In fact, it is
a plus because my editor also lets me select, copy, cut, and paste any
balanced chunk of code, so I now edit code, not text (if you get my drift).
As for the prefix notation, well, you /are/ used to that!:
finalStep( preCalc1( deeperCalc( x, y), deeperCalc2( y, z)));
Lisp is just being consistent in its syntax, which pays off big time in
higher-order programming (especially procedural macros).
kenny
--
Cells? Cello? Celtik?: http://www.common-lisp.net/project/cells/
Why Lisp? http://alu.cliki.net/RtL%20Highlight%20Film
Phlip <phli...@yahoo.com> wrote:
>> Recently, I researched using C++ for game programming and here is what
>> I found:
>As other industries using C++ - even for highly graphical, rich-content
>physics simulations - report fewer of these problems, the game programming
>culture itself might be to blame.
We have had enough memory management issues (both leaks as well as
corruption by access-after-free/delete) in our company.
They still want us to use C++, w/o gc. :-(
Of course they diminish by using "smart pointers" and similar devices,
but the performance advantages of C++ diminish alongside.
>[...]
>> The static type system locks you into a certain design, and you can't
>> *test* new ideas, when they come to you, without redesigning your
>> whole class hierarchy.
>Then don't use the static type system.
You wouldn't really recommend that C/C++ programmers cast everything to
void*?
>[...]
>> C++ programs can not "think" of new code at run-time, and plug that
>> new code into themselves in compiled form. Not easily, anyway.
>So, uh, use the scripting layer?
Which is usually less efficient than calling (funcall (compile nil ...))
in Lisp (or even storing the result of (compile nil ...) into some
data structure and using it over and over).
>[...]
Kind regards,
Hannah.
Yes, one team wrote parts of some of their games in Lisp, and that one
example has been trumpeted to the rooftops ever since. Read their
'post-mortem' on Gamasutra and you'll find that the benefits were far
more ambiguous than you would think from listening to the Lisp boosters. To
put it briefly, it featured strongly in both the "what went right" and
"what went wrong" sections.
> what is it about Lisp,
> http://alu.cliki.net/RtL%20Highlight%20Film
That's more an example of what I'm saying than a refutation of it. Lots
of gushing about Lisp, but hardly a mention of any shipped projects...
- Gerry Quinn
Mark A. Gibbs <x_gib...@rogers.com_x> wrote:
>c++ references are non-owning references ("weak" references if you
>will). therefore, no management required, or even really logical.
>you are correct when you say that you do have to be concerned that the
>underlying data does not go out of scope before the reference. however,
>in practical code, all references are always valid. unless you perform
>some acrobatics (ie, bad code design) or are dealing with threading
>issues, references will naturally go out of scope before the "real"
>object. there is no "management". the only things you do to references
>is make them and use them. you can't copy them around really.
Or you're just making a typo-like mistake, as in

#include <memory>
#include <string>

class A {
public:
    explicit A(const std::string& x) : x(x) {}
protected:
    const std::string& x;
    // ^ oops, intention was the same w/o the &
};

A* make_a()
{
    return (new A("foo"));
    // oops, the temporary std::string gets destroyed before we return,
    // so the A object contains a dangling reference
}

int main()
{
    std::auto_ptr<A> x(make_a());
    // do something with x
}
Of course it was just a typo. Just a typo that can be horrendous to
debug.
>[...]
>> but the fact remains that passing by
>> reference/pointer invloves more thinking about memory management than
>> passing by value, which cannot possibly cause a memory leak (unless you
>> have something funky in a copy constructor).
>references cannot cause memory leaks.
^ add "alone"
No, but dangling references are easily built.
>[...]
>in good design, memory is freed by the same entity that allocated it.
>the use of pointers or references does not invalidate good design. it
>allows you to if you so choose, and sometimes that's a valid design
>decision. but if your memory management is scattered and out of control,
>that's your fault, not any language's.
So if you need to pass around things, like
    A                          B
    is created                 (doesn't exist yet)
    creates x
                               is created
    ---- pass on x ->
                               now needs the x
    is destroyed
                               lives on
you have to agree on a discipline of managing x, or you copy it over,
so A destroys its copy and B manages its own copy. The latter may not
always be viable, because things might rely on the identity of x.
If the pattern of how x is transferred (or not transferred) varies, you can't
just say "ok, A creates x, B always destroys it".
>[...]
>the smart pointer included with my compiler std::auto_ptr<T> takes up
>exactly the same amout of memory as a bare pointer. copying it costs one
>4 byte stack push, one function call, one 4-byte memory access and one
>4-byte memory write. interleaved on a modern processor, those operations
>(according to my estimate) would cost the same a simple bare-pointer
>copy in practical usage (taking things like return-value optimization
>into account). in other words, there should be no measurable difference.
auto_ptr isn't a smart pointer. It's quite dumbass, not even viable
for containers at all.
>[...]
>besides, with a garbage collected-only language, you can't practically
>implement raii, and i think that's a serious flaw.
You don't need raii then, as most raii is about memory, and the rest
can be handled with good macro facilities (see Common Lisp's
with-open-file for an example) or higher order functions.
>[...]
Kind regards,
Hannah.
Gerry Quinn wrote:
> In article <87k6tf9...@yahoo.com>, neo_l...@yahoo.com says...
>
>>Hey
>>
>>Recently, I researched using C++ for game programming and here is what
>>I found:
>
>
> [snip]
>
> What is it about Lisp that makes novices fall in love with it?
Try it and see. I came to Lisp rather late (age 44, after 17 years of
programming, the prior eight years in C at home and Vax Basic/Cobol in
tall buildings) and it was a revelation. No pointers, no manual memory
management, no syntax, interactive -- basically, all the crap was gone
and nothing but the fun of programming was left.
> They don't actually *produce* anything with it, mind you,...
You are right! This is because Lisp is usually discovered as a hobby
language, since almost no one uses it at work.
I was a kitchen-table developer looking for a better way to develop the
next generation of a line of educational software done originally in C,
so I ended up Actually Using(tm) Lisp. It scales nicely to the real
world, and as you might imagine with anything powerful, the bigger the
task the bigger the payoff.
Then a friend asked me to do a rather huge business app (clinical drug
trial management). The kind big companies spend $100m on. We produced
something vastly better than the current state of the art for $1m. Screen
shots (which give zero idea of the underlying complexity of the problem)
are here (starting after "win32 gui samples"):
http://www.tilton-technology.com/cellophane-precursor.html
> but they insist on
> telling the world how superior it is to every other language.
yeah, but this is as much an act of charity to fellow programmers stuck
dancing to the tune of Java and C/C++ compilers as it is an act of
obnoxiousness. :)
>
> Dude, write a good game in Lisp...
He cannot write a game until I finish my groovy (+ Lisp OpenGL
constraints physics-engine) development system:
http://www.tilton-technology.com/cellophane.html
I developed the Light Panel just to help me figure out what various
OpenGL parameters actually did. 3d rocks! I have now incorporated OpenAL
as well.
Hannah Schroeter wrote:
> Of course they diminish by using "smart pointers" and similar devices,
> >Then don't use the static type system.
>
> You won't really recommend C/C++ programmers to cast everything to
> void*?
Hmmm. Smart pointers and void pointers. Where have I heard of these before,
to give C++ dynamic typing? Hmmm...
> Which is usually less efficient than calling (funcall (compile nil ...))
> in Lisp (or even storing the result of (compile nil ...) into some
> data structure and using it over and over).
All big applications need a scripting layer, an efficient engine layer, and
back-end layers in assembler. There is no /a-priori/ way to guess where the
cutoff between them is, or what language to use in the scripting layer.
<a beat>
;-)
--
Phlip
http://industrialxp.org/community/bin/view/Main/TestFirstUserInterfaces
Maahes <maa...@internode.on.net> wrote:
>[...]
>In this system, you have a set of chunks (blocks of contiguous memory). Game
>objects reside in a particular chunk, and allocations come out of that chunk.
>Frees do nothing, or simply are not used.
>When a game is finished with a chunk (i.e. you exit the current level) you
>throw out the chunk completely and all of its memory is reclaimed at once.
Sounds like what MLkit with Regions does *automatically* (i.e. it
analyzes the code for code regions in which a specific set of objects
is created and after which all of that set is definitely dead).
It does so for SML, which is defined as a language with automatic
memory management (and usually implemented with a GC, but that's not
mandated, and there was a version of the ML kit which relied *only*
on region analysis, i.e. no GC at run time, but no "free" instructions
in the source code either; since there are cases where that's
disadvantageous, they now use a combination of regions and GC IIRC).
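For the curious, here is a minimal C++ sketch of the chunk scheme described
above; the class and type names are invented. Objects are bump-allocated out
of a chunk, per-object frees are no-ops, and the whole chunk is thrown out at
level exit.

#include <cstddef>
#include <cstdlib>
#include <new>

// One "chunk": a block of contiguous memory with a bump-pointer allocator.
class Chunk {
public:
    explicit Chunk(std::size_t size)
        : base_(static_cast<char*>(std::malloc(size))), size_(size), used_(0) {}
    ~Chunk() { std::free(base_); }           // throw out the chunk: one big free

    void* allocate(std::size_t n) {
        n = (n + 7) & ~std::size_t(7);       // keep allocations 8-byte aligned
        if (used_ + n > size_) throw std::bad_alloc();
        void* p = base_ + used_;
        used_ += n;
        return p;
    }
    // Note: no per-object free; objects with non-trivial destructors would
    // have to be destroyed by hand (or simply avoided) in this scheme.

private:
    Chunk(const Chunk&);                     // non-copyable
    Chunk& operator=(const Chunk&);
    char*       base_;
    std::size_t size_;
    std::size_t used_;
};

struct Monster { int hp; float x, y; };      // hypothetical game object

int main()
{
    Chunk level(1024 * 1024);                // the chunk for this level
    Monster* m = new (level.allocate(sizeof(Monster))) Monster();
    m->hp = 100;
    // ... play the level ...
    return 0;
}                                            // chunk destructor reclaims everything at once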
>[...]
Kind regards,
Hannah.
Phlip <phli...@yahoo.com> wrote:
>Hannah Schroeter wrote:
>> Of course they diminish by using "smart pointers" and similar devices,
>> >Then don't use the static type system.
>> You won't really recommend C/C++ programmers to cast everything to
>> void*?
>Hmmm. Smart pointers and void pointers. Where have I heard of these before,
>to give C++ dynamic typing? Hmmm...
Would only work if you "reparent" everything (except most basic
types) to a class Object, so you can have shared_ptr<Object> everywhere.
(with void*, the deletion after the last reference drops might not
call the needed destructors).
Yeah. Cool.
Especially as you then have to build wrappers around any standard
library class you might use "dynamically", kinda like
class String : public std::string, public Object {
};
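A sketch of what that style ends up looking like; Object, String, Number and
the container are invented for illustration, and boost::shared_ptr stands in
for std::tr1::shared_ptr.

#include <cstddef>
#include <iostream>
#include <string>
#include <vector>
#include <boost/shared_ptr.hpp>

struct Object { virtual ~Object() {} };        // the universal base class

// Wrap every library type you want to treat "dynamically":
class String : public std::string, public Object {
public:
    explicit String(const std::string& s) : std::string(s) {}
};

class Number : public Object {
public:
    explicit Number(double v) : value(v) {}
    double value;
};

int main()
{
    std::vector< boost::shared_ptr<Object> > heap;   // "anything" goes in here
    heap.push_back(boost::shared_ptr<Object>(new String("hello")));
    heap.push_back(boost::shared_ptr<Object>(new Number(42.0)));

    for (std::size_t i = 0; i < heap.size(); ++i) {
        if (String* s = dynamic_cast<String*>(heap[i].get()))
            std::cout << "string: " << s->c_str() << '\n';
        else if (Number* n = dynamic_cast<Number*>(heap[i].get()))
            std::cout << "number: " << n->value << '\n';
    }
    // unlike void*, shared_ptr<Object> runs the right destructor when the
    // last reference drops, because ~Object is virtual
    return 0;
}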
>> Which is usually less efficient than calling (funcall (compile nil ...))
>> in Lisp (or even storing the result of (compile nil ...) into some
>> data structure and using it over and over).
>All big applications need a scripting layer, an efficient engine layer, and
>back-end layers in assembler. There is no /a-priori/ way to guess where the
>cutoff between them is, or what language to use in the scripting layer.
But... There are a few choices where the scripting layer could be
just the same as the engine implementation layer.
><a beat>
>;-)
Kind regards,
Hannah.
Gerry Quinn wrote:
>>what is it about Lisp,
>>http://alu.cliki.net/RtL%20Highlight%20Film
>
>
> That's more an example of what I'm saying than a refutation of it. Lots
> of gushing about Lisp, but hardly a mention of any shipped projects...
If you click on the name next to each "gush" you can read the full story
behind it. While I jokingly called them "Sound bites to die for", as one
who made the shift from C to Lisp I can assure you that what look like
gushes are actually simple, sober statements of fact. Lisp is that much
better.
As you can imagine, every author programs in all the usual languages as
well as Lisp. "no projects" is simply a measure of Lisp's current
minuscule (but growing) mindshare. The proof is that successful projects
such as Paul Graham's on-line store software got translated from Lisp to
(Java? C++? I forget) after Yahoo acquired it. I just learned another
successful product developed in Lisp got translated to java, probably to
be more marketable (I have not heard why, just that the java version is
/less/ capable than the Lisp version.)
> I don't know what Tim Paterson used when he wrote the original QDOS in
> 6 weeks in 1980, but I very much doubt that it was C, which I think
> was still a rather experimental thing used by the strange cult of
> unixists at that time.

That's actually open to some controversy. I don't know whether it's
really accurate or not, but according to some of the people at
Digital Research, Tim didn't really write much new at all: he just
ported CP/M to the x86. A fair amount of CP/M was written in PL/M (a
PL/I variant) and source was (they claim) routinely distributed to
system integrators and such. There was a PL/M86 compiler (though I'm
not sure exactly when it became available), so a simple re-compile at
least sounds like a semi-plausible possibility. I never saw the source
code to it myself, so I can't really comment on how portable it was or
how much rewriting a port would have entailed.
If true, this would mean that the first versions of MS-DOS were
written in PL/M86, but it's only fair to point out that the people
making the claims had a fairly clear interest in those claims being
believed.
In any case, I would be exceptionally surprised if those versions of
MS-DOS were written in C++ or even C.
--
Later,
Jerry.
The universe is a figment of its own imagination.
> Most GC algorithms can (in theory, at least) provide an upper bound on
> how much time the GC will take.
Sure. It's easier to spot how often the GC's going to be run with
refcounting though.
> A GC that is specifically tuned
> to the problem area (allocate known-to-live-short things from a
> separate arena and at a suitable time, scrap the whole arena) can be
> very *very* fast, but requires a problem set that is well-suited for
> that sort of thing (any per-frame-allocated data would be a good
> starting point, from a game programming POV).
>
Right, this is the kind of thing I had in mind in one of my other posts.
I'm also fond of the region inference stuff mentioned in another post,
would like to play around with building something similar sometime.
> Alas, I don't have time to write games right at the moment, I'm busy
> writing traffic analysis tools in Common Lisp. I'll be crunching,
> aggregating and analysing somewhere around 1.5-7.5 GB per hour, is the
> thought. Not quite there, but the latest optimization cut the run-time
> by a factor of about 4.4. I'm almost at a point where I can do 1.5 GB/h
> in real-time.
>
Fun. Remind me to mention my latest hacks to you on #afp sometime (though
I'll be AFK most of tonight).
you couldn't write an OS for the 8086 in C before atrocities like near,
far, huge, etc were invented, and that came later
hs
> Or you're just doing a typo-like mistake, as in
>
> class A {
> public:
> explicit A(const std::string& x) : x(x) {}
>
> protected:
> const std::string& x;
> // ^ oops, intention was the same w/o the &
> };
>
> A* make_a()
> {
> return (new A("foo"));
> // oops, temporary std::string gets destroyed before we return,
> // the A object contains a dangling reference
> }
>
> int main()
> {
> std::auto_ptr<A> x(make_a());
> // do something with x
> }
>
> Of course it was just a typo. Just a typo that can be horrendous to
> debug.
you could just as easily create nightmare typo scenarios in any
language, including plain english. that example is plainly a careless
mistake. no one and no language can promise that some idiot won't come
along and do something stupid to muck everything up.
incidentally, make_a() should probably return an auto_ptr<A>.
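For completeness, a sketch of that corrected version: store the string by
value and have make_a() return the auto_ptr directly, so there is no dangling
reference and ownership is explicit (in modern C++ one would use
std::unique_ptr instead).

#include <memory>
#include <string>

class A {
public:
    explicit A(const std::string& x) : x(x) {}   // copies into the member

protected:
    std::string x;       // stored by value: no dangling reference possible
};

std::auto_ptr<A> make_a()
{
    return std::auto_ptr<A>(new A("foo"));       // ownership goes to the caller
}

int main()
{
    std::auto_ptr<A> x = make_a();
    // do something with x; it is deleted automatically when x goes out of scope
    return 0;
}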
>>references cannot cause memory leaks.
>
> ^ add "alone"
>
> No, but dangling references are easily built.
oh yes, but by following some simple guidelines they can be just as
easily avoided. i honestly cannot remember the last time i had a
dangling reference crop up in my code.
>>in good design, memory is freed by the same entity that allocated it.
>>the use of pointers or references does not invalidate good design. it
>>allows you to if you so choose, and sometimes that's a valid design
>>decision. but if your memory management is scattered and out of control,
>>that's your fault, not any language's.
>
>
> So if you need to pass around things, like
>
> A                                   B
> is created                          (doesn't exist yet)
> creates x
>                                     is created
>    ------------ pass on x ------->
>                                     now needs the x
> is destroyed
>                                     lives on
>
> you have to agree on a discipline of managing x, or you copy it over,
> so A destroys its copy and B manages its own copy. The latter may not
> always be viable, because things might rely on the identity of x.
i said good design. this is horrendous design.
to put it in concrete terms:
Oven                                     Furnace
is built                                 (doesn't exist yet)
contains its own thermostat
                                         is created
   ----- give thermostat to furnace --->
                                         now needs the thermostat
is destroyed
                                         lives on
                                         (with the oven's thermostat)
does that make logical sense?
furnace would be free to put the thermostat into any kind of state that
may be invalid within the context of oven, or vice-versa (you set the
oven to 275° and your house ignites).
but let's say for argument's sake that you did have to have a shared
object and that you could not be sure of its lifetime.
std::tr1::shared_ptr (aka boost::shared_ptr).
> If the pattern of how x is transferred (or not) varies from case to case,
> you can't just say "ok, A creates x, B destroys it always".
again, shared_ptr. but c++ is more suited to a more deterministic
problem domain. vaguely allocating and tossing memory around with no
concept of ownership or responsibility isn't a good idea in tighter
architectures, or even in more expansive architectures where you want
close control of resources. if memory is cheap and plentiful, then sure,
but then c++ may not be the best tool for the job.
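A sketch of the shared_ptr route for the earlier A/B/x scenario (names
invented; boost::shared_ptr is shown, std::tr1::shared_ptr is equivalent):
the object is deleted when the last owner lets go, so neither side needs a
hand-written hand-off protocol, and the identity of x is preserved.

#include <iostream>
#include <boost/shared_ptr.hpp>

struct X { int value; };

struct B {
    explicit B(const boost::shared_ptr<X>& x) : x_(x) {}   // B co-owns x
    boost::shared_ptr<X> x_;
};

int main()
{
    boost::shared_ptr<X> x(new X);     // "A" creates x
    x->value = 42;

    B b(x);                            // pass it on: use count is now 2
    x.reset();                         // "A is destroyed": use count drops to 1

    std::cout << b.x_->value << '\n';  // B lives on; x is still valid
    return 0;
}                                      // B dies, last reference drops, X deleted once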
> auto_ptr isn't a smart pointer. It's quite dumbass, not even viable
> for containers at all.
of course auto_ptr is a smart pointer, it's just a smart pointer with a
very specific problem domain. it's specifically for the purpose of
transfer of ownership (as i mention above, c++ tends towards more
deterministic programs).
"smart pointer" is a family of solutions, not a specific solution.
boost::scoped_ptr has an even narrower problem domain, but i still find
it occasionally useful - although auto_ptr does the same job and more at
about the same cost.
>>besides, with a garbage collected-only language, you can't practically
>>implement raii, and i think that's a serious flaw.
>
>
> You don't need raii then, as most raii is about memory, and the rest
> can be handled with good macro facilities (see Common Lisp's
> with-open-file for an example) or higher order functions.
memory is only a very, very small part of raii. the first "r" is
resource - and resource can be anything at all. in fact, in my code, i
probably use it most often for synchronization.
no matter what happens, i can be assured that all resources will be
properly released. that is, for any possible code path, including those
that i cannot predict, and including all exceptional code paths (short
of an absolute emergency (ie. crash)), the resource will be properly
cleaned up. and i can know *exactly* when that will happen.
that is not trivial to do, no matter what macros or other tricks you
use. i don't know lisp that well, but i have a hard time seeing this
pattern being easily implemented in any garbage collected environment.
the only way you can do it in java is if you use try-finally blocks
everywhere, and that's just grotesque.
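To make the synchronization example concrete, here is a minimal sketch of
such a guard over a POSIX mutex; the class ScopedLock and the globals are
invented for illustration. Every path out of update(), including the thrown
exception, releases the lock at a known point.

#include <pthread.h>
#include <stdexcept>

// RAII guard: acquire in the constructor, release in the destructor, so
// every exit path (return, exception, fall-through) unlocks the mutex.
class ScopedLock {
public:
    explicit ScopedLock(pthread_mutex_t& m) : m_(m) { pthread_mutex_lock(&m_); }
    ~ScopedLock() { pthread_mutex_unlock(&m_); }
private:
    ScopedLock(const ScopedLock&);               // non-copyable
    ScopedLock& operator=(const ScopedLock&);
    pthread_mutex_t& m_;
};

pthread_mutex_t g_mutex = PTHREAD_MUTEX_INITIALIZER;
int g_shared = 0;

void update(int value)
{
    ScopedLock lock(g_mutex);                    // lock acquired here
    if (value < 0)
        throw std::invalid_argument("negative"); // unlock still happens
    g_shared = value;
}                                                // unlock happens *exactly* here

int main()
{
    update(42);
    try { update(-1); } catch (const std::invalid_argument&) {}
    return 0;
}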
indi
The reason you have a hard time seeing how this pattern can be
done trivially in Lisp is that you do not know anything about Lisp.
RAII is a hack. Let me say that again: RAII is a hack; it is based
mostly on the semantics of local variables in C++, which are
dynamic-extent. If you want dynamic-extent resources, then use an
operator which grants you the ability to do so. And if you store the
resource object in a more permanent location, then what guarantee do
you have of when or where in the code it will be deallocated?
In Common Lisp, the operator that can be used to de-allocate
dynamic-extent resources is called UNWIND-PROTECT. It guarantees that
a certain block of code will be executed (for side-effect only) when
the stack unwinds past that point for any reason.
It might be used as such:
(let ((stream (open "file")))
  (unwind-protect (do-something stream)
    (close stream)))
but this is not quite correct, due to the possibility of interrupt.
Better yet is:
(let ((stream nil))
  (unwind-protect
      (progn (setf stream (open "file"))
             (do-something stream))
    (when stream (close stream))))
And perhaps even more improvements can be applied. But all this DOES
NOT MATTER TO THE TYPICAL PROGRAMMER. Why? Because of the Lisp macro!
The typical programmer writes this, and does not worry about the details:
(with-open-file (stream "file")
  (do-something stream))
Done. Resource acquired, resource deallocated. No extra clutter, no
fancy finalizers, no scattering of unwind-protects through the code.
You know EXACTLY when the resource is acquired and when it is
returned. Common Lisp libraries always provide this kind of interface
to the user for resources.
In addition, if you don't like macros, you can still do it with just a
higher-order function:
(call-with-open-file "file" (lambda (stream) (do-something stream)))
(Though this isn't standard, it is easy to write.)
RAII is a hack based on the semantics of variables in C++ and the
memory management scheme. You should not need these things in order
to have dynamic-extent resources.
--
;; Matthew Danish -- user: mrd domain: cmu.edu
;; OpenPGP public key: C24B6010 on keyring.debian.org
> But I think it may be too recursive & bottom-up programming for most brains
> to want to deal with...
Your code may be, but my Lisp code isn't.
Bottom-up is useful when you're trying to build a system that has
certain behaviors and then gradually link up those behaviors into a
program as you figure out what the program should do. E.g., first build
the physics engine and then build the game logic once the game designers
have figured out how the gameplay should be.
--
Rahul Jain
rj...@nyct.net
Professional Software Developer, Amateur Quantum Mechanicist
> In a C or C++ program, when you declare a local variable, what you get
> is called a "lexically scoped, dynamic extent" variable.
> In a Common Lisp, Scheme, Smalltalk, Ruby, ML, or Haskell (among
> others) program, when you declare a local variable, what you get is a
> "lexically scoped, indefinite extent" variable.
Careful, remember that variables have scope, but not extent. Objects
have extent. Just nitpicking on the terminology because the concepts are
already conflated in the average C/C++ programmer's mind.
> no matter what happens, i can be assured that all resources will be
> properly released. that is, for any possible code path, including those
> that i cannot predict, and including all exceptional code paths (short
> of an absolute emergency (ie. crash)),
In that case, it's the OS's job to clean up the resources your process
had allocated, IMHO.
> the resource will be properly
> cleaned up. and i can know *exactly* when that will happen.
>
> that is not trivial to do, no matter what macros or other tricks you
> use. i don't know lisp that well, but i have a hard time seeing this
> pattern being easily implemented in any garbage collected environment.
> the only way you can do it in java is if you use try-finally blocks
> everywhere, and that's just grotesque.
You use unwind-protect, which is similar to Java's try-finally, but you
wrap it in a macro that allocates the external resource and deallocates
it when the stack is unwound, placing the body of your macro's
invocation inside the body of the unwind-protect. For example,
with-open-file takes effectively 3 arguments: the variable to be bound
to the stream of the newly opened file, the parameters that will be
passed to the open function, and the body of code that uses that file's
stream. A very common pattern (which is what macros are all about,
automating and abbreviating mechanistic patterns).
> All big applications need a scripting layer, an efficient engine layer, and
> back-end layers in assembler. There is no /a-priori/ way to guess where the
> cutoff between them is, or what language to use in the scripting layer.
That is a perfect argument for using Lisp, since your scripting layer is
simply the use of the operators defined in the engine layer. And, of
course, you can use your Lisp implementation's way of doing assembly
routines (using lisp macros :) to micro-optimize the really
performance-sensitive bits.
> many of these issues may exist, but are not that big of a deal really, and
> there is the problem that higher-level languages often have a poor c
> interface and typically need interface stuff to be written to access c code
> (and thus most system api's, along with parts of the project likely being
> written in c).
Good thing he wasn't talking about the higher-level languages you often
see, then. :)
> Neo-LISPer wrote:
>
>> Alternatively, you can't execute a small portion of the program
>> without compiling and linking the whole thing, then bringing your game
>> into a specific state where your portion of the code is being executed.
>
> Yes I can. I can write a module that runs one or two functions from the
> project as the whole program.
And where does the state of the game come from in that module?
> reference counting is severely flawed as GC systems go - the only real
> advantage being the predictable performance
Predictably slow, maybe...
> Sure. It's easier to spot how often the GC's going to be run with
> refcounting though.
GC will potentially be done with refcounting when you decrement a
refcount. GC will potentially be done with good GC algorithms when you
allocate an object. How is one easier to spot than the other?
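A toy illustration of that timing difference (the Handle and Texture types
are invented): with refcounting the deallocation is triggered by a decrement,
i.e. at a closing brace you can point to; with a tracing GC it is triggered
by some later allocation.

#include <iostream>

struct Refcounted {
    Refcounted() : refs(0) {}
    virtual ~Refcounted() {}
    int refs;
};

template <class T>
class Handle {                        // toy intrusive reference-counting handle
public:
    explicit Handle(T* p) : p_(p) { ++p_->refs; }
    Handle(const Handle& other) : p_(other.p_) { ++p_->refs; }
    ~Handle() {
        if (--p_->refs == 0) {        // the decrement is where "GC" happens
            std::cout << "freed right here, at the decrement\n";
            delete p_;
        }
    }
    T* operator->() const { return p_; }
private:
    Handle& operator=(const Handle&); // omitted for brevity
    T* p_;
};

struct Texture : Refcounted { int id; };

int main()
{
    Handle<Texture> a(new Texture);
    {
        Handle<Texture> b(a);         // reference count is 2
    }                                 // count drops to 1: nothing freed yet
    return 0;
}                                     // count drops to 0: freed at this exact point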
> if you change the *interface*, then yes, you have to recompile
> everything. i don't know lisp, but i cannot believe that the same is not
> true there.
The details of the interface can be determined by the compiler instead
of the programmer having to insert the details everywhere and then
change those details everywhere when they change on one end of the
system. The plethora of trivially different implementations of many
methods is a symptom of this problem. Sometimes templates help, but
sometimes they're just plain unnecessary baggage for the system to deal
with, because they require re-analysis of all the functions that might
be invoked for this one specific case.
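One concrete face of those trivially different implementations, as a small
example (the Buffer class is invented): the const and non-const accessors
have character-for-character identical bodies, and both must be edited
whenever the shared detail, here the bounds check, changes.

#include <cstddef>
#include <stdexcept>

class Buffer {
public:
    explicit Buffer(std::size_t n) : size_(n), data_(new int[n]()) {}
    ~Buffer() { delete[] data_; }

    int& at(std::size_t i) {
        if (i >= size_) throw std::out_of_range("Buffer::at");
        return data_[i];
    }
    const int& at(std::size_t i) const {
        if (i >= size_) throw std::out_of_range("Buffer::at");
        return data_[i];
    }

private:
    Buffer(const Buffer&);            // non-copyable for brevity
    Buffer& operator=(const Buffer&);
    std::size_t size_;
    int*        data_;
};

int main()
{
    Buffer b(4);
    b.at(2) = 7;                      // non-const overload
    const Buffer& cb = b;
    return cb.at(2) == 7 ? 0 : 1;     // const overload
}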
with compiled languages there is a lot of variation in ffi quality.
Bindings have extent. But I didn't want to introduce yet another
piece of terminology.
> Neo-LISPer wrote:
>
>> The static type system locks you into a certain design, and you can't
>> *test* new ideas, when they come to you, without redesigning your
>> whole class hierarchy.
>
> I can't comment on this one way or the other, as I only have a few
> months experience with Lisp. My guess is that the implicit interface
> will prove similarly difficult to modify with Lisp as the static type
> system does with C++.
>
> Any eXtreme Programming Lispers or C++-ers care to comment?
His point was not about the sum total of work needed. He specifically
emphasized "*test*". If you want to change something in your design in
C++, you need to propagate that change everywhere. In Lisp, you just
change the code and load it and play around with the bits you've
changed. When you want to play around with other parts, you start
changing them. If it looks like this is a bad idea, you can just revert
back to what's in source control and figure out a different way.
>> I don't know what Tim Paterson used when he wrote the original QDOS in
>> 6 weeks in 1980, but I very much doubt that it was C, which I think
>> was still a rather experimental thing used by the strange cult of
>> unixists at that time.
>
> That's actually open to some controversy. I don't know whether it's
> really accurate or not, but according to some of the people at
> Digital Research, Tim didn't really write much new at all: he just
> ported CP/M to the x86. A fair amount of CP/M was written in PL/M (a
> PL/I variant) and source was (they claim) routinely distributed to
> system integrators and such.
There are lots of versions of this story floating around, but either
way there's a lot of truth to the joke that Windows 95 was
really CP/M 95 ;-)
> In any case, I would be exceptionally surprised if those versions of
> MS-DOS were written in C++ or even C.
No doubt. The first time I even heard of C might have been as late
as in 1984(*), a friend of mine told me he had discovered this cool new
programming language C that was derived from BCPL :-) (1984 also
was the year of the first Macintosh, which had Pascal-centric
libraries).
(*) probably partly because I had just briefly heard of unix at that time,
I was used to TOPS-10/20 and VMS.
--
(espen)
> No doubt. The first time I even heard of C might have been as late
> as in 1984(*), a friend of mine told me he had discovered this cool new
> programming language C that was derived from BCPL :-) (1984 also
> was the year of the first Macintosh, which had Pascal-centric
> libraries).
>
> (*) probably partly because I had just briefly heard of unix at that time,
> I was used to TOPS-10/20 and VMS.
Indeed, C is much older and was primarily developed to get a fast higher-level
language for the development of Unix. :)
--
To get my real email adress, remove the two onkas
--
Hendrik Belitz
- Abort, Retry, Fthagn? -