Yes.
> Those "unrecoverable errors" are probably the things that
> assertion checking weeds out during development time rather than being an
> application for exception machinery. "Unrecoverable error": throw up a
> dialog for the developer and exit (or crash to keep the call stack in view
> in the debugger). Exceptions, are more appropriately used where there is a
> likelihood of recovering from the error condition rather than being just for
> "unrecoverable errors".
C++ unfortunately lacks any concept of exceptions that won't be caught by an ordinary
catch(...). Therefore, in current C++ you should preferentially not use
exceptions for unrecoverable failures, not to speak of unrecoverable errors!
Just log and terminate.
> Further discussion of the application of exceptions
> is appreciated. Keep in mind that while there are SOME general vague
> "principles" of the application of exceptions, mostly the scenarios need to
> be defined for fruitful discussion. Please keep the examples simple but
> realistic and note the assumptions you make.
Hm, I failed to see the question in there, sorry.
Cheers & hth.,
- Alf
Why?
I thought only the code in the catch blocks has to be exception safe.
Might be a new exception misconception ;)
[snip]
>> Further discussion of the application of exceptions is
>> appreciated.
>
> I prefer error results to exceptions. So all I would write
> about exceptions would start with »If I was forced to use a
> language that would force me to use exceptions ...«
>
> The applications of error results is as follows:
>
This should have probably been "switch" instead of "if" :
> if( attempt( to_do_something ))
> { case 0: /* ok, continue */ ... break;
> case 1: /* oops, handle */ ... break;
> ... }
>
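The correction the poster is pointing at, as a compilable sketch. The `attempt` function and the meaning of the case values are invented for illustration; the original pseudocode only fixed "if" to "switch":

```cpp
#include <cassert>
#include <cstring>

// Hypothetical operation: returns 0 on success, 1 on a handleable error.
int attempt(bool fail) { return fail ? 1 : 0; }

// Returns a short description of how the result was handled.
const char* handle(bool fail)
{
    switch (attempt(fail)) {   // "switch", not "if", as the correction says
    case 0:  return "ok";      // ok, continue
    case 1:  return "handled"; // oops, handle
    default: return "unknown";
    }
}
```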
Back to return value vs exceptions.
1)
What happens if you can not handle the error in this method?
Or in the method that called this method?
Or in the method that called the method that called the method where the
error happened?
etc
2) How do you handle errors in constructors?
3) Simple example : how would you create a method (or a function) that
reads a value (some random integer) from a file and returns that value?
The above is false. Exception-safe code is needed to write code
that avoids resource leaks in the face of an exception.
For instance:
{
char *p = new char[256];
f();
}
If f throws an exception, this statement block is abandoned, and the
allocated memory is leaked. Of course other kinds of resources can be
leaked, like handles to operating system resources such as open files.
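As a hedged aside on the snippet above: giving the buffer an owner with a destructor avoids the leak without touching f() at all. Everything here (the throwing f(), the 256-byte size) is a stand-in for the quoted fragment:

```cpp
#include <cassert>
#include <stdexcept>
#include <vector>

void f() { throw std::runtime_error("boom"); } // stand-in for the quoted f()

void with_raii()
{
    // The buffer is owned by an object with a destructor, so it is freed
    // during stack unwinding even though f() throws.
    std::vector<char> buffer(256);
    buffer[0] = '\0';
    f();
}
```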
Leaking a resource is not automatically incorrect. Firstly, it does not
violate any C++ language rule. Allocating memory and losing the pointer
does not trigger undefined behavior. Whether it is an issue or not
depends on the precise situation in which it occurs. Many C++ programs
leak resources by design simply because they lose references to objects
when terminating by returning via main. This doesn't matter if the
operating system cleans up the resources after a program that
terminated. Minor resource leaks in short-lived programs are often not a
problem.
Leaking a resource is not necessarily a security problem, either. Firstly,
there isn't any way it can grant an attacker the ability to execute
arbitrary code. At worst, a leak can be exploited to launch a
denial-of-service attack. To exploit a leak which occurs during an
exception, the attacker has to figure out how to get the program to
throw that exception, and moreover, to do it over and over again to
exhaust the available resources. If the exception is fatal, it doesn't
matter; the attacker gets to tickle it once, and the program is gone.
The exception has to be used as an indication of some non-fatal situation
that the attacker can create at will. E.g. suppose that a password
entry prompt uses an exception to indicate that the password was not
accepted. The attacker then simply has to keep trying incorrect
passwords to repeat the leak.
> Back to return value vs exceptions.
>
> 1)
> What happens if you can not handle the error in this method?
> Or in the method that called this method?
> Or in the method that called the method that called the method where the
> error happened?
> etc
Then you have to keep returning an indication to the caller. The easiest
way to do this is to consistently adhere to a single convention for
error indication throughout the entire program.
A good example of this technique is the Linux kernel.
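The kernel convention mentioned here (a non-negative result on success, a negative errno value on failure, tested and propagated unchanged at every level) can be sketched as follows; `read_device` and `read_twice` are invented names, not kernel APIs:

```cpp
#include <cassert>
#include <cerrno>

// Hypothetical device read: returns byte count >= 0 on success,
// or a negative errno value on failure (the Linux kernel convention).
int read_device(bool ready, char* buf, int len)
{
    if (!ready) return -EAGAIN;   // caller must test and propagate
    for (int i = 0; i < len; ++i) buf[i] = 'x';
    return len;
}

// Callers propagate the same convention upward unchanged.
int read_twice(bool ready, char* buf, int len)
{
    int n = read_device(ready, buf, len);
    if (n < 0) return n;          // pass the error to our caller
    return read_device(ready, buf, len);
}
```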
> 2) How do you handle errors in constructors?
You set a ``successfully constructed'' flag in the object which is
tested after construction, through some public member function.
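A minimal sketch of that flag technique; the `File` class and its single "did the open succeed" parameter are invented for illustration:

```cpp
#include <cassert>

// Two-phase construction sketch: the constructor records success in a flag
// instead of throwing; callers must test is_ok() before using the object.
class File
{
public:
    explicit File(bool openable) : ok_(openable) {} // hypothetical open attempt
    bool is_ok() const { return ok_; }              // the public test
private:
    bool ok_;
};
```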
> 3) Simple example : how would you create a method (or a function) that
> reads a value (some random integer) from a file and returns that value?
There are a few ways. One is to store the integer via a pointer or
reference parameter and return the success/error indication.
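A sketch of that first way, assuming a C++ istream as the file source; `read_int` is an invented name:

```cpp
#include <cassert>
#include <sstream>

// Success/error via the return value; the parsed integer comes back
// through a reference parameter.
bool read_int(std::istream& in, int& out)
{
    return static_cast<bool>(in >> out);
}
```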
Someone in c.l.c++.m said something like what is in quotes above (pretty
much exactly). I got the thus-far-presented misconceptions from that NG.
That there were no question marks in the entire OP means there were no
questions in it. For your reference, this is what a question mark looks
like: ?
;)
>Yes.
>> Those "unrecoverable errors" are probably the things that
>> assertion checking weeds out during development time rather than being an
>> application for exception machinery. "Unrecoverable error": throw up a
>> dialog for the developer and exit (or crash to keep the call stack in view
>> in the debugger). Exceptions, are more appropriately used where there is a
>> likelihood of recovering from the error condition rather than being just for
>> "unrecoverable errors".
>
>C++ unfortunately lacks any concept of exceptions that won't be caught by an ordinary
>catch(...). Therefore, in current C++ you should preferentially not use
>exceptions for unrecoverable failures, not to speak of unrecoverable errors!
>Just log and terminate.
In Java, there is a separation between exceptions and errors.
Exceptions could be used on a very fine-grained level, such
as numeric string parsing, and in case there is a non-digit
present, you simply catch the exception and replace the
result with a default value. Works like a champ.
In fact, you MUST use a try/catch block to do these conversions.
Otherwise, your code won't even compile.
With errors, you are dead. You cannot even recover from them.
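For comparison, the same catch-and-substitute pattern in C++ with std::stoi (C++11); `parse_or_default` is an invented name. One hedge on the Java side: NumberFormatException is an unchecked exception, so the Java compiler does not actually force the try/catch, though the pattern is idiomatic either way.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Parse a decimal string, substituting a default on failure -- the
// catch-and-replace pattern described above, using std::stoi.
int parse_or_default(const std::string& s, int def)
{
    try {
        return std::stoi(s);
    } catch (const std::invalid_argument&) {
        return def;               // non-digit input
    } catch (const std::out_of_range&) {
        return def;               // value does not fit in int
    }
}
```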
>> Further discussion of the application of exceptions
>> is appreciated. Keep in mind that while there are SOME general vague
>> "principles" of the application of exceptions, mostly the scenarios need to
>> be defined for fruitful discussion. Please keep the examples simple but
>> realistic and note the assumptions you make.
>Hm, I failed to see the question in there, sorry.
Some people just feel they need to dominate, like THEIR word
is "the word of God". They tend to fail to comprehend that
THEIR view is just that, THEIR view. Nothing more than that.
>Cheers & hth.,
>- Alf
--
Programmer's Goldmine collections:
Tens of thousands of code examples and expert discussions on
C++, MFC, VC, ATL, STL, templates, Java, Python, Javascript,
organized by major topics of language, tools, methods, techniques.
Impressive.
>;)
Correct.
Almost everything you do will generate some kind of exception,
at least in modern languages, and it is good it does.
> but
> only a minority of C++ programmers does so or even is aware
> of it.
>
>>Those "unrecoverable errors" are probably the things that
>>assertion checking weeds out during development time rather
>>than being an application for exception machinery.
>
> These are distinct concepts for me: An unrecoverable error
> can occur when a program needs 10 Bytes of allocated
> storage, but this storage is not available. The program can
> not generate more storage, so the error is not recoverable,
> but it also can not be weeded out during development time.
>
>>Further discussion of the application of exceptions is
>>appreciated.
>
> I prefer error results to exceptions. So all I would write
> about exceptions would start with »If I was forced to use a
> language that would force me to use exceptions ...«
>
> The applications of error results is as follows:
>
>if( attempt( to_do_something ))
>{ case 0: /* ok, continue */ ... break;
> case 1: /* oops, handle */ ... break;
> ... }
Well, there are two styles of coding.
One is: as soon as you can return the correct result,
return it.
If you ever get to code that was past the end of all
your good returns, everything you are dealing with
from then on is errors.
The second style is just the opposite:
Handle all errors first. When you have handled ALL of them,
and I mean ALL, then you are home free.
These two approaches define your logic eventually.
It is quite a different logic.
With the 2nd approach, you tend to get more robust code
because it forces you to think about all sorts of problems
you may not pay attention to otherwise.
The code in your catch block is not necessarily exception safe.
It all depends what are you going to do to recover from the
original exception. You MAY get yet another exception when
trying to recover from the original one.
In some places in my code, I have nested exceptions.
Actually in quite a few MOST important places.
That is why this thing runs like a tank.
You can just turn the power switch on in the middle of the
most important and critical and time consuming operation.
>[snip]
>>> Further discussion of the application of exceptions is
>>> appreciated.
>>
>> I prefer error results to exceptions. So all I would write
>> about exceptions would start with »If I was forced to use a
>> language that would force me to use exceptions ...«
>>
>> The applications of error results is as follows:
>>
>
>This should have probably been "switch" instead of "if" :
>
>> if( attempt( to_do_something ))
>> { case 0: /* ok, continue */ ... break;
>> case 1: /* oops, handle */ ... break;
>> ... }
>Back to return value vs exceptions.
>
>1)
>What happens if you can not handle the error in this method?
Correct, and it is not such an uncommon thing.
Because, first of all, the error is local.
Your routine does not necessarily know what is the most
logical thing to do on the HIGHER levels.
So, what is the solution then?
If you just return an error code, then the higher level
code has to examine it and do different things, which simply
translates into additional and unnecessary complications in
your program logic, a "spaghetti effect".
You may have a stack 5 levels deep, and on EACH level, you
have to test every single return code. Else your program
is incorrect.
So, you tend to forever deal with small and insignificant
things in the scheme of things, all of which, regardless
of how many levels deep on the stack it happened, could
be handled in bulk, with a single exception at the
appropriate higher level, that gives you sufficient
granularity to make a LOGICAL decision on what can be
done to recover the operation as such, and not some low
level funk, which is nothing more than logic noise.
I am trying to use as little of logic as possible.
My slogan is: take it easy on your logic,
or you are bound to end up with grand headache,
no matter from what standpoint you look at it.
It will simply make your code more difficult to read.
How many decisions does your mind have to make, if it
forever worries about whether some return code was
handled correctly, or was handled in the right place,
or what kinds of problems it might cause to the higher
level code?
Program logic = load on the mind.
>Or in the method that called this method?
>Or in the method that called the method that called the method where the
>error happened?
>etc
>
>2) How do you handle errors in constructors?
You don't. Because you can not construct the object.
Your program is incorrect, and I doubt you can recover
from this kind of thing in principle in most cases.
>3) Simple example : how would you create a method (or a function) that
>reads a value (some random integer) from a file and returns that value?
--
Maybe, maybe not. It probably depends on what whoever is
saying it means by "unrecoverable". (But I've never heard
anyone say it, so I don't know the context.)
> Whereas the exception machinery in C++ was developed primarily
> to handle and RECOVER from more errors more elegantly than was
> possible without exceptions, the statement is highly suspect.
It depends on the error. At the lowest level, where a simple
retry suffices, exceptions complicate error handling.
> Those "unrecoverable errors" are probably the things that
> assertion checking weeds out during development time rather
> than being an application for exception machinery.
If that's what is meant by "unrecoverable", then exceptions are
certainly not for unrecoverable errors. In such cases, it's
important (in most applications---there are exceptions) to kill
the process as soon as possible, without any stack walkback.
> "Unrecoverable error": throw up a dialog for the developer and
> exit (or crash to keep the call stack in view in the
> debugger).
Exactly. And that's not what exceptions do.
> Exceptions, are more appropriately used where there is a
> likelihood of recovering from the error condition rather than
> being just for "unrecoverable errors".
Again, it depends on what you mean by "recovering". If you just
have to try again (e.g. user input error), then a simple while
loop is a lot simpler than an exception. Exceptions are useful
when you can't really do anything about the error locally.
Which often means that "recovery" is more or less impossible,
since once you've left "locally", you've lost all of the
necessary context to retry.
But maybe you mean something different by "recovering"?
> Further discussion of the application of exceptions is
> appreciated. Keep in mind that while there are SOME general
> vague "principles" of the application of exceptions, mostly
> the scenarios need to be defined for fruitful discussion.
> Please keep the examples simple but realistic and note the
> assumptions you make.
Concrete example:
//! \pre
//! No current output set up...
void
SomeClass::setupOutput()
{
assert(! myOutput.is_open());
std::string filename = getFilename(); // Dialog with user...
myOutput.open(filename.c_str());
while (! myOutput.is_open()) {
reportErrorToUser();
filename = getFilename();
myOutput.open(filename.c_str());
}
}
Use of exceptions here would only make the code more
complicated. (Try writing it in Java sometime:-). Where
failure to open a file is reported via an exception.)
--
James Kanze
> More elegantly? Actually, for correct and secure C++ code,
> all functions need to be written to be »exception safe«, but
> only a minority of C++ programmers does so or even is aware
> of it.
Just a nit, but the same holds true in any language which
supports exceptions. All code must be written with the fact
that some (or all, in the case of languages like Java) functions
can return via an exception, rather than the normal path.
> >Those "unrecoverable errors" are probably the things that
> >assertion checking weeds out during development time rather
> >than being an application for exception machinery.
> These are distinct concepts for me: An unrecoverable error
> can occur when a program needs 10 Bytes of allocated
> storage, but this storage is not available. The program can
> not generate more storage, so the error is not recoverable,
> but it also can not be weeded out during development time.
That's a particular definition of "unrecoverable". Is it the
one the original poster meant? I don't know.
Also, you know as well as I do that your example is simplified,
and only speaks of the general case. There are cases where you
attempt to allocate a large amount of memory, but can use a fall
back strategy in case of failure. I'd certainly be very upset
if, on trying to output data to a (non-essential) log, I got a
bad_alloc exception (especially if the logging was taking place
in a destructor, called because I'm unwinding the stack because
of another exception).
> >Further discussion of the application of exceptions is
> >appreciated.
> I prefer error results to exceptions. So all I would write
> about exceptions would start with »If I was forced to use a
> language that would force me to use exceptions ...«
I prefer to have a number of different tools in my toolbox, and
to use which ever one is most effective for the job at hand.
--
James Kanze
> [snip]
> > More elegantly? Actually, for correct and secure C++ code,
> > all functions need to be written to be »exception safe«,
> > but only a minority of C++ programmers does so or even is
> > aware of it.
> Why?
Because your program doesn't work right otherwise.
Exceptions introduce an additional flow path into your code.
Fundamentally, all "exception safety" really means is ensuring
that the code is correct when this flow path is executed.
Practically, it's a bit more complicated than that, because the
number of additional flow paths soon makes the older,
traditional ways of reasoning about program correctness
unmanageable; when C++ programmers speak of exception safety,
they are generally referring to the more recently developed
techniques to make this problem tractable. Effective use of
destructors, for example---I'm not sure you can write correct
code with exceptions unless the language has deterministic
destructors.
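A minimal illustration of why deterministic destructors matter here: the guard's destructor runs on the exceptional flow path too, not only on the normal one. All names below are invented:

```cpp
#include <cassert>
#include <stdexcept>

// A minimal scope guard: the destructor runs on every exit path,
// including the "extra" flow paths created by exceptions.
struct Guard
{
    explicit Guard(int& counter) : counter_(counter) {}
    ~Guard() { ++counter_; }      // deterministic: runs during unwinding too
    int& counter_;
};

void may_throw(bool do_throw, int& cleanups)
{
    Guard g(cleanups);
    if (do_throw) throw std::runtime_error("boom");
}
```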
> [snip]
> >> Further discussion of the application of exceptions is
> >> appreciated.
> > I prefer error results to exceptions. So all I would write
> > about exceptions would start with »If I was forced to use a
> > language that would force me to use exceptions ...«
> > The applications of error results is as follows:
> This should have probably been "switch" instead of "if" :
> > if( attempt( to_do_something ))
> > { case 0: /* ok, continue */ ... break;
> > case 1: /* oops, handle */ ... break;
> > ... }
> Back to return value vs exceptions.
> 1)
> What happens if you can not handle the error in this method?
Then you might prefer an exception.
> Or in the method that called this method?
> Or in the method that called the method that called the method where the
> error happened?
> etc
The further up the stack you go, the more exceptions are called
for. It's important to realize, however, that the alternative
flow paths are still there, and that your code has to be correct
if they're taken.
> 2) How do you handle errors in constructors?
Or overloaded operators, or any other function which can't
return a value, or has its return type imposed.
The case of constructors is particular, however, in that an
exception coming from a constructor means that you don't have an
instance of the object at all. Which means that you don't have
to worry about an invalid instance floating around. This often
justifies exceptions even in cases where you wouldn't normally
use them.
> 3) Simple example : how would you create a method (or a
> function) that reads a value (some random integer) from a file
> and returns that value?
Fallible<int> readInt( std::istream& input );
Although in this case, it's even simpler, because istream
maintains an error state, which should be set if you can't read
the value.
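The Fallible idea (usually credited to Barton and Nackman) can be sketched minimally as a value paired with a validity flag; this is only an illustration of the concept, not their actual interface:

```cpp
#include <cassert>
#include <sstream>

// Minimal sketch of the Fallible idea: a value plus a validity flag.
template <typename T>
class Fallible
{
public:
    Fallible() : valid_(false), value_() {}
    explicit Fallible(const T& v) : valid_(true), value_(v) {}
    bool valid() const { return valid_; }
    const T& value() const { return value_; } // only meaningful if valid()
private:
    bool valid_;
    T value_;
};

Fallible<int> readInt(std::istream& input)
{
    int v;
    if (input >> v) return Fallible<int>(v);
    return Fallible<int>();       // the stream's error state is also set
}
```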
--
James Kanze
> > Stefan Ram wrote:
> > [snip]
> >> More elegantly? Actually, for correct and secure C++
> >> code, all functions need to be written to be »exception
> >> safe«, but only a minority of C++ programmers does so or
> >> even is aware of it.
> > Why?
> The above is false. Exception-safe code is needed to write
> code that avoids resource leaks in the face of an exception.
It's needed to write code that is correct in the face of an
exception. What "correct" means depends on the application
specifications. (On the other hand, what he wrote is false in
so far that if the function calls no other function, or only
calls functions guaranteed not to throw, it doesn't have to be
exception safe.)
> For instance:
> {
> char *p = new char[256];
> f();
> }
> If f throws an exception, this statement block is abandoned,
> and the allocated memory is leaked.
Or not. If the application is using garbage collection, it's
not leaked. If the application immediately terminates (and is
in a hosted environment), it's not leaked.
> Of course other kinds of resources can be leaked, like handles
> to operating system resources such as open files.
And there can be other issues besides leaking. In the end,
you've got to ensure internal consistency for all possible
control flows. When some of the possible control flows are the
result of an exception, then this requirement is called
exception safety.
> Leaking a resource is not automatically incorrect. Firstly, it
> does not violate any C++ language rule. Allocating memory and
> losing the pointer does not trigger undefined behavior.
> Whether it is an issue or not depends on the precise situation
> in which it occurs. Many C++ programs leak resources by
> design simply because they lose references to objects when
> terminating by returning via main.
Or by calling exit. (Calling exit does not unwind the stack,
which means that destructors of local variables are not called.)
> This doesn't matter if the operating system cleans up the
> resources after a program that terminated. Minor resource
> leaks in short-lived programs are often not a problem.
More correctly, if the resource will be cleaned up by the
system, and you return to the system, it's not been leaked.
It's only a leak if you don't return to the system, or if it is
something which won't be cleaned up by the system. (Neither
Unix nor Windows clean up temporary files, for example.)
[...]
> > 2) How do you handle errors in constructors?
> You set a ``successfully constructed'' flag in the object
> which is tested after construction, through some public member
> function.
And has to be asserted in every member function, so that you
don't accidentally use an invalid object. (There are cases
where this is valid, e.g. input or output. But they're not the
majority.)
> > 3) Simple example : how would you create a method (or a
> > function) that reads a value (some random integer) from a
> > file and returns that value?
> There are a few ways. One is to store the integer via a
> pointer or reference parameter and return the success/error
> indication.
Yes. There are better solutions in C++ (at least in certain
cases). But people were writing robust, correct code even
before there were exceptions.
--
James Kanze
[...]
> > More elegantly? Actually, for correct and secure C++ code,
> > all functions need to be written to be »exception safe«,
> Correct.
> Almost everything you do will generate some kind of exception,
> at least in modern languages, and it is good it does.
In order to write exception safe code, at least with value
semantics (and I suspect even without value semantics, but I've
not analysed the question in detail), it's been proven that you
need a certain number of primitives which are guaranteed not to
throw---one of the most classical C++ idioms, for example, only
works if you have a version of swap that doesn't throw.
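The classical idiom alluded to here is copy-and-swap, which only gives the strong guarantee because the committing swap cannot throw. A minimal sketch (`Holder` is an invented class; `noexcept` is the C++11 spelling of the no-throw promise):

```cpp
#include <cassert>
#include <string>

// Copy-and-swap sketch: the copy can throw, but it happens before any
// state changes; the swap that commits the change must not throw.
class Holder
{
public:
    explicit Holder(const std::string& s) : data_(s) {}
    Holder(const Holder& other) : data_(other.data_) {}
    Holder& operator=(const Holder& other)
    {
        Holder tmp(other);        // may throw; *this untouched if it does
        swap(tmp);                // must not throw: commits the assignment
        return *this;
    }
    void swap(Holder& other) noexcept { data_.swap(other.data_); }
    const std::string& data() const { return data_; }
private:
    std::string data_;
};
```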
--
James Kanze
>Maybe, maybe not. It probably depends on what whoever is
>saying it means by "unrecoverable". (But I've never heard
>anyone say it, so I don't know the context.)
I like that one!
You do have a style and insight more than most people I saw.
That is why whenever I end up seeing some articles while verifying
sites, and there is a thread I am looking at, and there is an article
from James Kanze, THAT is the article I am going to read first.
That is pretty much guaranteed.
To tell you the truth, I do not have even a concept of
"unrecoverable" error. Unrecoverable errors occur ONLY if my
box dies more or less.
In that case, exceptions or mental masturbation about them,
means as much as a dead mosquito fart. We are screwed.
But, as soon as I reboot, ALL I need to do "to recover",
is to push a SINGLE button. Since even the operation and ALL
its parameters will be automatically reloaded. Cause it is
all as persistent as it gets.
Not a big deal. I lost a couple of minutes of my time
and had a chance to go take a leak, instead of being glued
to the screen. What a pleasure!
:--}
>> Whereas the exception machinery in C++ was developed primarily
>> to handle and RECOVER from more errors more elegantly than was
>> possible without exceptions, the statement is highly suspect.
>
>It depends on the error. At the lowest level, where a simple
>retry suffices, exceptions complicate error handling.
I do not agree. Sorry.
You DO have this tendency...
:--}
Ok, how do you retry a network connection?
You sit in some dumb loop and keep doing the same stoopid
thing, and you might have to do several things during
the connection process depending on what kind of protocols
the server supports, and whether you need to do authentication
and what kinds of authentication methods the server supports,
or whether the server returns tens of codes, some of which may
be temporary off-line conditions to be retried.
Some may be permanent errors.
Some may be resource issues, and you name it.
So, to me, I just give a dead flying chicken what kind of
error the server might have. There are only 2 levels of granularity
of exceptions.
I do not check ANY return codes more or less.
It can be thrown on two levels only, at most 3.
And ALL that funky logic does not have to be there.
Yes, I understand that your program does need logic.
Otherwise it is just a dumb state machine, even though
the very logic is already wired in into that state machine.
After all, they do execute conditional jumps depending
on which input bit is active.
>> Those "unrecoverable errors" are probably the things that
>> assertion checking weeds out during development time rather
>> than being an application for exception machinery.
>
>If that's what is meant by "unrecoverable", then exceptions are
>certainly not for unrecoverable errors. In such cases, it's
>important (in most applications---there are exceptions) to kill
>the process as soon as possible, without any stack walkback.
>
>> "Unrecoverable error": throw up a dialog for the developer and
>> exit (or crash to keep the call stack in view in the
>> debugger).
>
>Exactly. And that's not what exceptions do.
>
>> Exceptions, are more appropriately used where there is a
>> likelihood of recovering from the error condition rather than
>> being just for "unrecoverable errors".
>
>Again, it depends on what you mean by "recovering". If you just
>have to try again (e.g. user input error), then a simple while
>loop is a lot simpler than an exception.
:--}
I like THAT one. Nice style!
> Exceptions are useful
>when you can't really do anything about the error locally.
Not necessarily.
Again the string to number conversion exceptions.
Can be handled as locally as it gets.
And plenty of other things.
I don't think there is such a rule for it.
>Which often means that "recovery" is more or less impossible,
>since once you've left "locally", you've lost all of the
>necessary context to retry.
Not really.
You did lose the context within the same stack frame.
Not a big deal.
Just make sure you handle that exception and clean up
some memory; in Java you don't even have to worry
about doing that much.
Some local operation could be easily retried by the higher
levels in the vast majority of situations, unless you read some
obviously wrong configuration parameter and it is not
going to work no matter how many times you retry it.
In that case, you throw on a higher level, after displaying
or logging the lower level, more precise and more detailed
result.
After that, you catch it on higher levels, and those levels
know much better what the whole operation is all about.
If they decide that, sorry, this cannot be done even on
this level, they can rethrow an even rougher-granularity
exception to the highest level loop.
At that point, your highest level logic knows what to do.
Either to wait for some time and try to retry, say in cases
where you lost a connection, which should be restored by
the O/S in a relatively short time.
Or, to simply abandon one of your major operations
and go on with the next item on the job list, if you have
anything like this, and THAT is where things become most
tricky.
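The layering described above (log the precise low-level failure, then rethrow something coarser for the top-level loop to act on) might look like this sketch; every name in it is invented:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Coarse exception for the top-level loop to act on.
struct OperationFailed : std::runtime_error
{
    explicit OperationFailed(const std::string& m) : std::runtime_error(m) {}
};

// Low level: precise, detailed failure.
void read_config(bool ok)
{
    if (!ok) throw std::runtime_error("config: bad parameter at line 12");
}

// Middle level: record the detail, rethrow a coarser exception upward.
void run_operation(bool ok, std::string& log)
{
    try {
        read_config(ok);
    } catch (const std::runtime_error& e) {
        log = e.what();                              // keep the precise detail
        throw OperationFailed("operation aborted");  // coarse, for the top loop
    }
}
```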
>But maybe you mean something different by "recovering"?
>
>> Further discussion of the application of exceptions is
>> appreciated. Keep in mind that while there are SOME general
>> vague "principles" of the application of exceptions, mostly
>> the scenarios need to be defined for fruitful discussion.
>> Please keep the examples simple but realistic and note the
>> assumptions you make.
>
>Concrete example:
>
> //! \pre
> //! No current output set up...
> void
> SomeClass::setupOutput()
> {
> assert(! myOutput.is_open());
Jesus!
> std::string filename = getFilename(); // Dialog with user...
> myOutput.open(filename.c_str());
> while (! myOutput.is_open()) {
> reportErrorToUser();
> filename = getFilename();
> myOutput.open(filename.c_str());
> }
> }
>
>Use of exceptions here would only make the code more
>complicated. (Try writing it in Java sometime:-). Where
>failure to open a file is reported via an exception.)
>
--
That is why I like to read your stuff.
You always seem to invent something kinky.
These are one of those rare times when I have to stop reading
all this jazz and start looking at it.
What a pleasure!
:--}
Most of other stuff I just scroll by.
>> [snip]
>> >> Further discussion of the application of exceptions is
>> >> appreciated.
>
>> > I prefer error results to exceptions. So all I would write
>> > about exceptions would start with »If I was forced to use a
>> > language that would force me to use exceptions ...«
>
>> > The applications of error results is as follows:
>
>> This should have probably been "switch" instead of "if" :
>
>> > if( attempt( to_do_something ))
>> > { case 0: /* ok, continue */ ... break;
>> > case 1: /* oops, handle */ ... break;
>> > ... }
>
>> Back to return value vs exceptions.
>
>> 1)
>> What happens if you can not handle the error in this method?
>
>Then you might prefer an exception.
>
>> Or in the method that called this method?
>> Or in the method that called the method that called the method where the
>> error happened?
>> etc
>
>The further up the stack you go, the more exceptions are called
>for. It's important to realize, however, that the alternative
>flow paths are still there, and that your code has to be correct
>if they're taken.
Deep.
>> 2) How do you handle errors in constructors?
>
>Or overloaded operators, or any other function which can't
>return a value, or has its return type imposed.
>
>The case of constructors is particular, however, in that an
>exception coming from a constructor means that you don't have an
>instance of the object at all. Which means that you don't have
>to worry about an invalid instance floating around. This often
>justifies exceptions even in cases where you wouldn't normally
>use them.
Cool argument.
>> 3) Simple example : how would you create a method (or a
>> function) that reads a value (some random integer) from a file
>> and returns that value?
>
> Fallible<int> readInt( std::istream& input );
>
>Although in this case, it's even simpler, because istream
>maintains an error state, which should be set if you can't read
>the value.
>
--
Cool. I like that. You ARE pretty inventive I'd say.
:--}
Ok, I forgot about that. Thank you.
But how many people are adding which exceptions a method can throw in its
declaration? (I forgot what it is called. Is it an exception specification?)
All normal coding styles are telling you that it causes more trouble than
it helps.
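The feature being reached for is the exception specification. The dynamic form (`throw(X)` lists) was indeed widely discouraged, and was later deprecated and removed from the language; C++11's `noexcept` is the surviving form, and it can be queried at compile time. A sketch:

```cpp
#include <cassert>

// Dynamic exception specifications like `void f() throw(int);` were widely
// discouraged (and later removed); noexcept is the surviving form.
void may_throw();                 // no specification: may throw anything
void never_throws() noexcept;     // promises not to throw

void may_throw() {}
void never_throws() noexcept {}
```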
>> For instance:
>
>> {
>> char *p = new char[256];
>> f();
>> }
>
>> If f throws an exception, this statement block is abandoned,
>> and the allocated memory is leaked.
>
> Or not. If the application is using garbage collection, it's
> not leaked. If the application immediately terminates (and is
> in a hosted environment), it's not leaked.
>
Not many people are using GC :P
Is there a function that terminates a process but still unwinds the stack?
I recall seeing some articles on gc issue in C++ a while back
on this very group. Do not remember what it was.
But what I can say out of my own experience with Java is this:
a well-written GC is such a help that I can not even see the
argument for not implementing it, to this day.
I bet you can not even imagine how much time has been wasted to
date by programmers dealing with these totally stoopid issues
related to memory deallocation. Probably at least 30% of the total
time spent on developing C++ code.
C++ would probably benefit tremendously if it adopted some
of the central Java concepts, such as GC, threads and GUI.
Except that would require an equivalent of a virtual machine
underneath.
And that is one of central issues with Java.
I even had a long argument AGAINST JVM a while back
on the basis of program fat. The JVM takes at least 10-20 megs
of memory to even run a program. But, by today's standards,
that is not even worth mentioning, since almost any modern
program takes hundreds of megs of memory to run, and the
JVM load time is negligible compared to the run time of any
more or less complex application.
So, there you have it...
Yes. It is called clean exit.
:--}
> >>>> Stefan Ram wrote:
> >>>> [snip]
> >>>>> More elegantly? Actually, for correct and secure C++
> >>>>> code, all functions need to be written to be »exception
> >>>>> safe«, but only a minority of C++ programmers does so or
> >>>>> even is aware of it.
> >>>> Why?
> >>> The above is false. Exception-safe code is needed to write
> >>> code that avoids resource leaks in the face of an exception.
> >> It's need to write code that is correct in the face of an
> >> exception. What "correct" means depends on the application
> >> specifications. (On the other hand, what he wrote is false
> >> in so far that if the function calls no other function, or
> >> only calls functions guaranteed not to throw, it doesn't
> >> have to be exception safe.)
> >Ok, I forgot about that. Thank you.
> >But how many people are adding what exceptions a method can
> >throw in its declaration? (I forgot what it is called. Is it an
> >exception specification?) All the usual coding guidelines say
> >it causes more trouble than it helps.
First, many people do use "throw()" when a function cannot
throw. But that's beside the point: when you're writing code,
you do have to know whether the function can throw or not;
whether this information is provided by a language construct or
the documentation is really irrelevant.
> >>> For instance:
> >>> {
> >>> char *p = new char[256];
> >>> f();
> >>> }
> >>> If f throws an exception, this statement block is
> >>> abandoned, and the allocated memory is leaked.
> >> Or not. If the application is using garbage collection,
> >> it's not leaked. If the application immediately terminates
> >> (and is in a hosted environment), it's not leaked.
> >Not many people are using GC :P
> I recall seeing some articles on gc issue in C++ a while back
> on this very group. Do not remember what it was.
> But what can I say out of my own experience with Java is this:
> A well written GC is such a help, that I can not even see the
> argument of not implementing it to this day.
I agree, but you can't lose sight of the fact that Java and C++
are two different languages. Garbage collection is nice in C++:
it definitely makes it easier to write correct code in a lot of
cases, and it is essential for safety (no dangling pointers) in
others. But it isn't nearly as essential in C++ as in Java,
because C++ supports value semantics. Which means that you don't
use nearly as many dynamically allocated objects, and most of
the ones you do use have deterministic lifetimes anyway. (Of
course, the safety issue still stands.)
> I bet you can not even imagine how much time has been wasted to
> date by programmers dealing with these totally stoopid issues
> related to memory deallocation. Probably at least 30% of the
> total time spent on developing C++ code.
Generally less than 10% of the time. But given the cost of
competent engineers, even 10% is worth it.
> C++ would probably benefit tremendously if it adopted
> some of the central Java concepts, such as GC, threads and GUI.
> Except that would require an equivalent of a virtual machine
> underneath.
Not at all. I've used garbage collection in C++, and I've used
C++ in multithreaded programs. And a lot of people have written
GUI's in C++. All without a virtual machine.
All would be nice additions, IMHO, although for various
reasons, I don't think a standard GUI would be possible today.
--
James Kanze
No, but it's easy to implement. Just wrap all of the code in
main in a try block, e.g.:
int
main( int argc, char** argv )
{
    try {
        // all of the code...
        return EXIT_SUCCESS;
    } catch ( int returnCode ) {
        return returnCode;
    }
}
That's what I usually do, anyway.
--
James Kanze
Nope. That doesn't unwind the stack. Test is here:
#include <iostream>
#include <cstdlib>

struct A
{
    A()  { std::cout << "A()"  << std::endl; }
    ~A() { std::cout << "~A()" << std::endl; }
};

int main()
{
    A a;
    std::exit(0);   // terminates immediately: ~A() is never called
}
Well, I use throw() as a matter of practice.
In YOUR language, it becomes a part of "program logic".
In MY language, it becomes a part of your defense system
against ANY kinds of most unpleasant errors.
Even if I open a file and the open does not throw, and I detect
an error opening it, I may throw myself. Because I know
the higher levels have a catch that catches ANY kind of funk,
and it KNOWS what the logical meaning of ANY of those
throws in bulk is.
At a lower level, I may not even be able to formulate a logical
error, because I do not even know who the hell is doing what
while they are calling me to deal with this file.
So, what is the point of me reporting: "error: could not open file"?
Not much. Because you could not open it doing what?
You might be opening tens of files, and some of them may
be functionally similar. But what is the CONTEXT?
You see?
So, this IS the case where your throws and exception mechanism
becomes a direct part of your program logic, and I mean HIGHER
level logic than simply ifs and buts of some routine.
>> >>> For instance:
>> >>> {
>> >>> char *p = new char[256];
>> >>> f();
>> >>> }
>
>> >>> If f throws an exception, this statement block is
>> >>> abandoned, and the allocated memory is leaked.
>
>> >> Or not. If the application is using garbage collection,
>> >> it's not leaked. If the application immediately terminates
>> >> (and is in a hosted environment), it's not leaked.
>
>> >Not many people are using GC :P
>
>> I recall seeing some articles on gc issue in C++ a while back
>> on this very group. Do not remember what it was.
>
>> But what can I say out of my own experience with Java is this:
>> A well written GC is such a help, that I can not even see the
>> argument of not implementing it to this day.
>
>I agree,
Cool. So GET TO WORK, you high tower priests!
:--}
> but you can't loose sight of the fact that Java and C++
>are two different languages.
So what?
> Garbage collection is nice in C++:
>it definitely makes it easier to write correct code in a lot of
>cases, and it is essential for safety (no dangling pointers) in
>others.
Cool, so wire it into language on a level of a spec
and STANDARD functionality.
> But it isn't nearly as essential in C++ as in Java,
Wut?
>because C++ supports value semantics. Which mean that you don't
>use nearly as many dynamically allocated objects, and most of
>the ones you do use have deterministic lifetimes anyway. (Of
>course, the safety issue still stands.)
Well, to tell you the truth, I do like java in this respect
better.
There is no such things as -> in java.
Everything is bla.bla.blah.
So, if you need to extend your code, you don't have to worry
about replacing the -> notation with the . notation in some places,
which may take hours if you change your objects and
create some super-objects that include those, etc.
Secondly, I don't have to worry about "value semantics".
I could care less if it exists. I don't have to add THIS
level of granularity, which only creates more complications
in the end.
Just look at some "improvements" in C++, like references?
What the funk are those?
Nobody can even agree on how some compilers interpret it.
You can not assume ANY semantics if you use references.
I did not watch the C++ development for years now.
But I just have an intuitive feeling that what you guys
are doing is everything you can to kill the language.
The amount of complexities you introduce and the amount
of visible, tangible benefits they produce as a bottom
line is just WAY too lil bang for a buck.
Jeeez. I bet they are going to jump at me in bulk now!
:--}
I think language is moving in a totally dead end direction.
Again, look at the experience of Java.
Do you think, in your clear mind that it was just a waste?
That the whole Java thing is nothing more than a bad joke?
I'd think REALLY hard before answering this question.
And what do YOU guys do?
Well, just IGNORE it all, like it is some "terrorist" camp,
like some "evil" Saddam.
Instead of learning from it and appreciating that grand
piece of work called Java.
What do you learn from Php or Python, or even stinky
Javascript, as screwed up as it is?
ANYTHING?
Well, you, high tower priests, could care less that those
languages exist, because to you, "purists", those are
not really languages. Those are just toys for infantiles.
Meanwhile, more and more of the most significant development
is being implemented every single day using the Php, Python,
Ruby, Javascript, SQL or even that HTML, NONE of which has
the kind of portability problems C++ has.
Do you see ANYTHING?
Or you are blind, sitting in that fortress of yours,
in that high tower of yours, hoarding the "secrets"
of the "craft"?
I just bet you, the significance of C++ is going to be diminished
to the point, where it will become an atrophied organ in the
body of modern world, and information processing specifically.
And information processing is probably the most critical
aspects of the entire mankind's development and survival.
Because things need to happen URGENTLY now and at a rate
never seen before in the entire history of mankind.
By the time those monsters, aka politicians, keep hoarding
some of their "private interests" and making some deal
as far as environmental situations goes, that are simply
insane, we need the information machine to work as smoothly
and as efficiently as we can manage.
And that is YOUR OBLIGATION. It is no longer a luxury,
or some game you play when you have nothing better to do.
So, the VAST majority of programming nowadays better be
directed towards information processing.
TOTAL portability is a MUST from now on.
I do not want to hear the type of arguments I keep hearing.
Not that it matters to me personally more than a dead
mosquito fart.
I want to see C++, if it EVER has a chance, to run
on ANY platform and I want to see a GUI wired into the
language. Sure, you don't have an equivalent of JVM,
so you can not rely on things it provides you.
Well, but WHO prevents you from having something of
that kind or even going as far as starting out with
using the existing JVM? After all, how was C++
initially implemented?
Ask Stroustrup.
And I did, and he lied.
At Ruben Engineering, Cambridge, Mass. USA.
Because I asked him this:
"why did you implement C++ as a preprocessor to C?"
And he said:
Nope. It is NOT a preprocessor.
And it is a lie.
Because it was literally a fancy preprocessor.
Because after the 1st phase of "compilation",
it simply ran the C compiler.
Ask him.
And what about those "objects" and how the first versions
of C++ were linked?
Well, the stuff was PATCHED into your executable,
just to make C++ behavior out of standard C executables.
So...
Why don't you hack something up that can help to save
this dinosaur called C++?
Ok, enough for now.
Cya.
>> I bet you can not even imagine how much time has been wasted to
>> date by programmers dealing with these totally stoopid issues
>> related to memory deallocation. Probably at least 30% of the
>> total time spent on developing C++ code.
>
>Generally less than 10% of the time. But given the cost of
>competent engineers, even 10% is worth it.
>
>> C++ would probably benefit tremendously if it adopted
>> some of the central Java concepts, such as GC, threads and GUI.
>
>> Except that would require an equivalent of a virtual machine
>> underneath.
>
>Not at all. I've used garbage collection in C++, and I've used
>C++ in multithreaded programs. And a lot of people have written
>GUI's in C++. All without a virtual machine.
>
>All would be nice additions, IMHO, although for various
>reasons, I don't think a standard GUI would be possible today.
>
--
Right; for instance failing to unlock a mutex, or roll
back a transaction, aren't resource leaks.
I use exceptions everywhere instead of error-code returns from functions.
It is easier to program if you get rid of statements like
if(f()==error){ return error or handle error }
Also I have very few try's, only in places for top-level cleanup...
Exceptions are a mechanism to make code cleaner... and to get rid of
error codes as return values from functions.
Greets
Hm, why would you do this? Isn't

{
    vector<char> p(256);
    f();
}

simpler? You can always take &p[0] and use it like a buffer, and you
get the size for free....
>
> If f throws an exception, this statement block is abandoned, and the
> allocated memory is leaked. Of course other kinds of resources can be
> leaked, like handles to operating system resources such as open files.
>
I use vectors ....strings and such stuff.
Greets
GC is a heavy performance killer, especially on multiprocessor systems
in combination with threads... it is slow, complex and inefficient...
It makes sense in functional languages because of recursion, but
in Java?!
GC without threads, but with processes and shared memory instead, is ok
on multiprocessor systems...
>
> Except that would require an equivalent of a virtual machine
> underneath.
a virtual machine is also a heavy performance killer...
>
> And that is one of central issues with Java.
Yes.
I think Java is designed in such a way that it will still be slow in
comparison to other compiled languages... even if it is a compiled
language.
Greets
To demonstrate one way in which code fails to be exception safe.
> isnt't that
> {
> vector<char> p(256);
> f();
> }
> is simpler?
This code no longer demonstrates a resource leak in the face of an exception,
and so it would not have made a suitable example to accompany my article.
Doh?
Is not.
> in combination with threads....it is slow, complex and inefficient...
Religious belief.
> It has sense in functional languages because of recursion, but
> in java?!
Java has recursion. Moreover, there are functional languages built on the Java
platform.
Recursion is not directly connected to the need for garbage collection; this is
some kind of strange misconception.
> GC without threads, but processes and shared memory instead, is ok on
> multiprocessor systems...
GC is very efficient from an SMP point of view, because it allows
for immutable objects to be truly immutable, over most of their
lifetime. No book-keeping operations have to be performed on objects
that are just passed around throughout the program (such as bumping
refcounts up and down).
No storage reclamation strategy is free of overhead. Even if your program
correctly manages memory by itself with explicit new and delete, there is a
cost.
Moreover, a poorly implemented heap allocator (like virtually
every default malloc implementation out there in the real world)
is an SMP performance killer.
Like malloc and new, GC can be badly implemented in a way that kills SMP
performance. So can any aspect a programming language implementation, including
the compiler.
If you find a sufficiently bad compiler and library for some language, you can
``prove'' any statement about how bad that language is.
>> Except that would require an equivalent of a virtual machine
>> underneath.
>
> virtual machine is also heavy performance killer...
That may be so, but GC does not require a virtual machine. GC has been used for
five decades, with natively compiled code, starting in the late 1950's on IBM
704 mainframes, as part of the Lisp run-time support.
>>
>> And that is one of central issues with Java.
>
> Yes.
> I think java is designed in such way that it will still be slow in
> comparison to other compiled languages...if it is compiled
> language.
Unfortunately, that is wishful thinking. Natively compiled Java has very little
disadvantage compared to C and C++. (Mainly in areas of doing low-level things
with memory, or directly interfacing with the ``bare iron''; the sorts
of things that are poorly supported in Java).
Java has the same low-level numeric types, and the way it deals with objects is
not much different from pointers to classes in C++.
There is no reason to expect something like a matrix multiplication with
Java arrays to be slow, when Java is compiled to native code by an optimizing
compiler.
Hm, explain to me how any thread can access or change any pointer in
memory without a lock while the GC is collecting....
There is no way for the GC to collect without stopping all threads
and without locking.... because the GC is just another thread (or
threads) itself...
>
>> in combination with threads....it is slow, complex and inefficient...
>
> Religious belief.
Of course. GC is a complex program that has only one purpose:
to let the programmer not write free(p). But the programmer
still has to write close(fd).
What's the point of that?
> Recursion is not directly connected to the need for garbage collection; this is
> some kind of strange misconception.
>
>> GC without threads, but processes and shared memory instead, is ok on
>> multiprocessor systems...
>
> GC is very efficient from an SMP point of view, because it allows
> for immutable objects to be truly immutable, over most of their
> lifetime. No book-keeping operations have to be performed on objects
> that are just passed around throughout the program (such as bumping
> refcounts up and down).
Refcounts are negligible in comparison to what the GC is doing.
GC cannot be efficient, since it cannot access program
memory while the program is working... and it doesn't know what the
program is doing... therefore GC can perform well only if it collects
when absolutely necessary. And when it has to collect, performance
becomes catastrophic...
>
> No storage reclamation strategy is free of overhead. Even if your program
> correctly manages memory by itself with explicit new and delete, there is a
> cost.
Manual memory deallocation is simple, fast and efficient. Nothing
as complex as GC.
The cost of new and delete is nothing in comparison to GC.
>
> Moreover, a poorly implemented heap allocator (like virtually
> every default malloc implementation out there in the real world)
> is an SMP performance killer.
Yeah, right. And the GC thread is black magic, right?
>
> Like malloc and new, GC can be badly implemented in a way that kills SMP
> performance. So can any aspect a programming language implementation, including
> the compiler.
GC cannot be implemented efficiently, since it has to mess with
memory... while other threads are working.
Add to that a compacting collector, which has to update all references
in the program plus do some copying... and that can be optimized to be
faster than new/delete? Yeah, right ;)
> That may be so, but GC does not require a virtual machine. GC has been used for
> five decades, with natively compiled code, starting in the late 1950's on IBM
> 704 mainframes, as part of the Lisp run-time support.
I used GC with C++...
>> I think java is designed in such way that it will still be slow in
>> comparison to other compiled languages...if it is compiled
>> language.
>
> Unfortunately, that is wishful thinking. Natively compiled Java has very little
> disadvantage compared to C and C++. (Mainly in areas of doing low-level things
> with memory, or directly interfacing with the ``bare iron''; the sorts
> of things that are poorly supported in Java).
I don't want to discuss this, but it is obvious that nothing in Java is
designed with performance in mind. Quite the opposite....
Greets
>>> C++ would probably benefit tremendously if it adopted some
>>> of the central Java concepts, such as GC, threads and GUI.
>> GC is heavy performance killer especially on multiprocessor systems
No idea what you are talking about.
>Is not.
>> in combination with threads....it is slow, complex and inefficient...
Where are you pulling THIS stuff from?
>Religious belief.
I'd say so.
>> It has sense in functional languages because of recursion, but
>> in java?!
No idea what does recursion has to do with it.
>Java has recursion. Moreover, there are functional languages built on the Java
>platform.
>Recursion is not directly connected to the need for garbage collection; this is
>some kind of strange misconception.
I would think recursion has 0 impact on GC from what I see.
Correct.
>Java has the same low-level numeric types, and the way it deals with
>objects is not much different from pointers to classes in C++.
Indeed. And once you deal with objects, ALL the "advantages"
in performance of C++, or even C, are gone, and you are on the
same level, if not better, depending on what kind of algorithms
are implemented to handle things like collections, such as list,
tree, hash map, map, etc.
In real programs, you don't deal with bits and bytes and register
values. You deal with OBJECTS. Once you deal with objects and do
just about any operation of any significance on them, all your
"performance" gains are gone. Nonexistent.
>There is no reason to expect something like a matrix multiplication with
>Java arrays to be slow, when Java is compiled to native code by an optimizing
>compiler.
An interesting point people seem to miss is that the overall
advantage of having a JVM is that it shields you from all sorts of
nasty problems when you have to deal directly with the O/S, and
provides you a portability mechanism, so if you decide to wire in
the GUI, threads, gc or whatever, you don't have to worry about how
it is going to run on a different platform.
The cost of JVM in terms of memory footprint is negligible
by today's standards. Probably 10 - 15 % of your total memory
requirements for more or less complex apps that are useful
enough to even bother about.
Another point about performance is whether your language has a rich
enough feature set and expressive power that you do not incur that
much performance overhead. The overall impact on your performance
I'd estimate to be no more than 10-20%, probably in the worst case.
I'd be curious to see the benchmarks.
You incur performance overhead when you do lil things on a
very low level, poking into memory or what have you.
You'd have to substantiate your claims with some hard data,
and even there, it highly depends on what kind of things
your app does.
In my case, I have not noticed ANY issues with performance.
I am processing at least 1000 articles per second, and
even that is not the limit, because of my overly defensive
coding style for reading files. I bet I can double that
performance if I spend a couple of days, if not a couple of
hours, on it. I think at least 30 to 50% is quite realistic.
The way I check it, is to see how fast my app is processing
files by simply looking at the rate at which my drive light
flashes. In a perfect app, it would stay solid red with
very little flashing. That means it achieved a PERFECT
performance or theoretical limit, bound by disk i/o.
In my case, it does not. So, in my case, I am processing the
stuff at a rate equivalent to about 6 megs per second in
terms of disk access speed. Considering the amount of
processing going on under the hood, I'd like to see YOUR
app doing better using C++.
And what is going on during the processing is this:
you read each text line, pass it to the article parser
engine. That one detects various article headers,
saves them in the object to generate an article object.
Once the article is parsed, the filtering engine kicks
in to filter the articles on anything you can imagine,
including the article headers and article body, a pretty
heavy duty filter.
THEN, two indexes are constructed on the fly, by message
ID and by article date stamp. A typical operation
processes at least 100k articles. So, just to construct
a sorted index on that many articles is quite a trip
by itself.
So, if you consider that overall speed of processing
is not far from direct disk read, such as in file copy,
then that is good enough for a poor guy like me.
I bet you can't do it faster in C++, no matter WHAT you do.
Prolly 10-20% is the best case.
But...
What I get in return is to be able to write the GUI code
with a single language, without worrying about all sorts
of graphics toolkits. I do not have to worry about threads.
I know it will run on any platform where you can install
JVM. I have a pretty rich set of language. Just collections
alone, out of the box is all I needed to date.
And I don't have to worry that someone's toolkit or library will,
all of a sudden, either become closed source, or
stop being maintained and developed. And I don't have to
worry whether there even IS such a toolkit on a different
platform or IDE environment.
I am not even using the preferred Java IDEs, because I don't
like any of them. Code completion alone is my MAJOR
concern, and I mean code completion from the moment you
type your first character, completion that is able to recognize
which symbols are valid in the scope I am in.
I can crank out the code probably twice as fast as they
can on those IDEs. Well, I did not try to compete with
anyone, but just for the sake of argument. Sure, an IDE
alone is not enough to achieve such results. But, combined
with code design and architecture issues, a typical
significant program update, where some more or less
major functionality is added, takes a couple of days on average.
I don't remember a case where I had to spend more than
a week, and I mean something so radical that I had to modify
at least 10 source files and change all sorts of GUI
code in several panels.
And THAT is what I am after: the OVERALL performance of the
development cycle, portability, the amount of worrying,
the clarity of syntax and language notation.
I do not even use generics. A matter of principle.
I personally think it is one of the worst ideas in Java
since day one. Most of it is bluff that translates
into an orders-of-magnitude increase in complexity in terms
of being able to quickly read your code and understand
what it does.
I work on my code by looking at some screen for a couple
of seconds, and I can see what is going on there.
I do not need to spend half an hour staring at some
hieroglyphs from Martians. I don't have that time.
My brain is not overloaded with all sorts of useless
bells and whistles, taking most of my brain processing
time just to understand those utterly unintuitive
constructs and tricks.
So far, never regretted it.
In other words, just about ALL I have to worry is a single
something, a single language, a single syntax, a single
set of tricks.
Just the fact that I am able to run my app on Windows
and Linux without even recompiling it, as I did,
wins the argument hands down, no matter what kind of argument it is.
Simple as that.
But that is for ME. Not for you.
For you, I can only wish good luck.
>>>> C++ would probably benefit tremendously if it adopted some
>>>> of the central Java concepts, such as GC, threads and GUI.
>>> GC is heavy performance killer especially on multiprocessor systems
>>
>> Is not.
>
>Hm, explain to me how can any thread, access or change any pointer in
>memory without lock while gc is collecting....
>There is no way for gc to collect without stopping all threads
>without locking.... because gc is just another thread(s) in itself...
To me, you are talking kernel-mode device-driver talk.
You are saying things that may or may not happen in THEORY,
without considering the overall applicability of some concept
to the overall program performance, feature set, ease of use,
and on and on and on.
To tell you the truth, about the ONLY thing I care about
as far as memory management goes is how much headache
it produces for me to manage it, and whether it works or not.
YOU can fiddle with those bits all you want.
Does not matter to me even a bit. I have no plans to rewrite
GC and can not even conceive such an idea.
All I know is that IT WORKS. And I can do it a number of times
faster than you. And it is still going to work. I don't even WANT
to hear about memory management issues as being ANY kind
of problem. We are WAY past that stage in the game.
Simple as that.
Now we have to solve the issue of making non-dynamically-scoped
languages as portable as modern dynamically scoped languages,
such as Php, the premiere language of the web,
Javascript, Python, Ruby, SQL, and even HTML.
What seems to apply here is the issue of not seeing the
forest for the trees, more than anything else.
>>> in combination with threads....it is slow, complex and inefficient...
>> Religious belief.
>Of course. GC is complex program that has only one purpose.
>To let programmer not write free(p), but programmer
>still has to write close(fd).
>What's the purpose of that?
Because close is a LOGICAL operation that can not be performed
automatically unless the program terminates.
And free() is not, just as GC proves beyond all doubt.
>> Recursion is not directly connected to the need for garbage collection;
>> this is some kind of strange misconception.
>>
>>> GC without threads, but processes and shared memory instead, is ok on
>>> multiprocessor systems...
>>
>> GC is very efficient from an SMP point of view, because it allows
>> for immutable objects to be truly immutable, over most of their
>> lifetime. No book-keeping operations have to be performed on objects
>> that are just passed around throughout the program (such as bumping
>> refcounts up and down).
>
>Refcounts are negligible in comparison to what gc is doing.
>GC cannot be efficient since it cannot access program
>memory while program is working....ad it doesn;t know what program is
>doing... therefore GC can perform well only if it collect
>when absolutely necessary. And when it had to collect performance
>became catastrophic...
Sorry. But this is totally unproductive.
Cya.
--
Memory management is not a problem. You can implement GC
for any application, even in assembler. Reference counting
is a simple form of GC and works well in C++ because
of RAII.
The problem is that in Java you don't have RAII, and
resource management you still have
to implement by reference counting. Or do you close files
immediately in the same scope? Never return descriptors
or other resources to the user?
So in Java you have to manually call addRef/releaseRef
to implement GC for everything that is not a Java object....
All in all, at first I was warmed to the idea of GC;
it is not a problem to have it. But then I tried
Haskell, and didn't have memory leaks, but rather
"space leaks" ;)
And somebody tried to convince me that a conservative GC
is faster than shared_ptr/auto_ptr (what a .... ;)
Greets
I guess what Branimir tried to say was that you should always release
your resources in a destructor. This automatically gives you the
basic exception guarantee.
Yes it is. A held mutex is also a resource, and so is a transaction.
Both should be wrapped in a class having appropriate destructor
semantics.
/Peter
Except that in some cases your destructor is not called.
Plus, James Kanze can tell you more about non-trivial destructors.
:--}
There is no reference counting in Java as far as I know.
Not that it matters I guess...
>All in all, at first I was warmed to the idea of GC,
>it is not a problem to have it, but then I tried
>Haskell, and didn't have memory leaks, rather
>"space leaks" ;)
>And somebody tried to convince me that conservative GC
>is faster than shared_ptr/auto_ptr (what a ....;)
>
>
>Greets
>
>
--
Yup.
You have to unwind EVERYTHING, no matter how small it is.
Otherwise, sooner or later your box will run out of steam.
>/Peter
No. The destructor is always called if the process is not terminated.
/Peter
Back in the days when every article about Java began with a cheap shot
at C++, there was an article in The Java Report that asserted in its
opening paragraph that C++ could not have garbage collection because it
didn't run in a virtual machine. I think that sets a record for error
density. (The article, of course, had nothing to do with garbage
collection, nor with C++.)
--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of
"The Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)
Well, look, controlling memory allocation is a crucial performance
feature of C++. You can write special allocators for every
class just by overloading new/delete. The performance gain on
modern hardware is in where you allocate objects of a
particular class, not in the allocation itself....
Because depending on the memory layout and dispersion
of objects you can gain 2-10 times speed, because
of the cache and how you access objects.
Allocation is not where GC fails, rather deallocation....
Because there is no faster and simpler way to perform collection
than to stop the program, perform collection in multiple threads, then let
the program work....
I think it is clear that this concept fails in combination with
threads, because they share the same address space...
It can work all right with processes, which don't share address space.
Greets
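The class-specific allocator idea above (overload new/delete so all objects of one class come from a contiguous pool, helping cache locality) might be sketched like this. `Node`, `kPoolCap`, and the pool layout are illustrative assumptions; thread safety and most error handling are elided.

```cpp
#include <cassert>
#include <cstddef>
#include <new>

// A class with its own pool allocator via overloaded new/delete.
struct Node {
    int value = 0;
    Node* link = nullptr;
    static void* operator new(std::size_t sz);
    static void operator delete(void* p) noexcept;
};

namespace {
constexpr std::size_t kPoolCap = 1024;
alignas(Node) unsigned char pool[kPoolCap * sizeof(Node)];
std::size_t high_water = 0;   // slots handed out so far
void* free_list = nullptr;    // singly linked list of freed slots
}

void* Node::operator new(std::size_t sz) {
    assert(sz == sizeof(Node));
    if (free_list) {                       // reuse a freed slot first
        void* p = free_list;
        free_list = *static_cast<void**>(p);
        return p;
    }
    if (high_water == kPoolCap) throw std::bad_alloc{};
    return pool + (high_water++) * sizeof(Node); // carve a fresh slot
}

void Node::operator delete(void* p) noexcept {
    *static_cast<void**>(p) = free_list;   // thread slot onto free list
    free_list = p;
}
```

All Nodes live in one contiguous block, which is where the cache-friendliness the post describes comes from; `new Node`/`delete` keep their normal syntax.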
Hm, you actually suggest that if a program does just a couple of allocations up
front and keeps all the objects, comparing the speed of allocation makes
sense?
Can you provide an application example where Java's allegedly superfast
allocation can be noticed?
You're missing the point. Comparisons of Java and C++ are supposed to
make Java look good. It doesn't matter whether they make sense so long
as they meet that goal. Hmm, could it be that Java proponents are all
Republicans?
SMP means nothing in comparison to the performance gain you get
from the CPU cache.
Immutable objects are a really bad idea; for example, all objects
in Haskell are immutable. Array update is O(n); a string
is implemented as a linked list. That's why no one really uses
Haskell's default objects; rather we have a fast mutable string, a fast
mutable array, fast mutable this and that, etc., which are
actually structures implemented in C.
And no one actually programs in functional style; rather
the payload code is wrapped in monads ;)
Object copy is an expensive operation on today's hardware.
It is always much faster and cheaper to perform an update
or use copy-on-write and reference-counted strings....
because they are cache friendly. Mutex lock/unlock is a very cheap
operation.
Look, I tested a quad Xeon against a home dual-core Athlon.
Initializing 256 megs of RAM from 4 threads (each thread 64 megs)
on the quad Xeon at a higher CPU frequency against the old dual Athlon, the Athlon
performs better or the same! The catch-22 is that I tried the fastest Athlon
and got the same result as the old Athlon ;) because they have the same
memory bus speed ;)
On Intels before the i7 architecture, the secret of performance
was not to miss the cache much....
Greets
Good read.
>>I don't want to discuss this, but it is obvious that nothing in java is
>>designed with performance in mind. Quite opposite....
This is just an insult, and not only an insult to the intelligence
of those who designed the language, but total fiction.
> Java 1.6 (aka "Java 6") is already one of the fastest languages:
>http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all
Yep, that is what I suspected.
> And Java 1.7 (aka "Java 7") is reported to be even faster:
>
> "Java 5 <===18% faster===< Java 6 <===46% faster===< Java 7"
>
>http://www.taranfx.com/blog/java-7-whats-new-performance-benchmark-1-5-1-6-1-7
Cool. I like that. Helps me quite a bit.
> See also:
>
>http://www.stefankrause.net/wp/?p=9
>
>http://paulbuchheit.blogspot.com/2007/06/java-is-faster-than-c.html
>
>http://www.idiom.com/~zilla/Computer/javaCbenchmark.html
Yeah, right. I'm reading local OS newsgroups, and sysadmins always
ask how to tweak the VM to perform faster...
Lots of complaints about Java software and performance on high-end
hardware and SANs.....
>
>> Java 1.6 (aka "Java 6") is already one of the fastest languages:
>
>> http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all
>
> Yep, that is what I suspected.
Yeah, that site is a really good reference for language benchmarks ;)
Why don't they test applications with more than 100 lines of code ;)
Greets...
Does not mean anything to me.
Some people are obsessed beyond reason.
>Lot of complains about java software and performance on high
>end hardware and SAN.....
>
>>
>>> Java 1.6 (aka "Java 6") is already one of the fastest languages:
>>
>>> http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all
>>
>> Yep, that is what I suspected.
>
>Yeah, that site is a really good reference for language benchmarks ;)
>Why don't they test applications with more than 100 lines of code ;)
>
>Greets...
>
--
> Branimir Maksimovic <bm...@hotmail.com> writes:
>>Refcounts are negligible in comparison to what gc is doing.
>>GC cannot be efficient since it cannot access program
>
> »[A]llocation in modern JVMs is far faster than the best
> performing malloc implementations. The common code path
> for new Object() in HotSpot 1.4.2 and later is
> approximately 10 machine instructions (data provided by
> Sun; see Resources), whereas the best performing malloc
> implementations in C require on average between 60 and 100
> instructions per call (Detlefs, et. al.; see Resources).
> And allocation performance is not a trivial component of
> overall performance -- benchmarks show that many
> real-world C and C++ programs, such as Perl and
> Ghostscript, spend 20 to 30 percent of their total
> execution time in malloc and free -- far more than the
> allocation and garbage collection overhead of a healthy
> Java application (Zorn; see Resources).«
>
> http://www-128.ibm.com/developerworks/java/library/j-jtp09275.html?ca=dgr-jw22JavaUrbanLegends
What this kind of a bogus metric assumes is a direct 1:1 mapping between
heap allocation in Java, and heap allocation in C++.
This is an absurd comparison. In Java, everything gets allocated on the
heap. This is not true for C++, where quite a bit of stuff gets allocated on
the stack. In most cases, this requires exactly 0 machine instructions,
above the usual stack frame setup for a function call. In the worst case,
where an aggressive compiler minimizes stack usage by recycling stack space
for objects in non-overlapping scopes, 1 machine instruction per allocation
would be expended.
But in Java, a lot more stuff gets allocated on the heap. Every string
literal constructs a java.lang.String object; for example:
write(1, "Foo\n", 4);
This results in no heap allocation in C++.
Even the C++-y way:
std::cout << "Foo" << std::endl;
That still does not compile into any heap allocations, per se. Some might
occur as a result of invoking the appropriate methods of
std::ostream::operator<<, but the same would apply to the internal
implementation of Java's println:
System.out.println("Foo");
But even before println gets invoked, it takes a java.lang.String as a
parameter. The literal string results in a java.lang.String getting
constructed on the heap, even before println() gets invoked (and it gets
garbage-collected at some future time).
I suppose that a very aggressive JVM may compile this down to JIT code and
furnish a special-case implementation for this type of a call to println()
that avoids a heap allocation. I don't know if any JVMs actually do that,
but I would be very surprised. Since all methods in java are virtual, this
severely limits the assumptions one could make for println() for some
arbitrary object that implements the Writer interface.
So, measuring the raw performance of Java's heap allocator directly against
some C++'s allocator is a meaningless comparison, since a typical Java
application would be hammering on the heap far more than an equivalent C++
application. But, having said that, it would certainly not hurt for common
C/C++ library implementations of heap allocations to borrow some tricks from
Java's. Some of the stuff it does would certainly work for C++ too.
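The stack-versus-heap point above can be made concrete by counting global heap allocations. Replacing the global `operator new` like this is a test-only observation trick (my assumption for illustration), not something the post proposes as production code.

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Count every global heap allocation so we can observe that purely
// stack-based code performs none.
static int g_allocs = 0;

void* operator new(std::size_t sz) {
    ++g_allocs;
    if (void* p = std::malloc(sz)) return p;
    throw std::bad_alloc{};
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

// All storage here is part of the stack frame: no allocator is touched.
int stack_only_sum() {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int s = 0;
    for (int x : a) s += x;
    return s;
}
```

In Java the equivalent work on reference types would go through the heap allocator; in C++ heap use is explicit and opt-in, which is why comparing raw allocator speeds 1:1 is misleading.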
Those are not up-to-date measurements.
These are up-to-date measurements -
http://shootout.alioth.debian.org/u32/which-programming-languages-are-fastest.php
http://shootout.alioth.debian.org/u32q/which-programming-languages-are-fastest.php
http://shootout.alioth.debian.org/u64/which-programming-languages-are-fastest.php
http://shootout.alioth.debian.org/u64q/which-programming-languages-are-fastest.php
http://shootout.alioth.debian.org/which-programming-language-is-fastest.php
It's too difficult to get anyone to read programs that are longer
than 100 lines of code.
> GC is heavy performance killer especially on multiprocessor systems
> in combination with threads....it is slow, complex and inefficient...
Obviously, you've never actually measured. A lot depends on the
application, but typically, C++ with garbage collection runs
slightly faster than C++ without garbage collection, especially
in a multi-threaded environment.
[...]
> > Except that would require an equivalent of a virtual machine
> > underneath.
> virtual machine is also heavy performance killer...
Which explains why some of the leading experts in optimization
claim that it is necessary for the best optimization. (I don't
fully buy that claim, but a virtual machine does have a couple
of advantages when it comes to optimizing: it sees the actual
data being processed, for example, and the actual machine being
run on, and can optimize to both.)
> > And that is one of central issues with Java.
> Yes.
> I think java is designed in such way that it will still be slow in
> comparison to other compiled languages...if it is compiled
> language.
First, Java is a compiled language, and second, it's not slower
than any of the other compiled languages, globally. (Specific
programs may vary, of course.)
--
James Kanze
> > Is not.
> Hm, explain to me how any thread can access or change any
> pointer in memory without a lock while GC is collecting....
> There is no way for GC to collect without stopping all threads,
> without locking.... because GC is just another thread(s) in
> itself...
Maybe. I've not actually studied the implementations in
detail. I've just measured actual time. And the result is that
over a wide variety of applications, garbage collection is, on
the average, slightly faster. (With some applications where it
is radically faster, and others where it is noticeably slower.)
> >> in combination with threads....it is slow, complex and
> >> inefficient...
> > Religious belief.
> Of course. GC is a complex program that has only one purpose.
> To let programmer not write free(p), but programmer
> still has to write close(fd).
> What's the purpose of that?
Fewer lines of code to write.
If you're paid by the line, garbage collection is a bad thing.
Otherwise, it's a useful tool, to be used when appropriate.
> > GC is very efficient from an SMP point of view, because it
> > allows for immutable objects to be truly immutable, over
> > most of their lifetime. No book-keeping operations have to
> > be performed on objects that are just passed around
> > throughout the program (such as bumping refcounts up and
> > down).
> Refcounts are negligible in comparison to what gc is doing.
Reference counting is very expensive in a multithreaded
environment.
And in the end, measurements trump abstract claims.
[...]
> > No storage reclamation strategy is free of overhead. Even if
> > your program correctly manages memory by itself with
> > explicit new and delete, there is a cost.
> Manual memory deallocation is simple, fast and efficient.
> Nothing so complex like GC. Cost of new and delete is nothing
> in comparison to GC.
That's definitely not true in practice.
[...]
> GC cannot be implemented efficiently since it has to mess with
> memory...
What you mean is that you don't know how to implement it
efficiently. Nor do I, for that matter, but I'm willing to
accept that there are people who know more about the issues than
I do. And I've measured the results of their work.
[...]
> I don't want to discuss this, but it is obvious that nothing
> in java is designed with performance in mind. Quite
> opposite....
You don't want to discuss it, so you state some blatant lie, and
expect everyone to just accept it at face value. Some parts of
Java were definitely designed with performance in mind (e.g.
using int, instead of a class type). Others less so. But the
fact remains that with a good JVM, Java runs just as fast as C++
in most applications. Speed is not an argument against Java
(except for some specific programs), at least on machines which
have a good JVM.
--
James Kanze
[...]
> Memory management is not a problem. You can implement GC for
> any application, even in assembler. Reference counting is a
> simple form of GC and works well in C++ because of RAII.
Reference counting doesn't work in C++, because of cycles. And
reference counting is very, very slow compared to the better
garbage collector algorithms.
[...]
> And somebody tried to convince me that conservative GC is
> faster than shared_ptr/auto_ptr (what a ....;)
And you refused to even look at actual measurements. I'm aware
of a couple of programs where the Boehm collector significantly
outperforms boost::shared_ptr. (Of course, I'm also aware of
cases where it doesn't. There is no global perfect solution.)
--
James Kanze
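The cycle problem named above (reference counting alone cannot reclaim mutually referencing objects) can be demonstrated in a few lines; `CNode` and `make_cycle` are hypothetical names for illustration.

```cpp
#include <cassert>
#include <memory>

// Two nodes that own each other through shared_ptr: once the local
// handles go away, each refcount sticks at 1, so neither destructor
// ever runs. This is the cycle leak reference counting cannot detect.
struct CNode {
    std::shared_ptr<CNode> other;
    static int destroyed;
    ~CNode() { ++destroyed; }
};
int CNode::destroyed = 0;

void make_cycle() {
    auto a = std::make_shared<CNode>();
    auto b = std::make_shared<CNode>();
    a->other = b;   // b's refcount: 2
    b->other = a;   // a's refcount: 2
}                   // locals die, both counts drop only to 1 -- leaked
```

Breaking the cycle requires making one of the links a std::weak_ptr; a tracing collector finds such garbage without any help from the programmer.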
> »[A]llocation in modern JVMs is far faster than the best
> performing malloc implementations. The common code path
> for new Object() in HotSpot 1.4.2 and later is
> approximately 10 machine instructions (data provided by
> Sun; see Resources), whereas the best performing malloc
> implementations in C require on average between 60 and 100
> instructions per call (Detlefs, et. al.; see Resources).
Although I agree with the final results (having made some actual
measurements), the wording above is very definitely
"advertising". It's a well known fact that *allocation* is very
fast in a copying garbage collector---even 10 instructions seems
like a lot. But this is partially offset by the cost of
collecting, and in early implementations (*not*, presumably
HotSpot) by the fact that each dereference involved an
additional layer of indirection.
> And allocation performance is not a trivial component of
> overall performance -- benchmarks show that many
> real-world C and C++ programs, such as Perl and
> Ghostscript, spend 20 to 30 percent of their total
> execution time in malloc and free -- far more than the
> allocation and garbage collection overhead of a healthy
> Java application (Zorn; see Resources).«
That's also a bit of advertising. I really wouldn't call an
interpreter a "typical" program. For that matter, I don't even
know if there are typical programs, C++ is used for so many
different things. (In numeric processing, for example, it's
quite possible for a program to run hours without a single
allocation.)
There's an old saying: don't trust any benchmark you didn't
falsify yourself. Garbage collection is a tool, like any other.
Sometimes (a lot of the time) it helps. Other times it doesn't.
If my experience is in any way typical (but it probably isn't),
its impact on performance is generally negligible, one way or
the other. It's essential for robustness (no dangling
pointers), but a lot of programs don't need that much
robustness. For the rest, it depends on the application, the
programmer, and who knows what other aspects. It's a shame that
it's not officially available, as part of the language, but I'd
also oppose any move to make it required.
--
James Kanze
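The claim above that *allocation* is very fast in a copying collector comes down to bump-pointer allocation: one add plus one limit check per request. A toy sketch (fixed heap size, 8-byte alignment, and the names are my assumptions; the collection step that pays the real cost is omitted):

```cpp
#include <cassert>
#include <cstddef>

// Bump-pointer allocation: a copying collector can allocate like this
// because compaction keeps the free region contiguous. The deferred
// cost is the collection itself, which this sketch does not model.
constexpr std::size_t kHeapSize = 4096;
alignas(8) unsigned char heap_area[kHeapSize];
std::size_t bump = 0;

void* bump_alloc(std::size_t n) {
    n = (n + 7) & ~std::size_t{7};            // round up to 8 bytes
    if (bump + n > kHeapSize) return nullptr; // a real GC would collect here
    void* p = heap_area + bump;
    bump += n;                                // the whole "allocator"
    return p;
}
```

A general-purpose malloc must instead search free lists or size bins, which is where the "10 instructions vs. 60-100" contrast in the quoted article comes from.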
[...]
> Allocation is not where GC fails, rather deallocation....
It doesn't fail there, either. But any comparison should take
deallocation into consideration. (Well, formally... there's no
deallocation with garbage collection. But the system must take
some steps to determine when memory can be reused.)
> Because there is no faster and simpler way to perform
> collection than to stop the program, perform collection in
> multiple threads, then let the program work....
Try Googling for "incremental garbage collection".
--
James Kanze
> Deallocation matters for long-running programs.
> A program that is running only a short time might
> never need to actually reclaim memory. Otherwise,
> I agree that this takes some time indeed.
It has to be considered when making comparisons, however...
In general, with most manual memory management schemes, total
time is proportional to the number of blocks allocated and
freed. With the most classical garbage collection algorithm
(mark and sweep), total time is proportional to the total amount
of memory in use when the garbage collector is run. If you're
allocating a lot of small, short lived blocks, then garbage
collection is faster. (It's no accident that the benchmarks
prepared by people favoring garbage collection tend to
manipulate very dynamic graph structures, where nodes are
constantly being allocated and freed.) When threading is
involved, significantly faster, since the typical malloc/free
will use a lock for each call. (There are, of course, faster
implementations of malloc/free available. The fastest I know of
for a multiple threaded environment in fact uses some of the
techniques of garbage collection, at least for memory which is
freed in a thread different from the one it was allocated in.)
--
James Kanze
> >> Deallocation matters for long-running programs.
> >> A program that is running only a short time might
> >> never need to actually reclaim memory. Otherwise,
> >> I agree that this takes some time indeed.
> > Hm, you actually suggest that if a program does just a
> > couple of allocations up front and keeps all the objects,
> > comparing the speed of allocation makes sense?
> > Can you provide an application example where Java's
> > allegedly superfast allocation can be noticed?
> You're missing the point. Comparisons of Java and C++ are
> supposed to make Java look good. It doesn't matter whether
> they make sense so long as they meet that goal.
As I said before, never trust a benchmark you haven't falsified
yourself:-). On the other hand, why should a Java proponent
design a benchmark which makes his language look bad. And
since C++ doesn't have any vested interests ready to pay to make
it look good, and the others look bad, there aren't many C++
advocates designing benchmarks. (If someone's ready to pay me,
I'll design you a benchmark. Just tell me which language you
want to win, and it will. I know both languages well enough for
that.)
On the other hand, the fact that there are a large number of
Java applications which run and are sufficiently fast is more or
less a proof that performance isn't (always) a problem with the
language. (The fact that they almost all run on Intel
architectures may indicate that the JVM's available on other
systems aren't all that good.)
> Hmm, could it be that Java proponents are all Republicans?
They're not that bad. (At least, not all of them.) There's a
difference about not presenting the whole picture, and just
lying (see "if Stephen Hawking had lived in Britain").
--
James Kanze
Yep. And the higher level some abstraction is,
the more performance it can gain and the less of even a theoretical
advantage any other approach may claim.
>> > And that is one of central issues with Java.
>
>> Yes.
>> I think java is designed in such way that it will still be slow in
>> comparison to other compiled languages...if it is compiled
>> language.
>
>First, Java is a compiled language, and second, it's not slower
>than any of the other compiled languages, globally. (Specific
>programs may vary, of course.)
And that is exactly what I am seeing in my own situation.
I happen to have looked at some Java source code, and in those
places I looked at, I could not see a way of getting it to have
more performance.
These fools think that those who wrote Java and the various packages
are just lazy bums, while in reality those are the cream of the
crop of programmers, and these fools would probably not have a
chance to pass the interview at Sun, regardless of whether they
are C++ big mouths or what.
This is the CREAM OF THE CROP of nothing less than Silicon Valley,
and most of all what these fools have to say is nothing more than
sucking sounds.
How can that possibly be? GC kills all threads when it has
to collect? Or can it magically sweep through heap, stack, bss,
etc. and scan without locking all the time or stopping the program?
Explain to me how?
Manual deallocation does not have to lock at all....
>
> [...]
>>> Except that would require an equivalent of a virtual machine
>>> underneath.
>
>> virtual machine is also heavy performance killer...
>
> Which explains why some of the leading experts in optimization
> claim that it is necessary for the best optimization. (I don't
> fully buy that claim, but a virtual machine does have a couple
> of advantages when it come to optimizing: it sees the actual
> data being processed, for example, and the actual machine being
> run on, and can optimize to both.)
The best optimization is when you can manually control memory management
and have access to the hardware directly. Everything else
is algorithmic optimization... which can be done in any language.
>
>>> And that is one of central issues with Java.
>
>> Yes.
>> I think java is designed in such way that it will still be slow in
>> comparison to other compiled languages...if it is compiled
>> language.
>
> First, Java is a compiled language, and second, it's not slower
> than any of the other compiled languages, globally. (Specific
> programs may vary, of course.)
Java is a compiled language in the sense that any interpreted language
is run-time compiled... but that does not make those languages
compiled...
For example, PHP with popen calling a C executable is, in my experience,
about three times faster as a server than Jetty/Solr,
for example...
Greets
I don't have to believe. It is pretty much self evident.
Why?
Well, because the higher level your abstraction is,
the less impact the language has. Because you have a virtual machine
underneath that can do anything you please, and as efficiently
as anything else under the sun.
Basically, you are running machine code at that level.
About the only thing you can claim is: well, but what are those
additional calls? Well, yep, there IS a theoretical overhead.
But once you start looking at the nasty details of it, it all
becomes pretty much a pipe dream.
There are ALL sorts of things that happen under the hood, and
in plenty of cases, your low level details become insignificant
in the scheme of things.
Simple as that.
>Greets
Virtual machines are always slower than real machines....
No matter what, one can optimize only
simple cases. Anything non-trivial
is very difficult to optimize, like Java code...
Greets
> [...]
>> And somebody tried to convince me that conservative GC is
>> faster than shared_ptr/auto_ptr (what a ....;)
>
> And you refused to even look at actual measurements. I'm aware
> of a couple of programs where the Boehm collector significantly
> out performs boost::shared_ptr. (Of course, I'm also aware of
> cases where it doesn't. There is no global perfect solution.)
Hm, how can a complex algorithm possibly outperform simple
reference counting? Try to measure deallocation speed.
Allocation in GC is the same as manual allocation. But
deallocation is where it performs a complex algorithm.
Greets
Looks appealing in the local scope of things.
If you get too obsessed with trying to save some machine cycles,
then yes, you do have a point.
The problem is that in an application of any complexity even worth mentioning,
you are no longer dealing with machine instructions, however appealing
that might look.
You are dealing with SYSTEMS.
You are dealing with structures and higher-level logical constructs.
By simply changing your architecture, you may achieve orders of
magnitude more performance. And performance is not the only thing
that counts in the real world, although it probably counts
more than other things.
Except stability.
And the other things are functionality, flexibility, configurability,
the power and clarity of your user interface (which turns out to be
one of the most important criteria), and plenty of other things.
Yes, if you think of your code as an assembly-level set of instructions,
and no matter which instruction you are looking at, you are trying
to squeeze every single machine cycle out of it, then you are not
"seeing the forest for the trees".
What I see using my program is not how efficient some subcomponent
is, but how many hours it takes me to process vast amounts
of information. I couldn't care less whether GC exists, except that it helps me
more than it creates problems for me, and I don't even need to
prove it to anybody. It is self evident to me. After a while, you
stop questioning certain things if you have seen a large enough history.
What is the point of forever flipping those bits?
Let language designers think about these things, and I assume they
have done as good of a job doing it, as state of the art allows,
especially if they are getting paid tons of money for doing that.
I trust them. I may not agree with some things, and my primary
concerns nowadays are not whether GC is more or less efficient,
but how fast I can model my app, how easy it is to do that,
how supportive my IDE is, how powerful my debugger is, how easy it
is for me to move my app to a different platform, and things
like this.
You can nitpick all you want, but I doubt you will be able to
prove anything of substance by doing that kind of thing.
To me, it is just a royal waste of time. Totally unproductive.
>>>> And that is one of central issues with Java.
>>> Yes.
>>> I think java is designed in such way that it will still be slow in
>>> comparison to other compiled languages...if it is compiled
>>> language.
>>
>> First, Java is a compiled language, and second, it's not slower
>> than any of the other compiled languages, globally. (Specific
>> programs may vary, of course.)
>
>Java is a compiled language in the sense that any interpreted language
>is run-time compiled...
Not true.
>but that does not make those languages
>compiled...
Java IS compiled. Period.
Would you argue with the concept of a P-Machine on the basis that
it is "interpretive", just because it uses a higher-level
abstraction, sitting on top of the O/S?
Java does not evaluate strings at run time, and it is a strongly
typed language, and that IS the central difference between
what I call dynamically scoped languages and statically
scoped languages.
It does not matter to me if Java runs bytecodes or P-Machine
code. It is just another layer on top of the O/S, and that layer,
by the sheer fact that it is a higher-level abstraction,
can optimize things under the hood MUCH better than you can
optimize things with languages with lower levels of abstraction.
For some reason, people have gotten away from coding in
assembly languages for most applications.
This is exactly the same thing.
What is the difference between C++ and C?
Well, the ONLY difference I know is higher level of abstraction.
And that is ALL there is to it.
The same exact thing as Java using the JVM to provide it the
underlying mechanisms, efficient enough and flexible enough
for you to be able to express yourself on a more abstract level.
And that is ALL there is to it.
And why do you think weakly typed languages are gaining ground?
Well, because you don't have to worry about all those nasty
things such as arguments. They can be anything at run time.
And nowadays, the power of the underlying hardware is such
that it no longer makes such a drastic difference whether you
run a strongly typed, compiled language or interpret it on
the fly, even though performance is orders of magnitude worse.
You need to put things in perspective.
What does it matter to me if web page renders in 100 ms.
versus 1 ms.?
NONE.
My brain can not work that fast to read anything in those 99 ms.
anyway.
I think the whole argument is simply a waste of time, or rather,
a waste of creative potential that could be used for something
WAY more constructive and WAY more "revolutionary".
>For example php with popen calling c executable in my experience
>is about three times faster as a server than jetty/solr
>for example...
Well, if you use even PHP as some kind of argument, then you
have obviously not seen the forest. Because PHP is one of the
worst dogs overall. Because it is a weakly typed language.
Even Python beats it hands down.
No. The catch is not in PHP; rather, the C executable for every request
initializes about 256 MB of RAM of data every time and
uses simple printfs to return the result to PHP through a pipe,
and performs three times faster than Java Jetty/Solr, which
holds everything initialized in memory...
as a search engine...
This is a blanket statement by someone who is obsessed with
machine cycles while his grand piece of work is not even worth
mentioning, I'd say.
>No matter what, one can optimize only
>simple cases. Anything non-trivial
Correct.
>is very difficult to optimize, like Java code...
I don't have to optimize Java code in any special way.
It is the same way no matter WHAT language it is.
One more time: to me, program is a SYSTEM.
And the MOST critical parameter in the system is:
STABILITY.
Why? Because if your program is not stable, you are dead.
Yes, everyone wants performance. No question about it.
You don't want to sit there for 30 seconds waiting for your
frozen GUI to get unfrozen so you can enter some parameters
or type something somewhere.
And the reason it is frozen for that long of a time is not
matter of machine instructions or the "efficiency" of your
code. It is a matter of TOTALLY wrong design.
A program is not just a hack and tons of "efficient" spaghetti
code. It is a HIGHLY complex system with billions of interactions,
and MANY subsystems cooperating under the hood.
It is not some fancy hex calculator where you flip some bits.
Unless programs are viewed as a system, you will be trying
to pick a piece of crap from some output hole and look at it
with a magnifying glass, trying to draw conclusions about
the human being.
Looks like it is a matter of life and death to you.
But I doubt you can win this argument.
>> [...]
>>> And somebodey tried to convince me that conservative GC is
>>> faster that shared_ptr/auto_ptr (what a ....;)
>>
>> And you refused to even look at actual measurements. I'm aware
>> of a couple of programs where the Boehm collector significantly
>> out performs boost::shared_ptr. (Of course, I'm also aware of
>> cases where it doesn't. There is no global perfect solution.)
>
>Hm, how can a complex algorithm possibly outperform simple
>reference counting? Try to measure deallocation speed.
>Allocation in GC is the same as manual allocation. But
>deallocation is where it performs a complex algorithm.
And so it goes "till your nose goes blue"...
:--}
Well, I started and stopped the application which controlled Shanghai
airport back in 1993. Two Stratus engineers besides me; I worked
on 4 terminals in emacs with the C language and the VOS operating
system...
I was hired by Stratus then as an expert in the C programming language...
Greets
--
http://.....
I DO like that one. What a master stroke!
:--}
>>> Because there is no faster and simpler way to perform
>>> collection than to stop the program, perform collection in
>>> multiple threads, then let the program work....
>>
>> Try Googleing for "incremental garbage collection".
>>
>Incremental garbage collection is a form of collection where
>you don't free everything immediately, but this does not
>change the fact that whenever you have to see if something is referenced
>or not, you have to stop the program and examine the pointers,
Yes. This IS becoming a matter of life and death, it seems.
:--}
>which of course kills the performance of threads...
>
>Greets
--
Ok, I give, Merry Christmas! ;)
>
Greets
--
http://.......
Don't know what you mean by that, but yes, sounds impressive.
> Two Stratus engineers besides me; I worked
> on 4 terminals in Emacs with the C language and the VOS operating
> system...
Wooo! That's definitely impressive.
Good. Then fix C++ so I can go back to it.
After all, it is one of the first "higher level" languages
I had to deal with. It kinda haunts you...
>I was hired by Stratus then as an expert for the C programming language...
Good. I don't remember what Stratus stands for, but I do recall
hearing of it somewhere on a more or less big scale. What did they do?
Well, I'd be curious to see more specifics on this.
You do not have to give. Otherwise, what are we going to do here?
:--}
I'd just like to see a somewhat more elegant argument.
I thought I gave up ;), Merry Christmas, again ;)
>
>> Greets
> Java is a compiled language in the sense that any interpreted language
> is run-time compiled... but that does not make those languages
> compiled...
What about JIT compilation?
LR
Greets
If that counts, then it is a compiled language...
Greets
> Yes, of course, it all starts with the fact, that one cannot
> compare the speed of languages but only the speed of
> specific programs running under a specific /implementation/
> of a language running under a specific operating system
> running on a specific hardware.
Yes. And the fact that any given implementation (and any given
language, for that matter) will have its strong points and its
weak points. If you want a language to look good, you write to
its strong points, and to the other languages' weak points.
> What is language-specific is only the fact that some
> language features make some kinds of optimization possible
> (like restrict in C) or impossible (e.g., when aliasing by
> pointers is possible).
There's possible and impossible, but there's also
difficulty. There are C++ compilers, for example, which use
profiler output, and if it makes a difference, generate two
versions of a function, depending on whether there is aliasing
or not, with a quick test at the top to decide which one to use.
But it's a lot more effort than in Java. Similarly, given an
array of Point (where Point is basically 2 doubles), it's
perfectly conceivable that a Java compiler treat it as an array
of double[2]. But it's a lot more work, and a lot less likely,
than for a C++ compiler.
> So, if I had to implement some algorithm, I would not refuse
> Java from the first, because it is slow , but do some
> benchmarking with code in the direction of that algorithm.
Exactly. Most of the time, what eliminates Java is that it
doesn't support really robust programming.
> After all, /if/ Java is sufficiently fast for my purpose,
> it gives me some conveniences, such as run-time array index
> checking, automatic memory management and freedom from
> the need for (sometimes risky) pointer arithmetics.
It also pretty much makes programming by contract impossible,
requires implementations of concrete classes to be in the same
file as the class definition, and does a number of other things
which make programming in the large difficult.
That said, it's not a bad language for small non-critical
applications. And it has a pretty nice GUI library, and is well
integrated in web server environments.
--
James Kanze
[...]
> >> virtual machine is also heavy performance killer...
> >Which explains why some of the leading experts in
> >optimization claim that it is necessary for the best
> >optimization. (I don't fully buy that claim, but a virtual
> >machine does have a couple of advantages when it come to
> >optimizing: it sees the actual data being processed, for
> >example, and the actual machine being run on, and can
> >optimize to both.)
> Yep. And the more high level some abstraction is, the more
> performance it can gain and the less of even theoretical
> advantage any other approach may claim.
More generally, the more information a compiler has, including
information concerning why some operation is taking place, the
better it can optimize. It's pretty well established that when
the language has built in bounds checking (so the compiler knows
the why of the comparisons), the compiler can generate better
code.
In the case of a VM, of course, the compiler has very exact
knowledge about the input data, the frequency of the various
paths, and the CPU it is running on. All of which are important
information. Where I have my doubts is that in a VM, the
compiler is severely limited in the time it can take for its
analysis; if an optimizing compiler takes a couple of hours
analysing all of the variations, fine, but in a VM? But since
I'm not an expert in this field, I don't really know.
> >> > And that is one of central issues with Java.
> >> Yes.
> >> I think java is designed in such way that it will still be
> >> slow in comparison to other compiled languages...if it is
> >> compiled language.
> >First, Java is a compiled language, and second, it's not
> >slower than any of the other compiled languages, globally.
> >(Specific programs may vary, of course.)
> And that is exactly what I am seeing in my own situation.
The few measurements I've made would bear you out. I'm sure
that there are applications where C++ will be faster, and there
are probably some where Java will be faster, but for most
applications, C++ is chosen not for speed, but because it has
greater expressibility.
--
James Kanze
[...]
> >Virtual machines are always slower then real machines....
> This is a blanket statement by someone who is obsessed with
> machine cycles while his grand piece of work is not even worth
> mentioning, I'd say.
Above all, it's a statement made by someone who's never made any
actual measurements, nor spoken to experts in the field.
--
James Kanze
How do you know that?
Greets
No. Java is chosen because it is a simplified language which
anyone can learn and maintain in one month. C++ is an ugly
and complex language with a lot of traps which
requires several years and tears to learn.
Java sacrifices a lot of things, but it is a faster language
in the sense that a programmer can use it effectively and
write working programs in much shorter time than in C++.
Simple as that. While C++ has more tools and power
as a language, once you've learned it you can do better
than in Java. But a lot of programmers are not capable
of producing code in time, therefore Java wins.
Greets.
Not necessarily. But I am impressed already.
>I can't tell you that...
But can you tell me more specifics on how exactly things
differed so drastically?
Interesting subject.
Not sure if it is that simple to argue this expressibility point.
Enough to say that one programmer thought to replace a PHP
part with a Java server, and of course PHP was faster.
He didn't figure that out either...
Greets
--
http:
Wow. That bites. I'd be curious to see some specifics on this.
>> After all, /if/ Java is sufficiently fast for my purpose,
>> it gives me some conveniences, such as run-time array index
>> checking, automatic memory management and freedom from
>> the need for (sometimes risky) pointer arithmetics.
>
>It also pretty much makes programming by contract impossible,
>requires implementations of concrete classes to be in the same
>file as the class definition,
And THAT is a ROYAL drag. No questions about it.
> and does a number of other things
>which make programming in the large difficult.
Well... I don't know what kind of things you are talking about.
>That said, it's not a bad language for small non-critical
>applications.
Except it is routinely used in MASSIVELY scaled apps
in banks, on Wall Street, etc. I just see those guys too often.
> And it has a pretty nice GUI library, and is well
>integrated in web server environments.
--