I was reading Richter's Windows Programming book, and he mentions that
it is better to avoid the ExitProcess(), TerminateThread(),
ExitThread() kind of calls. He says that even though the OS resources
and kernel objects are destroyed and cleaned up, the C/C++ objects will
not be cleaned up.
Question: how are the C/C++ objects different from the OS resources?
Whatever memory occupied by these objects is going to be taken from
the process's address space, right? And when the process ends, the
corresponding address space is going to be destroyed anyway...
thanks,
Driz
"DRIZAII" <dri...@yahoo.com> wrote in message
news:d17b26b5.04062...@posting.google.com...
> Just to keep this question philosophical: a C++ object is an OS resource
> plus an attached behavior.
> While it's completely true that the OS will reclaim handles, memory and
> other OS resources at the termination of the executable, it's also true
> that ExitProcess will not invoke the destructors of global/static objects,
> and other out-of-band termination routines will not call the destructors
> of local objects either.
> If one of your C++ objects has a destructor that writes a record to a
> database, then an out-of-band termination will leave your database
> incomplete, even if the OS correctly reclaims (for example) the open
> handles to the file where the database is hosted.
> You have to read the warning "the C++ objects will not be cleaned up"
> as "their destructors will not be called".
> The actual breakdown of what exactly happens upon process termination
> and thread termination is quite a bit more complicated, but the
> statement gives the general idea.
Long story short: it is a BAD habit to do nontrivial things other than
reclaiming ancillary memory in destructors. You'd do better to put them
in an explicit method, i.e. a close() of some sort.
Giancarlo Niccolai.
Huh? How exactly is close() better than a destructor in the case of
ExitProcess()? If you have external resources that can become corrupted when
some sequence of operations is not completed (like a DB), then you need a
transactional system. All modern DBs can do that. Create the transaction in
the constructor and abort it in the destructor unless Commit() has been
called. Ideally Commit() should call 'delete this' or similar to prevent the
object from living on in a zombie state.
In general destructors are always better than an explicit close() method.
They free you from manual resource management and dealing with zombie-state
objects.
--
Eugene
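The transactional pattern Eugene describes might be sketched as follows. This is a minimal illustration, not real database code: the Outcome enum and the Transaction class are hypothetical stand-ins for a real connection's BEGIN/COMMIT/ROLLBACK:

```cpp
#include <cassert>

// Observable result of the "transaction", standing in for a real DB.
enum class Outcome { Pending, Committed, RolledBack };

// RAII guard: the constructor begins the transaction, Commit() makes it
// durable, and the destructor rolls back anything left uncommitted --
// on early return, exception, or plain forgetfulness.
class Transaction {
public:
    explicit Transaction(Outcome& out) : out_(out) { out_ = Outcome::Pending; }

    void Commit() {
        committed_ = true;
        out_ = Outcome::Committed;
    }

    ~Transaction() {
        if (!committed_)
            out_ = Outcome::RolledBack;  // abort unless Commit() was called
    }

private:
    Outcome& out_;
    bool committed_ = false;
};
```

Whatever path leaves the scope, the external state ends up either committed or cleanly rolled back; there is no half-done middle ground.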
> Long story short: it is a BAD habit to do nontrivial things other than
> reclaiming ancillary memory in destructors. You'd do better to put them
> in an explicit method, i.e. a close() of some sort.
Then what happens when these objects go out of scope? Or are you
suggesting that we be super careful to always call 'close' before an object
goes out of scope? What happens if we call another member function after
calling 'close'? Must we be super careful to never do that? Or should each
member function make sure the object is properly open?
You're suggesting giving up one of the significant features of C++.
DS
You should still clean up stuff in your destructor if the client fails to
invoke a close()-like call, but it's much harder to do error-handling then.
[...]
> You should still clean up stuff in your destructor if the client fails to
> invoke a close()-like call, but it's much harder to do error-handling
> then.
Any cleanup code that throws is broken. Because if it throws after cleanup
is finished, it is pointless. If it throws before, it is useless. And if it
throws in the middle, it is more than useless.
S
Exactly. For the same reason functions like WSACleanup() should return void.
There is absolutely nothing one can do if it fails other than abort.
In general, an operation that can fail _and_ whose failure could reasonably
be handled by the user is _not a cleanup_ and does not belong in a
destructor.
--
Eugene
> > You should still clean up stuff in your destructor if the client
> > fails to invoke a close()-like call, but it's much harder to do
> > error-handling then.
> Any cleanup code that throws is broken. Because if it throws after
> cleanup is finished, it is pointless. If it throws before, it is
> useless. And if it throws in the middle, it is more than useless.
Then how do you suggest handling errors from functions such as close()
(e.g., errors due to buffer flushing)? Simply ignore them?
close() isn't a cleanup-only function, so suggesting that such a function
should be in a destructor and not called explicitly may not always be a good
idea.
> Then how do you suggest handling errors from functions such as close()
> (e.g., errors due to buffer flushing)? Simply ignore them?
Yes. An application that is not prepared to handle that error will not be
able to handle it anyway. How useful is it when the buffer is not flushed
AND the application has crashed?
An application that _is_ prepared to handle that error may call flush() or
perhaps open the file in a write-through mode in the first place.
> close() isn't a cleanup-only function, so suggesting that such a function
> should be in a destructor and not called explicitly may not always be a
> good idea.
Having such a function leads only to trouble and clumsy code. What is the
state of the object when close() has been called? What can you do to it? Yet
because the object is still there, it may still be called, and the compiler
will not be able to detect that -- which means a responsible implementor of
the class will have to devote half his time to sanity checks. The users of
the class will either have to call close() in each place where processing
must be aborted due to an error, instead of simply returning -- or they will
have to provide a wrapper class that will call close() in its destructor --
and we're back to the original problem.
S
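Slava's position (destructor does unconditional, no-throw cleanup; a caller that actually cares about flush errors asks for them explicitly) could be sketched like this. The File class is a hypothetical minimal wrapper around C's FILE*, not any standard component:

```cpp
#include <cassert>
#include <cstdio>
#include <cstring>   // strcmp, used in the usage check

// Minimal RAII wrapper around a close()-style C API.
class File {
public:
    File(const char* path, const char* mode) : f_(std::fopen(path, mode)) {}
    ~File() { if (f_) std::fclose(f_); }     // best effort; errors ignored
    File(const File&) = delete;              // one owner per handle
    File& operator=(const File&) = delete;

    bool ok() const { return f_ != nullptr; }
    bool write(const char* s) { return f_ && std::fputs(s, f_) != EOF; }
    // Explicit, checkable flush: errors are reported where they can
    // actually be handled, not swallowed inside the destructor.
    bool flush() { return f_ && std::fflush(f_) == 0; }

private:
    std::FILE* f_;
};
```

A caller that needs durability calls flush() and inspects the result; everyone else gets guaranteed, silent closing on every exit path.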
Just to add to Slava's reply: the probability of an error due to flushing on
close() is close to zero, while the probability of an error due to calling a
function in an incorrect object state, or forgetting to call close(), is
about 100%. Any solution that eliminates the _possibility_ of the second
case is way more important. Only _after_ you have dealt with the larger
problem does it make sense to deal with the flushing issue. For most desktop
applications simply ignoring it can be OK. Other systems can live with
manual flush()-ing surrounded by error handling. For really critical data,
simple file storage is not appropriate at all. You will need a robust
database to ensure data integrity.
--
Eugene
Yes. What are you supposed to do if it fails?
if (!close())
/* clean up */
How do you clean up without calling close?
Or, in C++ terms: resource acquisition is initialisation. Therefore resource
release is uninitialisation. If your object has gone away (i.e. the
destructor has returned), the resources have been released (i.e. the
destructor has succeeded). If an object's destructor fails, how are you
supposed to recover? You're left with a half-initialised object, which
shouldn't happen.
--
Tim Robinson (MVP, Windows SDK)
http://mobius.sourceforge.net/
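Tim's point (release belongs in the destructor because it then happens on every exit path, including exceptions) can be shown with a small sketch. Resource and the acquired counter are hypothetical stand-ins for any real handle, lock, or connection:

```cpp
#include <cassert>
#include <stdexcept>

// Counts currently-held "resources"; a stand-in for real bookkeeping.
static int acquired = 0;

struct Resource {
    Resource()  { ++acquired; }   // resource acquisition is initialisation
    ~Resource() { --acquired; }   // release mirrors acquisition exactly
};

void risky() {
    Resource r;
    throw std::runtime_error("oops");  // release still happens during
                                       // stack unwinding; no leak
}
```

With a manual close() instead, the throw would skip the cleanup line entirely; with RAII there is no code path on which it can be skipped.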
Arkady
"DRIZAII" <dri...@yahoo.com> wrote in message
news:d17b26b5.04062...@posting.google.com...
> Having such a function leads only to trouble and clumsy code. What is the
> state of the object when close() has been called? What can you do to it?
Either "closed" if the close was successful, "temporary error" if the
media was temporarily unreachable, or "hard error" if things went
badly.
We are talking not only about files, but also "in general". In general, when
you may want to do things with the object after having "stored" it, or when
you have "non-trivial" close operations (e.g. closing in sequence a set of
files and their indexes), doing it ONLY from a destructor may not be a good
idea.
Giancarlo.
> Arnold Hendriks wrote:
>> Then how do you suggest handling errors from functions such as
>> close() (eg, errors due to buffer flushing) ? Simply ignore them?
>
> Yes. What are you supposed to do if it fails?
>
> if (!close())
> /* clean up */
>
> How do you clean up without calling close?
>
if (! database.close() ) {
... see what indexes have failed...
... make a backup copy of the important data...
}
(Something like) this is also done in, e.g., OpenOffice, when closing a
document causes a segmentation fault or a hard error.
Giancarlo
>
> "Giancarlo Niccolai" <no...@nothing.it> wrote in message
> news:cbub8t$ot2$1...@fata.cs.interbusiness.it...
>
>> Long story short: it is a BAD habit to do nontrivial things other than
>> reclaiming ancillary memory in destructors. You'd do better to put
>> them in an explicit method, i.e. a close() of some sort.
>
> Then what happens when these objects go out of scope? Or are you
> suggesting that we be super careful to always call 'close' before an
> object goes out of scope?
Yes.
> What happens if we call another member function
> after calling 'close'? Must we be super careful to never do that? Or
> should each member function make sure the object is properly open?
>
> You're suggesting giving up one of the significant features of C++.
Uhm. Cf. Stroustrup's manual (The C++ Programming Language). I suppose he
doesn't want to give up any of the significant features of C++ (as he
designed it), but he suggests following the STL approach of, e.g., closing a
file with an explicit close() call, and not from its destructor.
So, don't blame me for that. It's just that you shouldn't rely on the
destructor to do delicate things such as flushing buffers and closing files.
You can call a close() method from the destructor if you detect that your
file is still open, but you should not put the code to flush the thing and
close it ONLY inside the destructor; you should put it in a METHOD that is
EVENTUALLY called from the DESTRUCTOR when needed. This is said to be a good
design technique by people far more expert than me...
Giancarlo Niccolai.
Sorry, I meant "to also give the ability to control any non-trivial close
and cleanup operations from a method, and not ONLY from its destructor".
Giancarlo.
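The design Giancarlo settles on (flush-and-close logic in an explicit close() whose result the caller can inspect, with the destructor merely invoking it as a fallback) could be sketched like this. Store and its flush_fails flag are hypothetical stand-ins for real I/O and its failure modes:

```cpp
#include <cassert>

class Store {
public:
    explicit Store(bool flush_fails = false) : flush_fails_(flush_fails) {}

    // Explicit close: callable by the user, who can inspect the result
    // and react (retry, back up indexes, warn). Idempotent, so the
    // destructor's fallback call after a manual close is harmless.
    bool close() {
        if (!open_) return true;   // already closed: nothing to do
        open_ = false;
        return !flush_fails_;      // pretend to flush; report the outcome
    }

    ~Store() { close(); }          // fallback only; result necessarily ignored

private:
    bool open_ = true;
    bool flush_fails_;
};
```

Callers who care about close-time errors call close() themselves and check the result; callers who forget still get a best-effort cleanup from the destructor, which is exactly the compromise both sides of this thread converge on.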