If we do not (strongly) advocate RDID it seems that disposal of the
resource in the destructor is merely a safety measure.
Ideas?
--
Martin Fabian http://www.s2.chalmers.se/~fabian/
--
Ask enough experts. Eventually you'll get the answer you want.
/* Remove NOSPAM from reply-to address to mail me */
[ Send an empty e-mail to c++-...@netlab.cs.rpi.edu for info ]
[ about comp.lang.c++.moderated. First time posters: do this! ]
I would say that resource disposal in the destructor should be approached
somewhat like preallocation in a std::vector<>. You can use a vector<>
without worrying about how much to allocate. Whenever you need a new
element, just push_back(). But if you know how many elements you are going
to need, you can reserve that space ahead of time with a method call. This
reservation doesn't impact the behavior of the rest of the program at all.
You still use push_back() to add items to the vector; you just don't have to
worry about excessive reallocations. If, for any reason, you guessed wrong,
everything still works; it's just less efficient.
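A rough sketch of that analogy (the function name is invented for illustration):

```cpp
#include <cstddef>
#include <vector>

// Sketch of the analogy above: reserve() is an optional optimization
// hint; push_back() behaves identically with or without it, and a
// wrong guess only costs efficiency, never correctness.
std::vector<int> fill(std::size_t n, bool preallocate)
{
    std::vector<int> v;
    if (preallocate)
        v.reserve(n);   // optional; the rest of the code is unchanged
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(static_cast<int>(i));
    return v;
}
```

Either way the resulting vector is the same; only the number of reallocations differs.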
Likewise with resources. If you choose to, you can provide a method to
deallocate resources early, to let the system recycle them sooner. However,
if the user chooses not to use this facility, the destructor should
correctly handle the resources. I would say 99% of the time, the destructor
will end up handling it anyway; there isn't often much call for returning a
resource distinct from its encapsulating object.
If someone tries to use a resource after it's been returned, your class can
either reacquire the resource, or if it's not available or semantically
incorrect, you can throw an exception. But in any event you get
fundamentally correct resource management. I would also stress that this
should be optional, both for the writer and for the user of the class.
Writing a class that encapsulates a resource shouldn't mandate providing
early release functionality. Likewise, using a class that provides such
functionality shouldn't mandate that it be used to prevent leaks.
An example of this philosophy is the behavior of fstream. You often
allocate a resource (a file) when the constructor is invoked. However, you
don't have to wait for the destructor to be called to free said resource;
you can call close(). Nevertheless, if you fail to call close() yourself,
the resource is still properly managed.
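A minimal sketch of that fstream behavior (the function and file names are made up):

```cpp
#include <fstream>

// close() is an optional early release: if it is never called, the
// ofstream destructor still releases the file handle correctly.
void write_report(const char* path)
{
    std::ofstream out(path);    // resource acquired here
    out << "report body\n";
    out.close();                // optional: free the handle early
    // ... lengthy unrelated work could go here, with the file
    //     handle already recycled ...
}                               // without close(), the dtor frees it here
```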
In some cases it makes sense for an object to guarantee that it "owns"
a resource for its entire lifetime. You might call this "pure" RAII, in
that the object always acquires the resource in its constructor (or throws
if it can't), and does not release the resource until the destructor
executes. This has the advantage of simplifying class invariants by
eliminating the possibility of a "null" state. The ability to assume the
resource exists could simplify the destructor as well as other code.
That said, what I'm calling "pure" RAII might not be the best design for
some classes. For one thing, if failure to acquire a particular type of
resource is not truly exceptional, then throwing might not be the right
way to report such an error; instead, the constructor would need to
place the object in a "null" or "error" state. In addition,
forcing the resource lifetime to coincide with the object lifetime might
be overly restrictive in some cases.
The basic_ifstream class illustrates both issues. Failure to open a file
is not really exceptional so it makes more sense for the client code to
detect this case by calling is_open() instead of catching an exception.
In addition, one might usually be content to let the opening and closing
of the file coincide with the object's lifetime, but occasionally it is
useful to defer opening the file or to close the file early.
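For instance, a non-exceptional open failure can be detected like this (the helper name is invented):

```cpp
#include <fstream>
#include <string>

// Failure to open is an expected outcome, so the stream reports it
// through is_open() rather than by throwing.
bool first_line(const char* path, std::string& out)
{
    std::ifstream in(path);
    if (!in.is_open())          // normal, non-exceptional failure path
        return false;
    return static_cast<bool>(std::getline(in, out));
}
```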
> If we do not (strongly) advocate RDID it seems that disposal of the
> resource in the destructor is merely a safety measure.
I would say rather that RAII provides exception-safety and simplifies
the usual case where acquisition and disposal of a resource coincide
with the lifetime of an object; however, that doesn't mean a class
shouldn't also provide explicit open/close (or similar) methods if they
are sometimes useful.
> We all know the benefits of RAII and why you should use it (we on this
> list at least, see
> http://www.eclipse.org/articles/swt-design-2/swt-design-2.html for fuel
> for your discussions with your Java friends). However, is it also the
> case that we should go for RDID (Resource Disposal Is Destruction)? That
> is, should we avoid/disallow "premature" resource disposal and only
> allow disposal through the destructor? For one thing, this saves keeping
> track of whether the resource should or should not be disposed of in the
> destructor. Are there other benefits? Drawbacks?
I have always assumed that RAII meant what you call RDID. I believe most
other programmers would concur.
I recently made a proposal for an RAII construct for Java on the
comp.lang.java.machine NG. It generated some more fuel for discussion,
and responses from a number of Java programmers who had little or no
idea what I was proposing or why.
Reading the referenced paper, I am reminded of one of my biggest objections to
Java - the lack of a destructor. I realize that C++ and Java are different
languages with different goals, but I find the lack to be curious.
As to your question - RDID in my view is a 1:1 mapping with RAII. If I
acquire in the constructor, then I should release in the destructor.
I may want to explicitly close the resource sooner, but the destructor
should always (enforce) clean up.
In C++ we have constructors and destructors - Use Them. :-)
sdg
One of the ideas around RAII is that having the handle means having the
resource. If you didn't get the resource, the handle ctor would have
thrown. Of course, to keep this relation, you may never dispose of the
resource without disposing of the handle. And that means RDID.
Another way to look at it: having a release() function isn't sensible if
you don't have an obtain() function, to reuse the handle. And obtain()
functions are evil. So release() functions are evil, too. Which means RDID.
I think this is a general guideline: everything that must be done in
two steps should be done by a ctor/dtor pair. Header/footer for a
document: use the ctor to write the header, and the dtor to write the
footer.
Corollary: if the second step can fail, it could be done in one step.
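A minimal sketch of the header/footer idea (the class name is invented here):

```cpp
#include <ostream>
#include <sstream>

// The ctor performs the first step (write the header) and the dtor
// performs the second (write the footer), so the pair can never be
// left half-done on an early return or an exception.
class Section
{
public:
    explicit Section(std::ostream& os) : os_(os) { os_ << "<header>\n"; }
    ~Section()                                   { os_ << "<footer>\n"; }
private:
    std::ostream& os_;
};
```

Usage: `{ Section s(os); os << "body\n"; }` emits header, body, and footer in order, even if the body code throws.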
Regards,
--
Michiel Salters
Consultant Technical Software Engineering
CMG Trade, Transport & Industry
Michiel...@cmg.nl
> I recently made a proposal for an RAII construct for Java on the
> comp.lang.java.machine NG. It generated some more fuel for discussion
> and a number of Java programmers who had little or no idea what the
> purpose of what I was proposing was about.
I've tried that as well. Most seem happy with littering their code with
finally statements to clean up after themselves. Unfortunately many
former C++ programmers get tricked into thinking that they can use the
finalize method for this. Fortunately, that myth is now pretty much
dispelled.
I still get a chuckle when people tell me that Java is so much like C++
and it's easy to switch to.
> Another way to look at it: having a release() function isn't
> sensible if you don't have an obtain() function, to reuse the
> handle. And obtain() functions are evil. So release() functions are
> evil, too. Which means RDID.
I agree with you on obtain()'s evil. I used to agree on release() too,
but recently found an exception to your rule: scoped locking.
Consider a by-value "channel" or queue intended for concurrent use. If
you keep to the spirit of std::stack and std::queue and maintain the
top()/pop() exception-safe split, you need a "read lock" on the
queue's head that allows exclusive reads with the option to pop the
head upon lock disposal. (Simply: atomic read/pop without copying
values.)
That's not too hard to write. The "read lock" constructor waits until
the queue is non-empty, then acquires the queue's head lock, which it
holds until the lock destructor runs. So long as you hold the "read
lock," you can read the queue's head value and no other
thread can read or pop the head.
Now, suppose that the values stored in the queue are not
default-constructible. How can you copy a value from the head of the
queue that will *live longer than the lock itself*? That is, we want
to release the "read lock" as quickly as possible. It turns out to be
impossible - unless you support a release() function on the "read
lock."
Let's look at an example snippet of a consumer thread's loop. In these
examples, queue_t::front is the "read lock" type.
,----[ Initialize value, dispose scoped lock, use value ]
| queue_t q;
| // ...
| {
| queue_t::front f( q );
| const some_type val = f.get();
| f.set_to_pop( true );
| // f destroyed, lock released
| }
| do_lengthy_processing( val ); // ERROR: val doesn't exist here
`----
We can't do that.
,----[ Declare value, assign, dispose scoped lock, use value ]
| queue_t q;
| // ...
| some_type val; // ERROR: val isn't default-constructible
| {
| queue_t::front f( q );
| val = f.get();
| f.set_to_pop( true );
| // f destroyed, lock released
| }
| do_lengthy_processing( val );
`----
We can't do that either.
,----[ Process value while locked ]
| queue_t q;
| // ...
| {
| queue_t::front f( q );
| do_lengthy_processing( f.get() ); // PROBLEM: working while locked
| f.set_to_pop( true );
| // f destroyed, lock released
| }
`----
Here we're wasting time holding the lock while doing the lengthy
processing. (Unless copying the value is more costly than the liveness
lost to the processing time, but then you probably would have stored
pointers rather than values anyway.)
The solution requires the member function
queue_t::front::release(bool pop = false):
Here's a final example using release():
,----[ Initialize value, release scoped lock, use value ]
| {
| queue_t::front f( q );
| const some_type val = f.get();
| f.release( true ); // popped, lock released
| do_lengthy_processing( val );
| }
`----
One need not call queue_t::front::release() explicitly in all
cases. The destructor will release the lock, but will not pop the
queue. The reason for not popping unless directed is covered in the
recent thread on RAII with checkin/checkout¹.
This scheme depended upon boost::scoped_lock::unlock()². If the Boost
designers stuck to a Draconian scoped lock idiom and didn't provide
unlock(), queue_t::front::release() could not be written. Like
const-correctness, an omission down low compromises all code that
builds upon it.
I can share more of this queue class if you're interested.
Footnotes:
¹ http://groups.google.com/groups?hl=en&threadm=9q1vgm%242170%241%40nntp6.u.washington.edu&rnum=1&prev=/groups%3Fhl%3Den%26selm%3D9q1vgm%25242170%25241%2540nntp6.u.washington.edu
² http://www.boost.org/libs/thread/doc/scoped_lock.html
--
Steven E. Harris :: seha...@raytheon.com
Raytheon :: http://www.raytheon.com
> I still get a chuckle when people tell me that Java is so much like C++
> and it's easy to switch to.
>
Yeah, me too. With only a superficial knowledge of Java, I always
thought that it was like that. Now that I (by force) get to really use
Java, I discover how entirely different the concept really is. I guess
we've all been fooled by the apparent similarity: similar keywords, the
common C heritage, etc. There are lots of minor differences that really
matter. Not least that Java seems to automatically make you use
inheritance for implementation re-use and not "for real".
--
Martin Fabian http://www.s2.chalmers.se/~fabian/
--
Ask enough experts. Eventually you'll get the answer you want.
/* Remove NOSPAM from reply-to address to mail me */
I don't see the necessary relation between release() and obtain(). I
sometimes need to release a resource at different points. This
requires a release(). Also, could you elaborate on the
obtain()-is-evil theme? Almost all of my classes have an init() method
in addition to the constructor. Why is this evil?
On a related note, how do you declare arrays of handles? Does this
mean you forbid the copy-constructor and assignment? How do you pass or
return handles by value?
> Corollary: if the second step can fail, it could be done in one step.
I also don't understand this point and how you propose to implement
this?
Nobody else has mentioned it so I'll chip in...
The problem with the 'pure' RDID approach is that sometimes an
error occurs during the resource disposal phase, and sometimes
that error is one that we should like to report by throwing an
exception.
If we stipulate that destructors should not throw exceptions -
and I think most people agree that we should - we can't report on
such an error (at least, not by our usual throw) if we use the
'pure' RDID idiom.
You're right, though, that RAII does really imply RDID - I'd go
so far as to say that the main /point/ of RAII is that disposal
is destruction (or rather "destruction is disposal" i.e. disposal
occurs automatically on destruction) - it's just tricky to manage
when an error occurs.
There's a discussion going on in another thread at the moment about
ways in which exceptions within exceptions might be handled; it has
some bearing on what we're saying here.
Cheers,
Daniel
[nospam.demon.co.uk is a spam-magnet. Replace nospam with
sonadata to reply]
Your example cries for a generic solution. And, lo and behold, Andrei
Alexandrescu has created one: It's called ScopeGuard.
Take a look at
http://www.cuj.com/experts/1812/alexandr.htm?topic=experts
-- kga
>Nobody else has mentioned it so I'll chip in...
>
>The problem with the 'pure' RDID approach is that sometimes an
>error occurs during the resource disposal phase, and sometimes
>that error is one that we should like to report by throwing an
>exception.
I'm not sure about this. Resources are typically dispensed from
a resource pool. This pool can be empty, but can it overflow?
After all, you can't return more to the pool than you got from it.
In general, on resource disposal, any problems with the resource
are no longer a problem for the resource handle (which no longer
exists) but a problem for the resource pool instead.
Regards,
--
Michiel Salters
Consultant Technical Software Engineering
CMG Trade, Transport & Industry
Michiel...@cmg.nl
Because you create uninitialised objects, both before calling init()
and after release(). These objects have no use, and therefore shouldn't
exist.
>On a related note, how do you declare arrays of handles? Does this
>mean you forbid the copy-constructor and assignment? How do you pass or
>return handles by value?
std::vector<handle>? Of course, when you need to copy handles, you'll
have to take that into account when writing the handle class. But do
you really want to copy a handle? What does that mean, logically, in
your program?
>> Corollary: if the second step can fail, it could be done in one step.
>
>I also don't understand this point and how you propose to implement
>this?
This is a point about why dtors usually shouldn't throw. If the second
step in a process fails, you remain stuck in the intermediate phase. If
this is acceptable, you can hardly call it an intermediate phase.
Regards
--
Michiel Salters
Consultant Technical Software Engineering
CMG Trade, Transport & Industry
Michiel...@cmg.nl
> Your example cries for a generic solution. And, lo and behold,
> Andrei Alexandrescu has created one: It's called ScopeGuard.
Thanks for the suggestion. I have heard of ScopeGuard, but I don't
think ScopeGuard's dismissal facility would work right for my "front"
class. Unlike ScopeGuard, my "front" class does something _active_
upon dismissal. Take a look:
template <typename T>
class queue : boost::noncopyable
{
    typedef boost::mutex mutex_type;
    typedef mutex_type::scoped_lock lock_type;

public:
    typedef T value_type;
    // [...]

    class front : boost::noncopyable
    {
    public:
        front( queue& q )
            : q_( q ),
              lock_( q_.head_mutex_ ),
              next_( q_.wait_for_head() )
        {}

        value_type& get()
        { return next_.value_; }

        const value_type& get() const
        { return next_.value_; }

        void release( bool pop_now = false )
        {
            // Only pop if still locked - should we check or trust the
            // caller not to call release() more than once?
            // if ( lock_.locked() )
            if ( pop_now )
                pop();
            lock_.unlock();
        }

    private:
        void pop()
        { q_.pop_front_i(); }

        queue& q_;
        lock_type lock_;
        node& next_;
    };
    friend class front;

    class const_front : boost::noncopyable
    {
    public:
        const_front( const queue& q )
            : lock_( q.head_mutex_ ),
              next_( q.wait_for_head() )
        {}

        const T& get() const
        { return next_.value_; }

    private:
        lock_type lock_;
        const node& next_;
    };
    friend class const_front;

    // [...]
};
--
Steven E. Harris :: seha...@raytheon.com
Raytheon :: http://www.raytheon.com
> An example of this philosophy is the behavior of fstream. You often
> allocate a resource (a file) when the constructor is invoked.
> However, you don't have to wait for the destructor to be called to
> free said resource; you can call close(). Nevertheless, if you fail
> to call close() yourself, the resource is still properly managed.
This is actually a bad example for RAII, since open/close do a lot
more than acquire/free resources.
In general, I would only speak of RAII when it is a question of pure
acquisition and liberation. Acquisition can fail, but I would argue
that if RAII is being used, failure should be exceptional. (This is
the case of memory on a modern system, or a mutex lock, where the
program pauses until it acquires the lock, for example.) Liberation
cannot fail. Because if liberation takes place in the destructor, and
it fails, you have no way of reporting the error.
In the case of an open file, failure of the open is not really
exceptional. And a close *can* fail; not verifying the error
condition when closing an output file is a serious programming error.
The implicit close in the destructor is thus only a safeguard, to be
used, for example, when the function terminates because of an
exception, but not in the normal case.
--
James Kanze mailto:ka...@gabi-soft.de
Beratung in objektorientierer Datenverarbeitung --
-- Conseils en informatique orientée objet
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany, Tél.: +49 (0)69 19 86 27
If I accept your premise that resources are allocated from a
pool then I think that there are still several ways in which
returning a resource to a pool can fail without the pool
'overflowing'. The pool itself may have become inaccessible for
some reason, for example.
However, I was thinking more of examples like the one James
Kanze gives in his posting
<d6651fb6.01120...@posting.google.com> elsewhere in
this thread: that of an object controlling a file which is
closed on object destruction. If the file close fails (perhaps
because of a network outage that prevents buffers from being
flushed) there is a very real error that we should like to be
able to process (somehow).
Cheers,
Daniel
[nospam.demon.co.uk is a spam-magnet. Replace nospam with
sonadata to reply]
[snip]
> In the case of an open file, failure of the open is not really
> exceptional. And a close *can* fail; not verifying the error
> condition when closing an output file is a serious programming error.
> The implicit close in the destructor is thus only a safeguard, to be
> used, for example, when the function terminates because of an
> exception, but not in the normal case.
>
Well, I defer to your knowledge of course, but I think the programming
error lies in not making files objects that can be reliably destroyed,
i.e. they are incomplete, rather than saying that RDID is an incomplete
idiom.
It might be even more correct to go a little farther and say that your
objection lies in the problem with destructors and exceptions in C++. If
we could reliably signal destruction failure, then this idiom would be
even more powerful. Now it may be important to note that I think C++
destructors are the best "object lifetime management" scheme I can think
of, clearly in my mind superior to any non-deterministic system used by
the dynamic languages (Java, Lisp, etc.). And I think exceptions are
inappropriate for C++ destructors (though the only alternatives I can
think of are rather draconian -- though so is the result of violating
exception specifications). So I'm tainted, but I definitely try to use
destructors as an object lifetime management scheme rather than just a
memory management scheme (some of the dynamic languages do obviate this
by having nifty macros such as with-open-file in Lisp). Do I need to
discontinue this policy, am I mixing the concepts, or should I note the
exceptional case of fstreams, and still be OK in striving to use
constructors and destructors to manage object lifetimes instead of just
memory?
Thanks for any insight
Brad
b...@cox.rr.com
Thank you!
I've been trying to get people to understand the basic principles of
resource management for years, and somehow very few ever seem to get
it. :-(
> I don't see the necessary relation between release() and
> obtain(). I sometimes need to release a resource at
> different points. This requires a release(). Also, could you
> elaborate on the obtain()-is-evil theme? Almost all of my
> classes have an init() method in addition to constructor.
> Why is this evil?
I wrote an article on resource management a while back, which
discusses these issues, and is generally relevant to this thread. If
you have an XML-capable browser, you can find a copy at:
http://www.chrisnewton.btinternet.co.uk/pp/resource.xml
(Sorry, I haven't gotten around to converting the site to plain HTML
yet.)
Hope it helps,
Chris
That's certainly possible. But I think that's a pool error, not
a resource error, so the error handling should be done via the pool
object.
>However, I was thinking more of examples like the one James
>Kanze gives in his posting
><d6651fb6.01120...@posting.google.com> elsewhere in
>this thread: that of an object controlling a file which is
>closed on object destruction. If the file close fails (perhaps
>because of a network outage that prevents buffers from being
>flushed) there is a very real error that we should like to be
>able to process (somehow).
>
>Cheers,
> Daniel
Actually, I've been arguing for quite a while that Core Issue 216 by
Herb Sutter should be ruled Not A Defect. Issue 216 would disallow
throwing destructors, because they could be unsafe. So I'm not against
them, I just think they should be very rare. In the case of files
failing to close, for instance, I'm inclined to say the file object
should remain available ( to try to flush the buffer once the network
is up again ). If the file object is lost, no such opportunities are
available anymore. Therefore the close action should be a member.
Unconditionally dumping the file can be done by a wrapper - but yes,
you can write a wrapper the other way around.
Regards,
--
Michiel Salters
Consultant Technical Software Engineering
CMG Trade, Transport & Industry
Michiel...@cmg.nl
Is std::auto_ptr evil then?
There are situations in multithreaded programming where having a
release() without an obtain() is the only reliable way of avoiding race
conditions.
Sentences that end with "are evil" are evil.
Gerhard Menzl
Well, there are many times when an object has to wait until the data
to initialize it becomes known. Especially in the case of class
members.
> std::vector<handle> ? Of course, when you need to copy handles, you'll
> have to take that into account when writing the handle class. But do
> you really want to copy a handle? What does that mean, logically, in your
> program ?
Depends on the type. For example, auto_ptr is used to transfer
ownership, and does come with initialization and release methods. On
many other classes, re-initialization means to drop the previous
"whatever" and use the new "whatever". For example, I have a class to
lock an output stream (for multi-threaded writes) which switches which
stream is locked when assigned a new one.
> This is a point about why dtors usually shouldn't throw. If the second
> step in a process fails, you remain stuck in the intermediate phase. If
> this is acceptable, you can hardly call it an intermediate phase.
Well, I still don't see how this is related with obtain()/release()
and RAII in general. A real-world example I have been faced with is
the case of a file format that contains blocks with header and footer.
I use RAII to manage these headers and footers automatically. The code
would be a lot less clear and more error prone if I didn't use RAII.
Yet, the destructor can fail.
The design principle is that all exceptions are fatal to the
processing. So the destructor throws if it can (no other exception
active); otherwise it lets the active exception take the processing
down.
Note that in this design, RAII classes that can throw in destructor
manage no resource.
> >However, I was thinking more of examples like the one James Kanze
> >gives in his posting
> ><d6651fb6.01120...@posting.google.com> elsewhere in this
> >thread: that of an object controlling a file which is closed on
> >object destruction. If the file close fails (perhaps because of a
> >network outage that prevents buffers from being flushed) there is a
> >very real error that we should like to be able to process
> >(somehow).
> Actually, I've been arguing for quite a while that Core Issue 216 by
> Herb Sutter should be ruled Not A Defect. Issue 216 would disallow
> throwing destructors, because they could be unsafe.
Well, it's certainly not a defect in the sense of the standard
process, since we knew about the safety issues when the standard was
written, and decided explicitly to allow them. One could argue that
this was a mistake, but if we start banning everything that can be
dangerous, there won't be much left of C++.
> So I'm not against them, I just think they should be very rare. In
> the case of files failing to close, for instance, I'm inclined to
> say the file object should remain available ( to try to flush the
> buffer once the network is up again ).
If you're talking in the sense of the destructor throwing, I'm not
sure I understand this. Given the following code :
    void
    f()
    {
        std::ofstream out( "filename" ) ;
        generateOutput( out ) ;
    }
What are you saying should happen if the close in the destructor
fails?
Personally, I don't think that ofstream can reasonably do much other
than what it does. And that such code is wrong, because of this. But
I'm very open to other suggestions.
More generally, of course, the destructor has to make every effort to
clean up, release resources, etc. That doesn't mean that such measures
(such as close, here) aren't simply fallbacks, or that the programmer
should count on them. In case of an error (say, generateOutput
throws), the user must consider what should happen with the partially
written output file. Typically, just closing it isn't enough. If the
file is some sort of temporary, that because of the error won't be
used, he must also remove it. If it is an existing file, to which he
is making modifications, he may even be required to restore it to its
previous state -- a very non-trivial exercise.
--
James Kanze mailto:ka...@gabi-soft.de
Beratung in objektorientierer Datenverarbeitung --
-- Conseils en informatique orientée objet
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany, Tél.: +49 (0)69 19 86 27
Core Issue 216 has been ruled NAD. Do you possibly mean Core Issue 219?
Absolutely, but these activities may not logically be a part of
releasing the resource you acquired when you opened the file. That's not
to say they can't be; if you have a specialised "temporary file" class,
perhaps it really does always try to delete the file in the d-tor, and
have some sensible error-handling policy if that fails for some reason.
They're certainly not a part of routine clean-up in a generic file
class' d-tor, though.
C++ has try-catch constructs for a reason. I think this sort of issue is
a clear case for their use around the code that can cause the errors,
and has no bearing on the use of RAII/RDID to open and close the file
itself. I apply the same logic to buffered files; if you can't take
losing data if the close can't write the buffer, flush it yourself
before you close the file.
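That advice might look like this in practice (the function name is invented):

```cpp
#include <fstream>

// Flush explicitly and check the stream state while the error can
// still be reported, rather than relying on the close() that the
// destructor performs silently.
bool safe_write(const char* path, const char* data)
{
    std::ofstream out(path);
    out << data;
    out.flush();                // push the buffer out now...
    return out.good();          // ...so a write error surfaces here
}
```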
Cheers,
Chris
> > In case of an error (say, generateOutput throws), the user must
> > consider what should happen with the partially written output
> > file. Typically, just closing it isn't enough. If the file is
> > some sort of temporary, that because of the error won't be used,
> > he must also remove it. If it is an existing file, to which he is
> > making modifications, he may even be required to restore it to its
> > previous state -- a very non-trivial exercise.
> Absolutely, but these activities may not logically be a part of
> releasing the resource you acquired when you opened the file. That's
> not to say they can't be; if you have a specialised "temporary file"
> class, perhaps it really does always try to delete the file in the
> d-tor, and have some sensible error-handling policy if that fails
> for some reason. They're certainly not a part of routine clean-up
> in a generic file class' d-tor, though.
That would be one solution. In the specific case of temporary files,
I suspect that it is *the* solution. More generally, however, I think
that the correct solution will depend on the problem to be solved.
> C++ has try-catch constructs for a reason. I think this sort of
> issue is a clear case for their use around the code that can cause
> the errors, and has no bearing on the use of RAII/RDID to open and
> close the file itself.
Try-catch doesn't do anything that RAII can't do. The problem remains:
you are walking back the stack because of one error. Somewhere in the
cleanup (catch block or destructor, it doesn't matter), you encounter
an error which must be reported. You can't use throw to do it,
because throwing an exception in the destructor will terminate the
program, and throwing it in the catch block will cause the previous
error to be lost.
> I apply the same logic to buffered files; if you can't take losing
> data if the close can't write the buffer, flush it yourself before
> you close the file.
It may be a possibly reasonable decision to separate the final flush
from the close, but that's not the way C++ (or the C library which
preceded it) works. Close is a complex function which has several
roles.
My own non-standard solution would be to have two close operations: one
which commits, and one which rolls back. The destructor would call the
rollback version, so if you hadn't already committed, the file system is
restored to its previous state. On most OS's, however, this is going
to be an expensive proposition.
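One way to sketch that idea, with all names invented here and std::rename standing in for a real commit (which, as noted, is expensive or impossible to do properly on many systems):

```cpp
#include <cstdio>
#include <fstream>
#include <string>

// Sketch of a two-close design: commit() is the committing close and
// reports errors to the caller; the destructor is the rolling-back
// close, discarding the uncommitted temporary file.
class TxFile
{
public:
    explicit TxFile(const std::string& path)
        : path_(path), tmp_(path + ".tmp"),
          out_(tmp_.c_str()), committed_(false) {}

    std::ostream& stream() { return out_; }

    bool commit()
    {
        out_.close();
        if (out_.fail())
            return false;       // the close error is reported, not lost
        committed_ = std::rename(tmp_.c_str(), path_.c_str()) == 0;
        return committed_;
    }

    ~TxFile()                   // rollback: discard partial output
    {
        if (!committed_) {
            out_.close();
            std::remove(tmp_.c_str());
        }
    }

private:
    std::string path_;
    std::string tmp_;
    std::ofstream out_;
    bool committed_;
};
```

If commit() is never called (say, because generateOutput threw), the destructor removes the temporary and the original file system state is preserved.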
--
James Kanze mailto:ka...@gabi-soft.de
Beratung in objektorientierer Datenverarbeitung --
-- Conseils en informatique orientée objet
Ziegelhüttenweg 17a, 60598 Frankfurt, Germany, Tél.: +49 (0)69 19 86 27
I use file objects this way, and it works quite well for me. The only
drawback is that the object itself must live on the free store.
> I would say rather than RAII provides exception-safety and simplifies
> the usual case where acquisition and disposal of a resource coincide
> with the lifetime of an object; however, that doesn't mean a class
> shouldn't also provide explicit open/close (or similar) methods if they
> are sometimes useful.
And users can still easily get open/close-like functionality by using
a smart pointer, which provides a generic way to increase control over
resource lifetimes, and still keeps the classes themselves simple.
Everybody knows (maintaining) invariants sucks :-)
--
Arnold Hendriks <a.hen...@b-lex.com>
B-Lex Information Technologies, http://www.b-lex.com/