
thread interruption points


Christopher Pisz

Feb 19, 2015, 7:55:27 PM

First, I just wanted to take a second and say that interruption points
on threads are wonderful...so far

I am using boost, because my employer is still requiring I support XP,
and thus msvc2010, which doesn't seem to have C++11 threads, but as I
understand it they are almost identical.

If I interrupt a worker thread from my main thread, it throws a
thread_interrupted exception from any "interruption points". For my
purposes, they will be sleeps.

So, my worker thread must wrap all of its work in a try/catch and can,
for example's sake, log that it was interrupted and exit in the catch.

This makes for a great way to shut everything down.

What I am worried about is, there are sleeps all over the place in the
worker thread, in many many classes. Am I going to have to go through
each class (I didn't write) and make sure they do something sane in
their destructors as the exception makes its way up the call stack? Or
alternatively insert a lot of catch statements and try to handle things
there before rethrowing?

What are other interruption points that I should look at?

-----
I have chosen to troll filter/ignore all subthreads containing the
words: "Rick C. Hodgins", "Flibble"
So, I won't be able to see or respond to any such messages
-----

Robert Wessel

Feb 20, 2015, 12:00:03 AM
On Thu, 19 Feb 2015 18:54:59 -0600, Christopher Pisz
<nos...@notanaddress.com> wrote:

>
>First, I just wanted to take a second and say that interruption points
>on threads are wonderful...so far
>
>I am using boost, because my employer is still requiring I support XP,
>and this msvc2010, which doesn't seem to have C++11 threads, but as I
>understand it they are almost identical.
>
>If I interrupt a worker thread from my main thread, it throws a
>thread_interrupted exception from any "interruption points". For my
>purposes, they will be sleeps.
>
>So, my worker thread must wrap all of its work in a try catch and can
>for example's sake, log that it was interrupted, and exit in the catch.
>
>This makes for a great way to shut everything down.
>
>What I am worried about is, there are sleeps all over the place in the
>worker thread, in many many classes. Am I going to have to go through
>each class (I didn't write) and make sure they do something sane in
>their destructors as the exception makes its way up the call stack? or
>alternatively insert a lot of catch statements and try to handle things
>there before rethrowing?


Pretty much yes.

But as far as I'm concerned, that's the wrong way to deal with things
like shutdown. Interruptions are too low level to be useful. Just
because you're in a wait doesn't mean you're in a convenient place to
shut the thread down. Sending a message of some sort to a thread to
shut down, which the thread checks for at reasonable points, makes
much more sense. An event semaphore is often useful. On those waits
where you want to allow shutdown, you can wait for the shutdown
semaphore in addition to the other items. And where it's appropriate,
you can poll it with a timed wait (with a zero timeout).

Luca Risolia

Feb 20, 2015, 6:02:50 AM
On 20/02/2015 01:54, Christopher Pisz wrote:
> I am using boost, because my employer is still requiring I support XP,
> and this msvc2010, which doesn't seem to have C++11 threads, but as I
> understand it they are almost identical.

No, boost and the current C++ standard are not identical with respect to
interruptible threads, as there is no concept of "thread interruption"
in the standard (of course, this does not mean you cannot implement it
with the tools present in the standard library).

> Am I going to have to go through
> each class (I didn't write) and make sure they do something sane in
> their destructors as the exception makes its way up the call stack?

Your question has probably nothing to do with interruptible threads in
particular. Destructors should never throw (except in one or two rare
cases where it is known to be safe), if this is what you meant.

Chris Vine

Feb 20, 2015, 8:18:44 AM
I don't agree with that. I find thread interruption/cancellation
useful for some cases, provided you choose the interruption/
cancellation point by blocking interruption/cancellation by default. If
you are using POSIX or boost threads, there is little point in
reinventing the wheel with your own event semaphores to achieve the
same end. (And busy waiting on a zero timeout is also highly
undesirable.) When using POSIX threads then, subject to blocking,
recent versions of the BSDs, linux since kernel 2.6 and all modern
commercial unixes will unwind the stack on cancellation via a
pseudo-exception, which is of course what you want. (Boost, which
implements interruption at the library level, will also unwind the
stack.)

The key is (i) always use deferred cancellation, and (ii) block
cancellation/interruption by default so you control in _your_ code the
points where cancellation/interruption is active. This avoids the
problems alluded to by the OP. Since windows threads don't offer that
by themselves, with windows native threads you have to resort to your
kind of approach if you are not using boost. That may be where your
background comes from.

The main problem with boost thread interruption is the paucity of
interruption points. That is inevitable with a library level solution
such as boost, because at the library level you cannot easily interrupt
blocking system calls. The same applies to your technique. POSIX on
the other hand makes most blocking system calls also thread
cancellation points. This makes thread interruption/cancellation
with C++ only a really useful technique with POSIX/unix-like
platforms. This is also why C++11/14 does not provide for thread
interruption, and probably never will.

Chris

Scott Lurndal

Feb 20, 2015, 9:41:40 AM
Something like this, perhaps (d_exit is marked volatile):

/**
 * Execute I/O Control Blocks queued to this DLP instance.
 *
 * This is the main loop of the per-DLP instance thread.
 *
 * Loop until the d_exit flag is set to true.
 * Set d_exited flag when the thread terminates to notify
 * terminator that the thread terminated successfully.
 */
void
c_dlp::run(void)
{
    int diag;
    c_dlist pending;
    char tname[64];

    c_processor::set_this(d_processor);
    c_system::set_this(mp->get_system());
    c_memory::set_this(mp->get_system()->get_mem());

    snprintf(tname, sizeof(tname), "%04lu %s DLP",
             d_channel, get_hardware_name());
    set_threadname(tname);

    pending.init();

    lock_thread();
    while (!d_exit) {
        while (pending.is_empty()) {
            c_dlist_iterator di(&d_iocbs);
            while (di.next()) {
                c_iocb *iocb = (c_iocb *)di.curr();

                iocb->remove();
                pending.insert(iocb);
            }
            if (!pending.is_empty()) break;
            diag = pthread_cond_wait(&d_wait, &t_threadlock);
            if (diag != 0) {
                d_logger->log("%04lu/00 Unexpected cond-wait result: %s\n",
                              d_channel, strerror(diag));
            }
            if (d_exit) break;
        }

        if (d_exit) break;

        unlock_thread();
        c_dlist_iterator worklist(&pending);
        while (worklist.next()) {
            c_iocb *iocb = (c_iocb *)worklist.curr();
            iocb->remove();

            switch (iocb->get_op()) {
            case IOD_CANCEL:
                cancel(iocb);
                break;
            case IOD_READ:
                read(iocb);
                break;
            case IOD_WRITE:
                write(iocb);
                break;
            case IOD_TEST:
                if (iocb->get_op_var1() == IOD_TEST_IDENTIFY) {
                    set_rd(iocb, IOT_COMPLETE|IOT_EXT_RD_PRESENT,
                           0x0000, d_testid << 8);
                } else {
                    test(iocb);
                }
                break;
            case IOD_ECHO:
                echo(iocb);
                break;
            }
        }
        lock_thread();
        pending.init();
    }
    d_exited = true;

    unlock_thread();
}

/**
 * Destructor for a Data Link Processor. Tell the device thread
 * to exit and wait for it to terminate.
 */
c_dlp::~c_dlp(void)
{
    lock_thread();

    d_exit = true;
    pthread_cond_signal(&d_wait);

    unlock_thread();

    if (!in_context()) {
        (void) join();
    }
}

/**
 * Queue an IOCB to a DLP. Used by the SPIO instruction to deliver
 * an IOCB to a DLP.
 *
 * Called from a processor thread.
 *
 * @param iocb The IOCB to deliver.
 */
void
c_dlp::queue(c_iocb *iocb)
{
    if (mp->get_system()->is_iotrace(iocb->get_channel())) {
        iocb->dump(d_logger, "Queue ", false);
    }

    lock_thread();
    d_iocbs.insert(iocb);
    pthread_cond_signal(&d_wait);
    unlock_thread();
    if (mp->get_host_cores() == 1) {
        sched_yield();          // Let the DLP thread run
    }
}

Paavo Helde

Feb 20, 2015, 1:52:57 PM
Christopher Pisz <nos...@notanaddress.com> wrote in
news:mc60js$ksf$1@dont-email.me:
>
> Am I going to have to go through
> each class (I didn't write) and make sure they do something sane in
> their destructors as the exception makes its way up the call stack?

Pretty much any C++ code piece can throw at least std::bad_alloc, so all
code should be exception safe anyway. If you have so far not cleaned up
your code base to have proper RAII and exception safety, then it's high
time to do it.

Cheers
Paavo

Marcel Mueller

Feb 20, 2015, 4:46:49 PM
On 20.02.15 19.52, Paavo Helde wrote:
> Pretty much any C++ code piece can throw at least std::bad_alloc, so all
> code should be exception safe anyway.

Well, pretty much any existing C++ application will never recover from
an allocation error in a reasonable way, simply because almost
everything requires some successful memory allocation, even if only
the allocation of the std::string with the error message.

> If you have so far not cleaned up
> your code base to have proper RAII and exception safety, then it's last
> time to do it.

This might be good advice anyway.


Marcel

Luca Risolia

Feb 20, 2015, 5:44:37 PM
Il 20/02/2015 22:46, Marcel Mueller ha scritto:
> On 20.02.15 19.52, Paavo Helde wrote:
> Well, pretty much any existing C++ application will never recover from
> an allocation error in a reasonably way.

C++ provides you with a reasonable way.

> Simply because almost
> everything requires some successful memory allocations.

This is one of the purposes of the new-handler function. See
std::set_new_handler.

Christopher Pisz

Feb 20, 2015, 6:58:59 PM
Can you explain what you mean by "blocking interruption/cancellation by
default." I don't understand.

I do understand the unwinding of the stack.


> The key is (i) always used deferred cancellation, and (ii) block
> cancellation/interruption by default so you control in _your_ code the
> points where cancellation/interruption is active. This avoids the
> problems alluded to by the OP. Since windows threads don't offer that
> by themselves, with windows native threads you have to resort to your
> kind of approach if you are not using boost. That may be where your
> background comes from.

I also don't understand "deferred cancellation." Can you explain or post
a link?

> The main problem with boost thread interruption is the paucity of
> interruption points. That is inevitable with a library level solution
> such as boost, because at the library level you cannot easily interrupt
> blocking system calls. The same applies to your technique. POSIX on
> the other hand makes most blocking system calls also thread
> cancellation points. This makes thread interruption/cancellation
> with C++ only a really useful technique with POSIX/unix-like
> platforms. This is also why C++11/14 does not provide for thread
> interruption, and probably never will.
>
> Chris
>

I also don't understand "paucity of interruption points."
I might be missing out on some good information here.



--
I have chosen to troll filter/ignore all subthreads containing the
words: "Rick C. Hodgins", "Flibble", and "Islam"

Robert Wessel

Feb 21, 2015, 2:31:02 AM
On Fri, 20 Feb 2015 17:58:47 -0600, Christopher Pisz
He just means spending most of the time in the thread with a
boost::this_thread::disable_interruption instance in scope, and only
enabling interruptions at specific points where you're willing to
handle one.


>I do understand the unwinding of the stack.
>
>
>> The key is (i) always used deferred cancellation, and (ii) block
>> cancellation/interruption by default so you control in _your_ code the
>> points where cancellation/interruption is active. This avoids the
>> problems alluded to by the OP. Since windows threads don't offer that
>> by themselves, with windows native threads you have to resort to your
>> kind of approach if you are not using boost. That may be where your
>> background comes from.
>
>I also don't understand "deferred cancellation." Can you explain or post
>a link?


Again, just having interruptions disabled most of the time. Then
it'll be held and will fire when the code in the thread enables them.

Which is roughly the same as what I said.


>> The main problem with boost thread interruption is the paucity of
>> interruption points. That is inevitable with a library level solution
>> such as boost, because at the library level you cannot easily interrupt
>> blocking system calls. The same applies to your technique. POSIX on
>> the other hand makes most blocking system calls also thread
>> cancellation points. This makes thread interruption/cancellation
>> with C++ only a really useful technique with POSIX/unix-like
>> platforms. This is also why C++11/14 does not provide for thread
>> interruption, and probably never will.
>>
>> Chris
>>
>
>I also don't understand "paucity of interruption points."
>I might be missing out on some good information here.


There are a rather limited number of points where boost::thread will
allow an interruption. For example, if you issue a fread(), that will
not be interrupted, since you're down in the bowels of the OS. The
boost::thread doc lists them:

http://www.boost.org/doc/libs/1_54_0/doc/html/thread/thread_management.html

Öö Tiib

Feb 21, 2015, 11:50:43 AM
On Friday, 20 February 2015 23:46:49 UTC+2, Marcel Mueller wrote:
> On 20.02.15 19.52, Paavo Helde wrote:
> > Pretty much any C++ code piece can throw at least std::bad_alloc, so all
> > code should be exception safe anyway.
>
> Well, pretty much any existing C++ application will never recover from
> an allocation error in a reasonably way.

What you are saying is that the majority of applications have never been
stress-tested. Pretty much all C++ (and C) applications that have been
stress-tested by a semi-proficient quality engineer will recover
reasonably. Difficulties arise only when the underlying platform is
configured to behave in a brain-damaged way, as with that "Linux OOM
Killer".

> Simply because almost everything requires some successful memory
> allocations. Even if only for allocation of the std::string with
> the error message.

So in the worst case the application is in a situation where it can
do nothing from that "almost everything". That still leaves plenty of
room to behave reasonably.

Chris Vine

Feb 21, 2015, 12:25:29 PM
On Fri, 20 Feb 2015 17:58:47 -0600
Christopher Pisz <nos...@notanaddress.com> wrote:
[snip]
> I also don't understand "deferred cancellation." Can you explain or
> post a link?

Robert Wessel has answered most of your questions. My reference to
deferred cancellation was in relation to POSIX threads, because boost
thread interruption is always deferred. In addition, both boost and
POSIX threads have the concept of interruption/cancellation being at
any time either enabled or disabled (blocked) for a particular thread.

With POSIX threads, cancellation can either be asynchronous or deferred.
With asynchronous cancellation, the thread will be cancelled
approximately immediately (for whatever the implementation defines as
"immediately") and should never be used. It is of historic interest
only. Deferred cancellation only takes effect when the thread of
execution to be cancelled meets a cancellation point (aka in boost an
interruption point), and is the default with pthreads. Either form of
cancellation can be blocked by the thread concerned setting its
cancellation state to disabled, which is equivalent in boost to having a
boost::this_thread::disable_interruption object in scope. A blocked
interruption/cancellation event is not lost, unless one is already
pending. It is stored until the moment (if at all) when cancellation is
enabled and (for deferred cancellation) a cancellation point is
subsequently reached.

The combination of deferred cancellation and blocking enables a thread
to control the point in the code at which it is willing to accept a
cancellation event and die, and to do so in a controlled way.

Native windows threads do not have deferred cancellation (or at least,
didn't when I last looked at them), so while in windows cancellation is
available it is unusable in reliable code (that is, it can only be
used at program shutdown).

> I also don't understand "paucity of interruption points."
> I might be missing out on some good information here.

The only boost interruption points are in effect waiting on a
condition variable, waiting on a join, waiting on a sleep or testing
for a pending interruption event with
boost::this_thread::interruption_point(). In particular, a boost
interruption event will not cause blocking system calls to unblock.
This is the more-or-less inevitable result of providing a portable
library solution, as opposed to a system level solution.

Chris

Melzzzzz

Feb 21, 2015, 12:34:17 PM
On Sat, 21 Feb 2015 08:50:29 -0800 (PST)
Öö Tiib <oot...@hot.ee> wrote:

> On Friday, 20 February 2015 23:46:49 UTC+2, Marcel Mueller wrote:
> > On 20.02.15 19.52, Paavo Helde wrote:
> > > Pretty much any C++ code piece can throw at least std::bad_alloc,
> > > so all code should be exception safe anyway.
> >
> > Well, pretty much any existing C++ application will never recover
> > from an allocation error in a reasonably way.
>
> What you are saying is that majority of applications have never been
> stress-tested. Pretty much all C++ (and C) applications that have been
> stress-tested by semi-proficient quality engineer will recover
> reasonably. Difficulties might be only when underlying platform is
> configured to behave in brain-damaged way like with that "Linux OOM
> Killer".

The problem is that memory allocators (especially GCs) tend to reserve
huge amounts of RAM (not to mention forks), hence the overcommit and the
OOM killer....

Marcel Mueller

Feb 21, 2015, 2:20:28 PM
On 21.02.15 17.50, Öö Tiib wrote:
>> Well, pretty much any existing C++ application will never recover from
>> an allocation error in a reasonably way.
>
> What you are saying is that majority of applications have never been
> stress-tested. Pretty much all C++ (and C) applications that have been
> stress-tested by semi-proficient quality engineer will recover
> reasonably.

I have never seen C++ code that is really safe with respect to memory
exceptions. Sometimes it might work, but as soon as there is even a
single allocation during stack unwinding it is likely to run into the
same problem again. The logging functionality usually already breaks
this rule.
And as soon as a function with throw() that does not handle the
exception at all places is in the call stack, std::abort will terminate
the application. From the application's point of view this is undefined
behavior; from the language point of view, of course, it is not.

> Difficulties might be only when underlying platform is
> configured to behave in brain-damaged way like with that "Linux OOM
> Killer".

:-)

>> Simply because almost everything requires some successful memory
>> allocations. Even if only for allocation of the std::string with
>> the error message.
>
> So the application can be on worst case in situation where it can
> do nothing from that "almost everything". That appears to be plenty
> for to behave reasonably.

But how to continue in a useful way? When the expected functionality is
unsustainable without some free memory the application likely will no
longer work. Unless the stack unwinding happens to free a reasonable
amount of memory, everything else is just luck.

The C++ language is not well designed to provide guaranteed behavior
with respect to memory demand. It is simply not known how much memory is
required to operate properly. Software producers test the application
and add some safety factor for the system requirements.

There have been languages that do not suffer from this. I remember
OCCAM, which computes the required stack usage of any thread at compile
time. But many algorithms cannot be used in such an environment.


Marcel

Öö Tiib

Feb 21, 2015, 3:16:45 PM
On Saturday, 21 February 2015 21:20:28 UTC+2, Marcel Mueller wrote:
> On 21.02.15 17.50, Öö Tiib wrote:
> >> Well, pretty much any existing C++ application will never recover from
> >> an allocation error in a reasonably way.
> >
> > What you are saying is that majority of applications have never been
> > stress-tested. Pretty much all C++ (and C) applications that have been
> > stress-tested by semi-proficient quality engineer will recover
> > reasonably.
>
> I have never seen C++ code that is really safe with respect to memory
> exceptions. Sometimes it might work, but as soon as there is even a
> single allocation during stack unwinding it is likely to run into the
> same problem again. The logging functionality usually already breaks
> with this rule.

Perhaps we use different idioms. In most of the code that I have worked
with, the majority of destructors are empty, and anything the non-empty
ones do may throw nothing. Fairly simple loggers can be written to be
totally bulletproof and throw nothing (not even std::bad_alloc).

> And as soon as a function with throw() is in the call stack that does
> not handle the exception at all places, std::abort will terminate the
> application. From the applications point of view this is undefined
> behavior. From the language point of view of course not.

There are some static analysis tools that check for possibility of
uncaught exceptions. Whole code base analysis is often slow so it is
better to run such things on separate computers.

> > Difficulties might be only when underlying platform is
> > configured to behave in brain-damaged way like with that "Linux OOM
> > Killer".
>
> :-)
>
> >> Simply because almost everything requires some successful memory
> >> allocations. Even if only for allocation of the std::string with
> >> the error message.
> >
> > So the application can be on worst case in situation where it can
> > do nothing from that "almost everything". That appears to be plenty
> > for to behave reasonably.
>
> But how to continue in a useful way? When the expected functionality is
> unsustainable without some free memory the application likely will no
> longer work. Unless the stack unwinding happens to free a reasonable
> amount of memory, everything else is just luck.

It depends on whether there is anything useful the application can do
within the given limits. For example, if there is not enough memory for
it to read its configuration at start-up, it can still report that as
the reason for failing to start.

> The C++ language is not well designed to provide guaranteed behavior
> with respect to memory demand. It is simply not known how much memory is
> required to operate properly. Software producers test the application
> and add some safety factor for the system requirements.
>
> There have been languages that do suffer from this. I remember OCCAM
> that computes the required stack usage of any thread at compile time.
> But many algorithms cannot be used in such an environment.

There can be some experimental narrow-purpose language that can
mathematically prove its maximum resource consumption. A general-purpose
programming language cannot be designed to provide that in all cases.
Finding out the limits, and how the application handles the problems, is
done by stress-testing.

Paavo Helde

Feb 21, 2015, 4:41:29 PM
Marcel Mueller <news.5...@spamgourmet.org> wrote in
news:mcalpi$dud$1@gwaiyur.mb-net.net:

> On 21.02.15 17.50, Öö Tiib wrote:
>>> Well, pretty much any existing C++ application will never recover from
>>> an allocation error in a reasonably way.
>>
>> What you are saying is that majority of applications have never been
>> stress-tested. Pretty much all C++ (and C) applications that have been
>> stress-tested by semi-proficient quality engineer will recover
>> reasonably.
>
> I have never seen C++ code that is really safe with respect to memory
> exceptions. Sometimes it might work, but as soon as there is even a
> single allocation during stack unwinding it is likely to run into the
> same problem again. The logging functionality usually already breaks
> with this rule.

Running out of memory is pretty fatal, but it's more complicated than
that. Some large image/data processing programs may often allocate memory
blocks of hundreds of MB or even gigabytes. Statistically such
allocations have a greater chance to trigger out-of-memory conditions
than small allocations. If such a large allocation fails, then there is
still plenty of room to allocate any error and logger messages, no
problem here.

Rather, the problems appear because there is usually some swap/pagefile
configured. The OS typically pushes other programs and parts of the same
program to the swap and the system starts heavily thrashing. Typically
the programs are not well tested when running in swap and start failing
in a random fashion. Even if they are not failing, the computer is
pretty much unusable anyway. Depending on the OS and running programs, a
computer restart might be the best option to come out of the thrashing
mode. I am starting to think that turning the pagefile off completely
might be the best approach.

> And as soon as a function with throw() is in the call stack that does
> not handle the exception at all places, std::abort will terminate the
> application. From the applications point of view this is undefined
> behavior. From the language point of view of course not.

That's why throw specifications are a deprecated antifeature. And
functions with an empty throw clause should not call anything
non-trivial; if they do, there is a large problem between the keyboard
and chair.

> But how to continue in a useful way? When the expected functionality is
> unsustainable without some free memory the application likely will no
> longer work. Unless the stack unwinding happens to free a reasonable
> amount of memory, everything else is just luck.

The memory is a global resource; if it is exhausted because some other
program tried to eat it all, then I think a good approach is to wait
silently in the OOM handler until the situation gets better, then retry
the allocation. Alternatively, if the program itself is the memory hog,
then it can probably release a lot of it by stack unwinding (in the
correct stack!), then report a failure.

>
> The C++ language is not well designed to provide guaranteed behavior
> with respect to memory demand. It is simply not known how much memory is
> required to operate properly. Software producers test the application
> and add some safety factor for the system requirements.
>
> There have been languages that do suffer from this. I remember OCCAM
> that computes the required stack usage of any thread at compile time.
> But many algorithms cannot be used in such an environment.

Dynamic memory allocation can be handled relatively well in C++. Stack
overflow is a different beast altogether, there are no standard
mechanisms for dealing with that and most program(mer)s just ignore the
problem and hope they get lucky.

Cheers
Paavo

Ian Collins

Feb 21, 2015, 11:40:24 PM
Not all operating systems are foolhardy enough to allow memory overcommit.

--
Ian Collins

Marcel Mueller

Feb 22, 2015, 4:00:10 AM
On 21.02.15 22.41, Paavo Helde wrote:
> Running out of memory is pretty fatal, but it's more complicated than
> that. Some large image/data processing programs may often allocate memory
> blocks of hundreds of MB or even gigabytes. Statistically such
> allocations have a greater chance to trigger out-of-memory conditions
> than small allocations. If such a large allocation fails, then there is
> still plenty of room to allocate any error and logger messages, no
> problem here.
>
> Rather, the problems appear because there is usually some swap/pagefile
> configured. The OS typically pushes other programs and parts of the same
> program to the swap and the system starts heavily trashing. Typically the
> programs are not well tested when running in swap and start failing in a
> random fashion. Even if they are not failing, the computer is pretty much
> unusable anyway. Depending on the OS and running programs, a computer
> restart might be the best option to come out of the trashing mode. I am

You are right. I can confirm this. Win7 discards the disk cache on
suspend to disk, which is comparable to swapping after resume. About 1GB
of data is read with heavy disk activity in the first few minutes after
resume, probably in 4k blocks due to page faults. In this time the
system is almost unresponsive and random faults occur from time to time,
e.g. drivers that no longer recognize their devices, or program windows
that can no longer rearrange their Z order (this can happen to any
window, including simple explorer windows). Of course, nothing bad
happens as long as you have only a few application windows open at
suspend and the cache is quite small. So the concept is well designed to
survive a feature presentation, no more, no less. (What the hell came
over them when they decided to discard the cache?)

I once ran into a similar problem on a Linux VM server too. I started
one VM too many and memory got very low. It was impossible to get a
shell to suspend one of the VMs in a reasonable amount of time, so I
decided to prefer a hard reset.


> starting to think that turning the pagefile off completely might be the
> best approach.

Unfortunately you have to be careful here. Depending on the OS this
might have unwanted side effects. Some OS refuse overcommitment of
memory when there is absolutely no swap. Maybe some reliable operating
mode intended for cash terminals or something like that. This will
likely throw out of memory exceptions very soon when using ordinary
desktop applications.
Other OS simply ignore your configuration and create a temporary swap
file on the system volume in this case.


> That's why throw specifications is a deprecated antifeature.

Is it?

> And
> functions with an empty throw clause should not call anything non-
> trivial, if they do there is a large problem between the keyboard and
> chair.

Well that's the old discussion whether to have checked exceptions or
not. Unfortunately when using generic functors or lambdas you have
almost no choice. You cannot reasonably use checked exceptions with
them, as it would require the throws declaration to be a type argument.
(Is this allowed at all?)


> The memory is a global resource; if it is exhausted because some other
> program tried to eat it all, then I think a good approach is to wait
> silently in the OOM handler until the situation gets better, then retry
> the allocation. Alternatively, if the program itself is the memory hog,
> then it can probably release a lot of it by stack unwinding (in the
> correct stack!), then report a failure.

I think it always depends on the individual case. And the basic question
is simply who pays to cover all these cases. Probably no one.


> Dynamic memory allocation can be handled relatively well in C++.

In the language: yes. In C++ libraries, well, it depends.

> Stack
> overflow is a different beast altogether, there are no standard
> mechanisms for dealing with that and most program(mer)s just ignore the
> problem and hope they get lucky.

Indeed.


Marcel

Paavo Helde

Feb 22, 2015, 5:17:16 AM
Marcel Mueller <news.5...@spamgourmet.org> wrote in
news:mcc5qf$3jg$1...@gwaiyur.mb-net.net:

> On 21.02.15 22.41, Paavo Helde wrote:
>
>> That's why throw specifications is a deprecated antifeature.
>
> Is it?

Yes. See Annex D (normative) - Compatibility features:

"D.4 Dynamic exception specifications [depr.except.spec]
The use of dynamic-exception-specifications is deprecated."

Instead, one should use the new C++11 'noexcept' specification.

For motivations, see:
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3051.html

Cheers
Paavo


