
[patch] CFS scheduler, -v19


Ingo Molnar

Jul 6, 2007, 1:40:13 PM

i'm pleased to announce release -v19 of the CFS scheduler patchset.

The rolled-up CFS patch against today's -git kernel, v2.6.22-rc7,
v2.6.22-rc6-mm1, v2.6.21.5 or v2.6.20.14 can be downloaded from the
usual place:

http://people.redhat.com/mingo/cfs-scheduler/

The biggest user-visible change in -v19 is reworked sleeper fairness:
it's similar in behavior to -v18 but works more consistently across nice
levels. Fork-happy workloads (like kernel builds) should behave better
as well. There are also a handful of speedups: unsigned math, 32-bit
speedups, O(1) task pickup, debloating and other micro-optimizations.

Changes since -v18:

- merged the group-scheduling CFS-core changes from Srivatsa Vaddagiri.
This makes up for the bulk of the changes in -v19 but has no
behavioral impact. The final group-fairness enabler patch is now a
small and lean add-on patch to CFS.

- fix the bloat noticed by Andrew. On 32-bit it's now this:

   text    data     bss     dec     hex filename
  24362    3905      24   28291    6e83 sched.o-rc7
  33015    2538      20   35573    8af5 sched.o-v18
  25805    2426      20   28251    6e5b sched.o-v19

so it's a net win compared to vanilla. On 64-bit it's even better:

   text    data     bss     dec     hex filename
  35732   40314    2168   78214   13186 sched.o.x64-rc7
  41397   37642    2168   81207   13d37 sched.o.x64-v18
  36132   37410    2168   75710   127be sched.o.x64-v19

( and there's also a +1.5K data win per CPU on 32-bit, which is not
shown here. [+3.0K data win per CPU on 64-bit.] )

- good number of core code updates, cleanups and streamlining.
(Mike Galbraith, Srivatsa Vaddagiri, Dmitry Adamushko, me.)

- use unsigned data types almost everywhere in CFS. This produces
faster and smaller code, and simplifies the logic.

- turn as many 'u64' data types into 'unsigned long' as possible, to
reduce the 32-bit footprint and to reduce 64-bit arithmetics.

- replaced the nr_running based 'sleep fairness' logic with a more
robust concept. The end-result is similar in behavior to v18, but
negative nice levels are handled much better in this scheme.

- speedup: O(1) task pickup by Srivatsa Vaddagiri. [sleep/wakeup is
O(log2(nr_running)).] This gives 5-10% better hackbench 100/500
results on a 4-way box.

- fix: set idle->sched_class back to &idle_sched_class in
migration_call(). (Dmitry Adamushko)

- cleanup: use an enum for the sched_feature flags. (suggested by
Andrew Morton)

- cleanup: turn the priority macros into inlines. (suggested by
Andrew Morton)

- (other cleanups suggested by Andrew Morton)

- debug: split out the debugging data into CONFIG_SCHED_DEBUG.

As usual, any sort of feedback, bugreport, fix and suggestion is more
than welcome!

Ingo
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majo...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/

Willy Tarreau

Jul 8, 2007, 1:50:12 PM
Hi Ingo!

On Fri, Jul 06, 2007 at 07:33:19PM +0200, Ingo Molnar wrote:
>
> i'm pleased to announce release -v19 of the CFS scheduler patchset.
>
> The rolled-up CFS patch against today's -git kernel, v2.6.22-rc7,
> v2.6.22-rc6-mm1, v2.6.21.5 or v2.6.20.14 can be downloaded from the
> usual place:
>
> http://people.redhat.com/mingo/cfs-scheduler/
>
> The biggest user-visible change in -v19 is reworked sleeper fairness:
> it's similar in behavior to -v18 but works more consistently across nice
> levels. Fork-happy workloads (like kernel builds) should behave better
> as well. There are also a handful of speedups: unsigned math, 32-bit
> speedups, O(1) task pickup, debloating and other micro-optimizations.

Interestingly, I also noticed the possibility of O(1) task pickup when
playing with v18, but did not detect any noticeable improvement with it.
Of course, it depends on the workload and I probably didn't perform the
most relevant tests.

V19 works very well here on 2.6.20.14. I could start 32k busy loops at
nice +19 (I exhausted the 32k pids limit), and could still perform normal
operations. I noticed that 'vmstat' scans all the pid entries under /proc,
which takes ages to collect data before displaying a line. Obviously, the
system sometimes shows some short latencies, but not much more than what
you get from an SSH session through a remote DSL connection.

Here's a vmstat 1 output:

r b w swpd free buff cache si so bi bo in cs us sy id
32437 0 0 0 809724 488 6196 0 0 1 0 135 0 24 72 4
32436 0 0 0 811336 488 6196 0 0 0 0 717 0 78 22 0
32436 0 0 0 812956 488 6196 0 0 0 0 727 0 79 21 0
32436 0 0 0 810164 400 6196 0 0 0 0 707 0 80 20 0
32436 0 0 0 815116 400 6436 0 0 0 0 769 0 77 23 0
32436 0 0 0 811644 400 6436 0 0 0 0 715 0 80 20 0

Amusingly, I started mpg123 during this test and it skipped quite a bit.
After setting all tasks to SCHED_IDLE, it did not skip anymore. All this
seems to behave like one could expect.

I also started 30k processes distributed in 130 groups of 234 chained by pipes
in which one byte is passed. I get an average of 8000 in the run queue. The
context switch rate is very low, and sometimes even zero, in this test; maybe
some of them are starving, I really do not know:

r b w swpd free buff cache si so bi bo in cs us sy id
7752 0 1 0 656892 244 4196 0 0 0 0 725 0 16 84 0
7781 0 1 0 656768 244 4196 0 0 0 0 724 0 16 84 0
8000 0 1 0 656916 244 4196 0 0 0 0 2631 0 9 91 0
8025 0 1 0 656916 244 4196 0 0 0 0 2023 0 11 89 0
8083 0 1 0 659124 244 4196 0 0 0 0 3379 0 10 90 0
8054 0 1 0 658752 244 4196 0 0 0 0 726 0 17 83 0

With fewer of them, I can achieve very high context switch rates, well above
what I got with v18 (around 515k):

r b w swpd free buff cache si so bi bo in cs us sy id
48 0 0 0 1977580 5196 49900 0 0 0 28 261 579893 23 77 0
44 0 0 0 1977580 5196 49900 0 0 0 0 253 578454 17 83 0
48 0 0 0 1977580 5196 49900 0 0 0 0 254 574118 22 78 0
41 0 0 0 1977580 5196 49900 0 0 0 0 253 579261 23 77 0

(this is still on my dual athlon 1.5 GHz).

In my tree, I have replaced the rbtree with the ebtree we talked about,
but it did not bring any performance boost because, even though insert()
and delete() are faster, the scheduler is already quite good at avoiding
them as much as possible, mostly relying on rb_next() which has the same
cost in both trees. All in all, the only variations I noticed were caused
by cacheline alignment when I tried to reorder fields in the eb_node. So
I will stop my experiments here since I don't see any more room for
improvement.

> - debug: split out the debugging data into CONFIG_SCHED_DEBUG.

Hmmm that must be why I do not have /proc/sched_debug anymore ;-)

> As usual, any sort of feedback, bugreport, fix and suggestion is more
> than welcome!

Done !

Cheers,
Willy

Al Boldi

Jul 8, 2007, 5:00:14 PM
Ingo Molnar wrote:
> The biggest user-visible change in -v19 is reworked sleeper fairness:
> it's similar in behavior to -v18 but works more consistently across nice
> levels. Fork-happy workloads (like kernel builds) should behave better
> as well. There are also a handful of speedups: unsigned math, 32-bit
> speedups, O(1) task pickup, debloating and other micro-optimizations.

Thanks, these changes show an obvious improvement with heavy forklifters.
But there is one strange artifact; fork response is not consistent over
successive runs, i.e. sometimes it starves, sometimes it moves, and
sometimes it flies. It looks like it's dependent on some sched-stats
accumulating before it stabilizes.

Also, fork/thread creation is one aspect of testing, but what about thread
termination performance, which has to sync back to the creating thread? How
is that handled, and is there some code around that can test this?


Thanks!

--
Al

Ingo Molnar

Jul 9, 2007, 6:50:11 PM

* Willy Tarreau <w...@1wt.eu> wrote:

> > The biggest user-visible change in -v19 is reworked sleeper
> > fairness: it's similar in behavior to -v18 but works more
> > consistently across nice levels. Fork-happy workloads (like kernel
> > builds) should behave better as well. There are also a handful of
> > speedups: unsigned math, 32-bit speedups, O(1) task pickup,
> > debloating and other micro-optimizations.
>
> Interestingly, I also noticed the possibility of O(1) task pickup when
> playing with v18, but did not detect any noticeable improvement with
> it. Of course, it depends on the workload and I probably didn't
> perform the most relevant tests.

yeah - it's a small tweak. CFS is O(31) in sleep/wakeup so it's now all
a big O(1) family again :)

> V19 works very well here on 2.6.20.14. I could start 32k busy loops at
> nice +19 (I exhausted the 32k pids limit), and could still perform
> normal operations. I noticed that 'vmstat' scans all the pid entries
> under /proc, which takes ages to collect data before displaying a
> line. Obviously, the system sometimes shows some short latencies, but
> not much more than what you get from an SSH session through a remote DSL
> connection.

great! I did not try to push it this far, yet.

> Here's a vmstat 1 output:
>
> r b w swpd free buff cache si so bi bo in cs us sy id
> 32437 0 0 0 809724 488 6196 0 0 1 0 135 0 24 72 4
> 32436 0 0 0 811336 488 6196 0 0 0 0 717 0 78 22 0

crazy :-)

> Amusingly, I started mpg123 during this test and it skipped quite a
> bit. After setting all tasks to SCHED_IDLE, it did not skip anymore.
> All this seems to behave like one could expect.

yeah. It behaves better than i expected in fact - 32K tasks is pushing
things quite a bit. (we've got a 32K PID limit for example)

> I also started 30k processes distributed in 130 groups of 234 chained
> by pipes in which one byte is passed. I get an average of 8000 in the
> run queue. The context switch rate is very low and sometimes even null
> in this test, maybe some of them are starving, I really do not know :
>
> r b w swpd free buff cache si so bi bo in cs us sy id
> 7752 0 1 0 656892 244 4196 0 0 0 0 725 0 16 84 0

hm, could you profile this? We could have some bottleneck somewhere
(likely not in the scheduler) with that many tasks being runnable. [
With CFS you can actually run a profiler under this workload ;-) ]

> In my tree, I have replaced the rbtree with the ebtree we talked
> about, but it did not bring any performance boost because, even though
> insert() and delete() are faster, the scheduler is already quite good
> at avoiding them as much as possible, mostly relying on rb_next()
> which has the same cost in both trees. All in all, the only variations
> I noticed were caused by cacheline alignment when I tried to reorder
> fields in the eb_node. So I will stop my experiments here since I
> don't see any more room for improvement.

well, just a little bit of improvement would be nice to have too :)

Ingo

Mike Galbraith

Jul 10, 2007, 4:10:08 AM
Greetings,

On Fri, 2007-07-06 at 19:33 +0200, Ingo Molnar wrote:
> i'm pleased to announce release -v19 of the CFS scheduler patchset.

I've taken mainline git tree (freshly integrated CFS!) out for a
multimedia spin. I tested watching movies and listening to music in the
presence of various sleep/burn loads, pure burn loads, and mixed loads.
All was peachy here.. I saw no frame drops or sound skips or other
artifacts under any load where the processor could possibly meet demand.

-Mike

Bill Davidsen

Jul 11, 2007, 1:30:14 PM
Mike Galbraith wrote:
> Greetings,
>
> On Fri, 2007-07-06 at 19:33 +0200, Ingo Molnar wrote:
>> i'm pleased to announce release -v19 of the CFS scheduler patchset.
>
> I've taken mainline git tree (freshly integrated CFS!) out for a
> multimedia spin. I tested watching movies and listening to music in the
> presence of various sleep/burn loads, pure burn loads, and mixed loads.
> All was peachy here.. I saw no frame drops or sound skips or other
> artifacts under any load where the processor could possibly meet demand.
>
I would agree with preliminary testing, save that if you get a lot of
processes updating the screen at once, there seems to be a notable case
of processes getting no CPU for 100-300ms, followed by a lot of CPU.

I see this clearly with the "glitch1" test with four scrolling xterms
and glxgears, but also watching videos with little busy processes on the
screen. The only version where I never see this in test or with real use
is cfs-v13.

--
Bill Davidsen <davi...@tmr.com>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot

Ingo Molnar

Jul 11, 2007, 5:00:20 PM

* Bill Davidsen <davi...@tmr.com> wrote:

> > I've taken mainline git tree (freshly integrated CFS!) out for a
> > multimedia spin. I tested watching movies and listening to music in
> > the presence of various sleep/burn loads, pure burn loads, and mixed
> > loads. All was peachy here.. I saw no frame drops or sound skips or
> > other artifacts under any load where the processor could possibly
> > meet demand.
>
> I would agree with preliminary testing, save that if you get a lot of
> processes updating the screen at once, there seems to be a notable
> case of processes getting no CPU for 100-300ms, followed by a lot of
> CPU.
>
> I see this clearly with the "glitch1" test with four scrolling xterms
> and glxgears, but also watching videos with little busy processes on
> the screen. The only version where I never see this in test or with
> real use is cfs-v13.

just as a test, does this go away if you:

renice -20 $(pidof Xorg)

i.e. is this connected to the way X is scheduled?

Another thing to check would be whether it goes away if you set the
granularity to some really finegrained value:

echo 0 > /proc/sys/kernel/sched_wakeup_granularity_ns
echo 500000 > /proc/sys/kernel/sched_granularity_ns

this really pushes things - but it tests the theory whether this is
related to granularity.

Ingo

Bill Davidsen

Jul 12, 2007, 8:50:13 AM
Ingo Molnar wrote:
> * Bill Davidsen <davi...@tmr.com> wrote:
>
>
>>> I've taken mainline git tree (freshly integrated CFS!) out for a
>>> multimedia spin. I tested watching movies and listening to music in
>>> the presence of various sleep/burn loads, pure burn loads, and mixed
>>> loads. All was peachy here.. I saw no frame drops or sound skips or
>>> other artifacts under any load where the processor could possibly
>>> meet demand.
>>>
>> I would agree with preliminary testing, save that if you get a lot of
>> processes updating the screen at once, there seems to be a notable
>> case of processes getting no CPU for 100-300ms, followed by a lot of
>> CPU.
>>
>> I see this clearly with the "glitch1" test with four scrolling xterms
>> and glxgears, but also watching videos with little busy processes on
>> the screen. The only version where I never see this in test or with
>> real use is cfs-v13.
>>
>
> just as a test, does this go away if you:
>
> renice -20 $(pidof Xorg)
>
> i.e. is this connected to the way X is scheduled?
>
>
Partial answer: -10 didn't help with v16, I'll try more boost ASAP, but
power has been spotty in upstate NY, 20k+ customers with none and the
rest of us subject to "load shedding" with zero warning, and the test
machines have UPS but no generator, so I hesitate to use them while
power is unstable.

> Another thing to check would be whether it goes away if you set the
> granularity to some really finegrained value:
>
> echo 0 > /proc/sys/kernel/sched_wakeup_granularity_ns
> echo 500000 > /proc/sys/kernel/sched_granularity_ns
>
> this really pushes things - but it tests the theory whether this is
> related to granularity.
>

Will do, you suggested dropping sched_granularity_ns to 1000000 earlier,
and that didn't do it, but I didn't change the wakeup, and will test
these values later today.

--
bill davidsen <davi...@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979

Bill Davidsen

Jul 13, 2007, 5:30:11 PM
Ingo Molnar wrote:
> * Bill Davidsen <davi...@tmr.com> wrote:
>
>>> I've taken mainline git tree (freshly integrated CFS!) out for a
>>> multimedia spin. I tested watching movies and listening to music in
>>> the presence of various sleep/burn loads, pure burn loads, and mixed
>>> loads. All was peachy here.. I saw no frame drops or sound skips or
>>> other artifacts under any load where the processor could possibly
>>> meet demand.
>> I would agree with preliminary testing, save that if you get a lot of
>> processes updating the screen at once, there seems to be a notable
>> case of processes getting no CPU for 100-300ms, followed by a lot of
>> CPU.
>>
>> I see this clearly with the "glitch1" test with four scrolling xterms
>> and glxgears, but also watching videos with little busy processes on
>> the screen. The only version where I never see this in test or with
>> real use is cfs-v13.
>
> just as a test, does this go away if you:
>
> renice -20 $(pidof Xorg)
>
> i.e. is this connected to the way X is scheduled?
>
Doing this slows down the display rates, but doesn't significantly help
the smoothness of the gears.

> Another thing to check would be whether it goes away if you set the
> granularity to some really finegrained value:
>
> echo 0 > /proc/sys/kernel/sched_wakeup_granularity_ns
> echo 500000 > /proc/sys/kernel/sched_granularity_ns
>
> this really pushes things - but it tests the theory whether this is
> related to granularity.
>

I didn't test this with standard Xorg priority, I should go back and try
that. But it didn't really make much difference. The gears and scrolling
xterms ran slower with Xorg at -20 with any sched settings. I'll do that
as soon as a build finishes and I can reboot.

I should really go back to 2.6.21.6, 2.6.22 has many bizarre behaviors
with FC6. Automount starts taking 30% of CPU (unused at the moment), the
sensors applet doesn't work, etc. I hope over the weekend I can get bug
reports out on all this, but there are lots of non-critical oddities.
> Ingo


--
Bill Davidsen <davi...@tmr.com>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot

Markus

Jul 14, 2007, 7:40:07 AM
> i'm pleased to announce release -v19 of the CFS scheduler patchset.
>
> The rolled-up CFS patch against today's -git kernel, v2.6.22-rc7,
> v2.6.22-rc6-mm1, v2.6.21.5 or v2.6.20.14 can be downloaded from the
> usual place:
>
> http://people.redhat.com/mingo/cfs-scheduler/
Well, I took a 2.6.21 (gentoo-patchset...) and applied your cfs-v19
patch for the 2.6.21 kernels. It's an amd64 system and the "normal"
(= only gentoo patches) kernel is working fine!

> As usual, any sort of feedback, bugreport, fix and suggestion is more
> than welcome!

When I start the system, the mouse is jerking like under heavy load
(although it's idle!).
Then some programs just quit. They just disappear, no message in any
log. (It was about four apps I noticed over a period of about two hours
of testing.)

I'll try a git snapshot next, but is there a way I can get more output?

Greetz
Markus

Markus

Jul 14, 2007, 11:20:08 AM
> I'll try a git snapshot next, but is there a way I can get more
> output?
Did it. The mouse is smooth, just when one app is being quit (don't know
why...) the mouse will be jerking for a few seconds...

So, is there any way to get a more verbose output regarding the cfs?

Ed Tomlinson

Jul 14, 2007, 1:20:11 PM
Hi,

I run a java application at nice 15. It's been a background application here for as long
as SD and CFS have been around. If I have a compile running at nice 0, with v19 java
gets so little cpu that the wrapper that runs to monitor it is timing out waiting for
it to start. This is new in v19 - something in v19 is not meshing well with my mix
of applications...

Kernel is gentoo 2.6.22-r1 + cfs v19

How can I help to debug this?
Ed Tomlinson

On Friday 06 July 2007 13:33, Ingo Molnar wrote:
> i'm pleased to announce release -v19 of the CFS scheduler patchset.
>
> The rolled-up CFS patch against today's -git kernel, v2.6.22-rc7,
> v2.6.22-rc6-mm1, v2.6.21.5 or v2.6.20.14 can be downloaded from the
> usual place:
>
>     http://people.redhat.com/mingo/cfs-scheduler/
>  
> The biggest user-visible change in -v19 is reworked sleeper fairness:
> it's similar in behavior to -v18 but works more consistently across nice
> levels. Fork-happy workloads (like kernel builds) should behave better
> as well. There are also a handful of speedups: unsigned math, 32-bit
> speedups, O(1) task pickup, debloating and other micro-optimizations.

Mike Galbraith

Jul 15, 2007, 1:30:12 AM
On Sat, 2007-07-14 at 13:19 -0400, Ed Tomlinson wrote:
> Hi,

Hi Ed,

> I run a java application at nice 15. It's been a background application here for as long
> as SD and CFS have been around. If I have a compile running at nice 0, with v19 java
> gets so little cpu that the wrapper that runs to monitor it is timing out waiting for
> it to start. This is new in v19 - something in v19 is not meshing well with my mix
> of applications...
>
> Kernel is gentoo 2.6.22-r1 + cfs v19
>
> How can I help to debug this?

I hear Ingo is off having a genuine long weekend, but in the meantime,
you could try echo 30 > /proc/sys/kernel/sched_features to eliminate the
sleeper fairness changes in v19, since that was the biggest change.

Sending a few seconds of logged /proc/sched_debug will also help get a
picture of what's happening, and lovely would be a method to reproduce
the problem locally.

-Mike

Markus

Jul 15, 2007, 9:00:10 AM
> Sending a few seconds of logged /proc/sched_debug will also help get a
> picture of what's happening, and lovely would be a method to reproduce
> the problem locally.

Hi. Is there anything like the sched_debug in the 2.6.22-git5?
Because I have a cfs-problem as well [1].

Markus


[1] http://lkml.org/lkml/2007/07/14/60

Mike Galbraith

Jul 15, 2007, 3:50:06 PM
On Sun, 2007-07-15 at 14:53 +0200, Markus wrote:
> [1] http://lkml.org/lkml/2007/07/14/60

Hm. Tasks disappearing isn't your typical process scheduler problem
by any means, nor is an idle box exhibiting mouse "lurchiness". Is
there anything unusual in your logs?

-Mike

Markus

Jul 15, 2007, 5:20:10 PM
> > [1] http://lkml.org/lkml/2007/07/14/60
>
> Hm. Tasks disappearing isn't your typical process scheduler problem
> by any means, nor is an idle box exhibiting mouse "lurchiness". Is
> there anything unusual in your logs?

I know it's not typical, but when my current kernel is stable yet
shows the same problem with the cfs-patch applied, just like the
git-snapshot does, I would say it's a cfs issue.
There is nothing in the logs when a program dies; that's why I asked for
a way to make the kernel more verbose.
But I can build a plain 2.6.22 without cfs and one with it and compare
dmesg output, if that helps.

Markus

Mike Galbraith

Jul 16, 2007, 2:50:06 AM
On Sun, 2007-07-15 at 23:11 +0200, Markus wrote:
> > > [1] http://lkml.org/lkml/2007/07/14/60
> >
> > Hm. Tasks disappearing isn't your typical process scheduler problem
> > by any means, nor is an idle box exhibiting mouse "lurchiness". Is
> > there anything unusual in your logs?
>
> I know it's not typical, but when my current kernel is stable yet
> shows the same problem with the cfs-patch applied, just like the
> git-snapshot does, I would say it's a cfs issue.

Yes, from your description, and with the now presented additional
information that the git-snapshot exhibits the same symptoms, it sounds
like cfs _may_ be implicated in some way. I can't imagine how at the
moment. In your original report, there are other patches involved,
which are unknown variables. The git-snapshot contains very many
changes other than cfs as well. I'd eliminate absolutely all unknowns
as the first step.

> But I can build a plain 2.6.22 without cfs and one with it and compare
> dmesgs output, if that helps.

Yes. It would definitely be worthwhile to test a virgin stable kernel,
and then add only cfs with identical config. Dmesg output may not turn
up anything, but eliminating all other variables should either pin the
tail on the donkey (cfs?) or vindicate it, and that's what needs to be
nailed down solidly first.

-Mike

Ingo Molnar

Jul 16, 2007, 4:10:11 AM

* Mike Galbraith <efa...@gmx.de> wrote:

> Sending a few seconds of logged /proc/sched_debug will also help get a
> picture of what's happening, and lovely would be a method to reproduce
> the problem locally.

also, by running this script:

http://people.redhat.com/mingo/cfs-scheduler/tools/cfs-debug-info.sh

and sending us the file it produces we'll have most of the environmental
information as well.

Ingo

Ingo Molnar

Jul 16, 2007, 5:20:07 AM

* Ed Tomlinson <e...@aei.ca> wrote:

> I run a java application at nice 15. It's been a background
> application here for as long as SD and CFS have been around. If I
> have a compile running at nice 0, with v19 java gets so little cpu
> that the wrapper that runs to monitor it is timing out waiting for
> it to start. This is new in v19 - something in v19 is not meshing
> well with my mix of applications...

how much longer did the startup of the java app get relative to say v18?

to debug this, could you check whether this problem goes away if you use
nice 10 (or nice 5) instead of nice 15?

Ingo

Ingo Molnar

Jul 16, 2007, 5:50:10 AM

* Markus <list...@web.de> wrote:

> [...] The mouse is smooth, just when one app is being quit (don't know
> why...) the mouse will be jerking for a few seconds...

is the mouse jerky on any app quitting? Or is your observation the
following: _sometimes_ apps quit unexpectedly (their window just
vanishes?), and _at the same time_, the mouse becomes jerky as well, for
a few seconds?

the mouse typically only becomes jerky when there's some really high
load on the system - anything else would be a kernel bug. A jerky mouse
on an unloaded system is definitely a sign of some sort of kernel bug
(in or outside of the scheduler). An app vanishing unexpectedly might
mean an OOM-kill - but that should show up in the syslog as well.
Pretty weird.

Can you make this regression trigger arbitrarily, so that we could debug
it better? Apps exiting unexpectedly can be debugged via:

http://kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.22-rc6/2.6.22-rc6-mm1/broken-out/vdso-print-fatal-signals.patch

you can turn it on via the print-fatal-signals=1 boot option or via:

echo 1 > /proc/sys/kernel/print-fatal-signals

this feature will produce a small dump to the syslog about every app
that exits unexpectedly. Note that this might not cover all types of
"window suddenly vanishes" regressions.

Ingo

Ed Tomlinson

Jul 16, 2007, 7:20:10 AM
On Monday 16 July 2007 05:17, Ingo Molnar wrote:
>
> * Ed Tomlinson <e...@aei.ca> wrote:
>
> > I run a java application at nice 15. It's been a background
> > application here for as long as SD and CFS have been around. If I
> > have a compile running at nice 0, with v19 java gets so little cpu
> > that the wrapper that runs to monitor it is timing out waiting for
> > it to start. This is new in v19 - something in v19 is not meshing
> > well with my mix of applications...
>
> how much longer did the startup of the java app get relative to say v18?
>
> to debug this, could you check whether this problem goes away if you use
> nice 10 (or nice 5) instead of nice 15?

Ingo,

It may take a day or two before I get to test this. I have had to revert to 2.6.21 -
it seems that 22 triggers a stall here (21 can also trigger this but it's harder)...

Thanks
Ed

Markus

Jul 16, 2007, 2:00:07 PM
> > [...] The mouse is smooth, just when one app is being quit (don't
> > know why...) the mouse will be jerking for a few seconds...
> is the mouse jerky on any app quitting?
No.

> Or is your observation the following: _sometimes_ apps quit
> unexpectedly (their window just vanishes?), and _at the same time_,
> the mouse becomes jerky as well, for a few seconds?

Exactly.

> the mouse typically only becomes jerky when there's some really high
> load on the system - anything else would be a kernel bug. A jerky
> mouse on an unloaded system is definitely a sign of some sort of
> kernel bug (in or outside of the scheduler). An app vanishing
> unexpectedly might mean an OOM-kill - but that should show up in the
> syslog as well.
> Pretty weird.

Well, the system uses about 30% of the cpu (Cool'n'Quiet put it on the
lowest frequency).

I made a plain 2.6.22.1 and could use it for about 2 hours without
any problem. Then I applied the cfs-v19 patch for that kernel, rebuilt from
mrproper with the saved config and booted. After a few minutes the
first app vanished... some more followed over time (I just surfed around
a bit...)

The dmesg output does not differ in any interesting point (just some
numbers, like raid-benchmark, some irqs or usb-numbers...)

So it's obviously something within cfs... unfortunately...

> Can you make this regression trigger arbitrarily, so that we could
> debug it better? Apps exiting unexpectedly can be debugged via:
>
http://kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.22-rc6/2.6.22-rc6-mm1/broken-out/vdso-print-fatal-signals.patch
>
> you can turn it on via the print-fatal-signals=1 boot option or via:
>
> echo 1 > /proc/sys/kernel/print-fatal-signals
>
> this feature will produce a small dump to the syslog about every app
> that exits unexpectedly. Note that this might not cover all types of
> "window suddenly vanishes" regressions.

Nothing is printed for a disappeared app for me.


Is there anything more I can try?


Markus

Chuck Ebbert

Jul 16, 2007, 5:40:11 PM
On 07/13/2007 05:19 PM, Bill Davidsen wrote:
>
> I should really go back to 2.6.21.6, 2.6.22 has many bizarre behaviors
> with FC6. Automount starts taking 30% of CPU (unused at the moment)

Can you confirm whether CFS is involved, i.e. does it spin like that
even without the CFS patch applied?

Ingo Molnar

Jul 16, 2007, 6:00:12 PM

* Chuck Ebbert <ceb...@redhat.com> wrote:

> On 07/13/2007 05:19 PM, Bill Davidsen wrote:
> >
> > I should really go back to 2.6.21.6, 2.6.22 has many bizarre behaviors
> > with FC6. Automount starts taking 30% of CPU (unused at the moment)
>
> Can you confirm whether CFS is involved, i.e. does it spin like that
> even without the CFS patch applied?

hmmm .... could you take out the kernel/time.c (sys_time()) changes from
the CFS patch, does that solve the automount issue? If yes, could
someone take a look at automount and check whether it makes use of
time(2) and whether it combines it with finer grained time sources?

Ingo

Bill Davidsen

Jul 17, 2007, 12:30:08 AM
Ingo Molnar wrote:
> * Chuck Ebbert <ceb...@redhat.com> wrote:
>
>
>> On 07/13/2007 05:19 PM, Bill Davidsen wrote:
>>
>>> I should really go back to 2.6.21.6, 2.6.22 has many bizarre behaviors
>>> with FC6. Automount starts taking 30% of CPU (unused at the moment)
>>>
>> Can you confirm whether CFS is involved, i.e. does it spin like that
>> even without the CFS patch applied?
>>
>
>
I will try that, but not until Tuesday night. I've been here too long
today and have an out-of-state meeting tomorrow. I'll take a look after
dinner. Note that the latest 2.6.21 with cfs-v19 doesn't have any
problems of any nature, other than suspend to RAM not working, and I may
have the config wrong. Runs really well otherwise, but I'll test drive
2.6.22 w/o the patch.

> hmmm .... could you take out the kernel/time.c (sys_time()) changes from
> the CFS patch, does that solve the automount issue? If yes, could
> someone take a look at automount and check whether it makes use of
> time(2) and whether it combines it with finer grained time sources?
>
>

Will do.

--
bill davidsen <davi...@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979


Ian Kent, Jul 17, 2007, 1:10:05 AM

On Mon, 2007-07-16 at 23:55 +0200, Ingo Molnar wrote:
> * Chuck Ebbert <ceb...@redhat.com> wrote:
>
> > On 07/13/2007 05:19 PM, Bill Davidsen wrote:
> > >
> > > I should really go back to 2.6.21.6, 2.6.22 has many bizarre behaviors
> > > with FC6. Automount starts taking 30% of CPU (unused at the moment)
> >
> > Can you confirm whether CFS is involved, i.e. does it spin like that
> > even without the CFS patch applied?
>
> hmmm .... could you take out the kernel/time.c (sys_time()) changes from
> the CFS patch, does that solve the automount issue? If yes, could
> someone take a look at automount and check whether it makes use of
> time(2) and whether it combines it with finer grained time sources?

Yes it does and I have two reported bugs so far.

In several places I have code similar to:

wait.tv_sec = time(NULL) + 1;
wait.tv_nsec = 0;

signaled = 0;
while (!signaled) {
        status = pthread_cond_timedwait(&cond, &mutex, &wait);
        if (status) {
                if (status == ETIMEDOUT)
                        break;
                fatal(status);
        }
}

leads to automount spinning, with strace output a bit like:

futex(0x80034b60, FUTEX_WAKE, 1) = 0
clock_gettime(CLOCK_REALTIME, {1184593936, 130925919}) = 0
time(NULL) = 1184593935
futex(0x80034b60, FUTEX_WAKE, 1) = 0
clock_gettime(CLOCK_REALTIME, {1184593936, 131160876}) = 0
time(NULL) = 1184593935
futex(0x80034b60, FUTEX_WAKE, 1) = 0
clock_gettime(CLOCK_REALTIME, {1184593936, 131377080}) = 0
time(NULL) = 1184593935
futex(0x80034b60, FUTEX_WAKE, 1) = 0
clock_gettime(CLOCK_REALTIME, {1184593936, 131593297}) = 0
time(NULL) = 1184593935
futex(0x80034b60, FUTEX_WAKE, 1) = 0
clock_gettime(CLOCK_REALTIME, {1184593936, 131871792}) = 0

There should be something like:

futex(0x5555557868c4, FUTEX_WAIT, 5321099, {0, 998091311}) = -1 ETIMEDOUT (Connection timed out)

in there I think.

Ian

Ingo Molnar, Jul 17, 2007, 3:40:07 AM

* Markus <list...@web.de> wrote:

> The dmesg output is not differing in any interesting point (just some
> numbers, like raid-benchmark, some irqs or usb-numbers...)

could you please send me the cfs-debug-info output nevertheless?

> Nothing is printed for a disapeared app for me.
>
> Is there anything more I can try?

sure - could you start one of those apps via:

strace -ttt -TTT -o trace.log -f <app>

and wait for it to "disappear"? Then compress the trace.log via bzip2 -9
(it's probably going to be a really large file) and send it to me?

Ingo

Ingo Molnar, Jul 17, 2007, 3:50:11 AM

* Ian Kent <ra...@themaw.net> wrote:

> Yes it does and I have two reported bugs so far.
>
> In several places I have code similar to:
>
> wait.tv_sec = time(NULL) + 1;
> wait.tv_nsec = 0;
>
> signaled = 0;
> while (!signaled) {
>         status = pthread_cond_timedwait(&cond, &mutex, &wait);
>         if (status) {
>                 if (status == ETIMEDOUT)
>                         break;
>                 fatal(status);
>         }
> }

ah! It passes in a low-res time source into a high-res time interface
(pthread_cond_timedwait()). Could you change the time(NULL) + 1 to
time(NULL) + 2, or change it to:

gettimeofday(&wait, NULL);
wait.tv_sec++;

does this solve the spinning?

i'm wondering how widespread this is. If automount is the only app doing
this then _maybe_ we could get away with it by changing automount?

Ingo

Ian Kent, Jul 17, 2007, 7:30:09 AM

On Tue, 2007-07-17 at 09:45 +0200, Ingo Molnar wrote:
> * Ian Kent <ra...@themaw.net> wrote:
>
> > Yes it does and I have two reported bugs so far.
> >
> > In several places I have code similar to:
> >
> > wait.tv_sec = time(NULL) + 1;
> > wait.tv_nsec = 0;
> >
> > signaled = 0;
> > while (!signaled) {
> >         status = pthread_cond_timedwait(&cond, &mutex, &wait);
> >         if (status) {
> >                 if (status == ETIMEDOUT)
> >                         break;
> >                 fatal(status);
> >         }
> > }
>
> ah! It passes in a low-res time source into a high-res time interface
> (pthread_cond_timedwait()). Could you change the time(NULL) + 1 to
> time(NULL) + 2, or change it to:
>
> gettimeofday(&wait, NULL);
> wait.tv_sec++;

OK, I'm with you, hi-res timer.
But even so, how can the time be in the past after adding a second?

Is it because I'm not setting tv_nsec when it's close to a second
boundary, and hence your recommendation above?

>
> does this solve the spinning?

I don't have a system to test this on so I'll try to get one of the
people that logged the problem to test a patch.

>
> i'm wondering how widespread this is. If automount is the only app doing
> this then _maybe_ we could get away with it by changing automount?

I'm happy to change automount but that could cause odd version-specific
problems for people updating their kernel on an older installed base.

Aaah .. and they'll all blame me!! ;)

Ian

Markus, Jul 17, 2007, 9:10:08 AM

> could you please send me the cfs-debug-info output nevertheless?
private mail (4.9K)

> > Nothing is printed for a disapeared app for me.
> >
> > Is there anything more I can try?
>
> sure - could you start one of those apps via:
>
> strace -ttt -TTT -o trace.log -f <app>
>
> and wait for it to "disappear"? Then compress the trace.log via bzip2 -9
> (it's probably going to be a really large file) and send me it?

private mail, as well (187K)

If attachments are allowed, I can resend them on the list as well (or
just ask me...)


To answer a private mail: I do not use any kernel module that's not part
of the official kernel! And of course nothing proprietary:
# cat /proc/sys/kernel/tainted

0

I used gcc-4.1.2 (glibc-2.5-r4) to build the kernels. (It's an amd64
system, quite stable so far.)

The programs that "disappeared" are mostly graphical ones (I haven't
noticed others so far)... [1] might also be caused by this:
amarok, kdesktop, the whole X session, konqueror, konsole, but also gtk apps


Markus


[1] http://lkml.org/lkml/2007/07/14/64

Chuck Ebbert, Jul 17, 2007, 12:40:14 PM

On 07/17/2007 03:45 AM, Ingo Molnar wrote:
> * Ian Kent <ra...@themaw.net> wrote:
>
>> Yes it does and I have two reported bugs so far.
>>
>> In several places I have code similar to:
>>
>> wait.tv_sec = time(NULL) + 1;
>> wait.tv_nsec = 0;
>>
>> signaled = 0;
>> while (!signaled) {
>>         status = pthread_cond_timedwait(&cond, &mutex, &wait);
>>         if (status) {
>>                 if (status == ETIMEDOUT)
>>                         break;
>>                 fatal(status);
>>         }
>> }
>
> ah! It passes in a low-res time source into a high-res time interface
> (pthread_cond_timedwait()). Could you change the time(NULL) + 1 to
> time(NULL) + 2, or change it to:
>
> gettimeofday(&wait, NULL);
> wait.tv_sec++;
>
> does this solve the spinning?
>
> i'm wondering how widespread this is. If automount is the only app doing
> this then _maybe_ we could get away with it by changing automount?

Odds are there's at least one other app doing that somewhere.

Would reverting the CFS changes to time.c fix this problem?
That optimization just got merged in 2.6.22 mainline...

Ingo Molnar, Jul 17, 2007, 1:10:09 PM

* Markus <list...@web.de> wrote:

> > > Nothing is printed for a disapeared app for me.
> > >
> > > Is there anything more I can try?
> >
> > sure - could you start one of those apps via:
> >
> > strace -ttt -TTT -o trace.log -f <app>
> >
> > and wait for it to "disappear"? Then compress the trace.log via
> > bzip2 -9 (it's probably going to be a really large file) and send me
> > it?
> private mail, aswell (187K)

i think it fails here due to some IO error:

9173 1184675906.674610 write(2, "konqueror: Fatal IO error: clien"..., 41) = 41 <0.000007>

could this be due to:

9173 1184675906.194424 ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff341af5c0) = -1 ENOTTY (Inappropriate ioctl for device) <0.000006>
9173 1184675906.194463 ioctl(2, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff341af5c0) = -1 ENOTTY (Inappropriate ioctl for device) <0.000004>

? Are those -ENOTTY results normal?

or perhaps the problem started a lot earlier, at:

9173 1184675906.155015 write(2, "In file kernel/qpixmap_x11.cpp, "..., 56) = 56 <0.000006>
9173 1184675906.155052 write(2, "QImage::convertDepth: Image is a"..., 44) = 44 <0.000004>
9173 1184675906.155169 gettimeofday({1184675906, 155179}, NULL) = 0 <0.000006>
9173 1184675906.155249 write(11, "close(6f1c2f7):about:konqueror\n", 31) = 31 <0.000032>

i think konqueror tried to say something here about an image problem?

could you perhaps upload the strace to some webpage so that others can
take a look too?

it might also be good to add "-s 1000" to the strace command, so that we
can see the full messages that konqueror tried to log to some other
task, i.e.:

strace -s 1000 -ttt -TTT -o trace.log -f <app>

and perhaps try to do a 'comparison' trace.normal.log as well, with
konqueror having no problems. Also a KDE expert's advice would be useful
here too i guess ...

Ingo

Ingo Molnar, Jul 17, 2007, 1:20:10 PM

* Ian Kent <ra...@themaw.net> wrote:

> > ah! It passes in a low-res time source into a high-res time interface
> > (pthread_cond_timedwait()). Could you change the time(NULL) + 1 to
> > time(NULL) + 2, or change it to:
> >
> > gettimeofday(&wait, NULL);
> > wait.tv_sec++;
>
> OK, I'm with you, hi-res timer.
> But even so, how is the time in the past after adding a second.
>
> Is it because I'm not setting tv_nsec when it's close to a second
> boundary, and hence your recommendation above?

yeah, it looks a bit suspicious: you create a +1 second timeout out of a
1-second-resolution timesource. I don't yet understand the failure mode
that results in that looping and in the 30% CPU use, though - do you
understand it perhaps? (and automount is still functional while this is
happening, correct?)

Ingo

Ingo Molnar, Jul 17, 2007, 1:20:16 PM

* Ingo Molnar <mi...@elte.hu> wrote:

> i think it fails here due to some IO error:
>
> 9173 1184675906.674610 write(2, "konqueror: Fatal IO error:
> clien"..., 41) = 41 <0.000007>

oh, and i missed the obvious request: could you start konqueror from a
terminal and see what it prints when it goes down?

Markus, Jul 17, 2007, 4:00:11 PM

> 9173 1184675906.194424 ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff341af5c0) = -1 ENOTTY (Inappropriate ioctl for device) <0.000006>
> 9173 1184675906.194463 ioctl(2, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fff341af5c0) = -1 ENOTTY (Inappropriate ioctl for device) <0.000004>
>
> ? Are those -ENOTTY results normal?
Yes, I see it on any kernel.


> 9173 1184675906.155015 write(2, "In file kernel/qpixmap_x11.cpp, "..., 56) = 56 <0.000006>
> 9173 1184675906.155052 write(2, "QImage::convertDepth: Image is a"..., 44) = 44 <0.000004>
> 9173 1184675906.155169 gettimeofday({1184675906, 155179}, NULL) = 0 <0.000006>
> 9173 1184675906.155249 write(11, "close(6f1c2f7):about:konqueror\n", 31) = 31 <0.000032>
>
> i think konqueror tried to say something here about an image problem?
Well, yes:
In file kernel/qpixmap_x11.cpp, line 633: Out of memory
QImage::convertDepth: Image is a null image
In file kernel/qpixmap_x11.cpp, line 633: Out of memory
QImage::convertDepth: Image is a null image
In file kernel/qpixmap_x11.cpp, line 633: Out of memory
QImage::convertDepth: Image is a null image
In file kernel/qpixmap_x11.cpp, line 633: Out of memory
QImage::convertDepth: Image is a null image
konqueror: Fatal IO error: client killed

And no, my 2 GB of RAM are not full:
$ free -m
             total       used       free     shared    buffers     cached
Mem:          2012       1077        935          0         22        441
-/+ buffers/cache:        612       1400
Swap:         2070          0       2070

> could you perhaps upload the strace to some webpage so that others can
> take a look too?

hm, I don't have any webspace...

> it might also be good to add "-s 1000" to the strace command, so that we
> can see the full messages that konqueror tried to log to some other
> task, i.e.:
>
> strace -s 1000 -ttt -TTT -o trace.log -f <app>
>
> and perhaps try to do a 'comparison' trace.normal.log as well, with
> konqueror having no problems.

I now made some new strace logs:
- konq crash 251K
- Konq without crash on cfs 302K
- konq without crash on non-cfs 248K


Markus

Ingo Molnar, Jul 17, 2007, 4:20:09 PM

* Markus <list...@web.de> wrote:

> > i think konqueror tried to say something here about an image problem?

> Well, yes:
> In file kernel/qpixmap_x11.cpp, line 633: Out of memory
> QImage::convertDepth: Image is a null image
> In file kernel/qpixmap_x11.cpp, line 633: Out of memory
> QImage::convertDepth: Image is a null image
> In file kernel/qpixmap_x11.cpp, line 633: Out of memory
> QImage::convertDepth: Image is a null image
> In file kernel/qpixmap_x11.cpp, line 633: Out of memory
> QImage::convertDepth: Image is a null image
> konqueror: Fatal IO error: client killed

i think the problem starts here:

11902 1184699865.141939 read(3, "", 32) = 0 <0.000007>

file descriptor 3 is the Unix domain socket connection to X:

11902 1184699521.012316 socket(PF_FILE, SOCK_STREAM, 0) = 3 <0.000018>

a read result of 0 confused Konqueror so it quit immediately. I've
attached the relevant section below - does it make sense to anyone who
knows more about X, KDE and Konqueror internals than me? :)

Ingo

----------------------->
11902 1184699811.587247 gettimeofday({1184699811, 587257}, NULL) = 0 <0.000007>
11902 1184699811.587280 select(10, [3 4 5 7 9], [], [], {9, 999727}) = 1 (in [3], left {7, 946666}) <2.054594>
11902 1184699813.642502 gettimeofday({1184699813, 642849}, NULL) = 0 <0.000339>
11902 1184699813.643145 ioctl(3, FIONREAD, [32]) = 0 <0.000343>
11902 1184699813.643720 read(3, "\v\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0", 32) = 32 <0.000219>
11902 1184699813.644271 ioctl(3, FIONREAD, [0]) = 0 <0.000179>
11902 1184699813.644724 ioctl(3, FIONREAD, [0]) = 0 <0.000143>
11902 1184699813.645011 gettimeofday({1184699813, 645265}, NULL) = 0 <0.000248>
11902 1184699813.645448 select(10, [3 4 5 7 9], [], [], {7, 941719}) = 1 (in [3], left {0, 703333}) <7.238595>
11902 1184699820.884750 gettimeofday({1184699820, 884911}, NULL) = 0 <0.000155>
11902 1184699820.885279 ioctl(3, FIONREAD, [224]) = 0 <0.000339>
11902 1184699820.885960 read(3, "\34a\301!\5\0\200\4\365\0\0\0\210+\10\0\0\352\323\0\0\0\0\0\347 C\0\0\0\0\0\v\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\v\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\34a\301![\0\0\0V\1\0\0\211+\10\0\0mp\0\0\0\0\0\347 C\0\0\0\0\0\34a\301![\0\0\0V\1\0\0\211+\10\0\0mp\0\0\0\0\0\347 C\0\0\0\0\0\v\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\34a\301![\0\0\0002\1\0\0\211+\10\0\0mp\0\0\0\0\0\347 C\0\0\0\0\0", 224) = 224 <0.000475>
11902 1184699820.886800 write(3, "\24\0\6\0\5\0\200\4\365\0\0\0\4\0\0\0\0\0\0\0\0\4\0\0", 24) = 24 <0.000392>
11902 1184699820.887445 read(3, 0x7fffb5b9c8b0, 32) = -1 EAGAIN (Resource temporarily unavailable) <0.000338>
11902 1184699820.888119 poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, -1) = 1 <0.022256>
11902 1184699820.910435 read(3, "\34a\301![\0\0\0V\1\0\0\241+\10\0\0mp\0\0\0\0\0\347 C\0\0\0\0\0", 32) = 32 <0.000010>
11902 1184699820.910506 read(3, 0x7fffb5b9c8b0, 32) = -1 EAGAIN (Resource temporarily unavailable) <0.000009>
11902 1184699820.910557 poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, -1) = 1 <0.073035>
11902 1184699820.983650 read(3, "\34~\301!\5\0\200\4a\1\0\0\357+\10\0\1\r.\2\0\0\0\0`\257\235\0\0\0\0\0", 32) = 32 <0.000011>
11902 1184699820.983722 read(3, 0x7fffb5b9c8b0, 32) = -1 EAGAIN (Resource temporarily unavailable) <0.000009>
11902 1184699820.983773 poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, -1) = 1 <0.114041>
11902 1184699821.097872 read(3, "\34a\301!\5\0\200\4a\1\0\0Z,\10\0\0\352\323\0\0\0\0\0\347 C\0\0\0\0\0", 32) = 32 <0.000011>
11902 1184699821.097943 read(3, 0x7fffb5b9c8b0, 32) = -1 EAGAIN (Resource temporarily unavailable) <0.000009>
11902 1184699821.097994 poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, -1) = 1 <0.006383>
11902 1184699821.104428 read(3, "\1 \302!\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0b\340T\0\0\0\0\0", 32) = 32 <0.000010>
11902 1184699821.104519 ioctl(3, FIONREAD, [0]) = 0 <0.000010>
11902 1184699821.104571 ioctl(3, FIONREAD, [0]) = 0 <0.000009>
11902 1184699821.104626 gettimeofday({1184699821, 104643}, NULL) = 0 <0.000011>
11902 1184699821.104683 select(10, [3 4 5 7 9], [], [], {0, 482341}) = 0 (Timeout) <0.480889>
11902 1184699821.585719 gettimeofday({1184699821, 585738}, NULL) = 0 <0.000010>
11902 1184699821.585789 ioctl(3, FIONREAD, [0]) = 0 <0.000010>
11902 1184699821.585846 gettimeofday({1184699821, 585862}, NULL) = 0 <0.000009>
11902 1184699821.585900 select(10, [3 4 5 7 9], [], [], {0, 1122}) = 0 (Timeout) <0.003013>
11902 1184699821.588979 gettimeofday({1184699821, 588995}, NULL) = 0 <0.000010>
11902 1184699821.589065 ioctl(3, FIONREAD, [0]) = 0 <0.000010>
11902 1184699821.589121 gettimeofday({1184699821, 589137}, NULL) = 0 <0.000010>
11902 1184699821.589175 select(10, [3 4 5 7 9], [], [], {9, 997847}) = 0 (Timeout) <9.998225>
11902 1184699831.587540 gettimeofday({1184699831, 587594}, NULL) = 0 <0.000010>
11902 1184699831.587673 ioctl(3, FIONREAD, [0]) = 0 <0.000009>
11902 1184699831.587735 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <16.674843>
11902 1184699848.263072 ioctl(3, FIONREAD, [224]) = 0 <0.000191>
11902 1184699848.263658 read(3, "\34a\302!\5\0\200\4\365\0\0\0{\226\10\0\0\352\323\0\0\0\0\0\347 C\0\0\0\0\0\v\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\v\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\34a\302![\0\0\0V\1\0\0|\226\10\0\0mp\0\0\0\0\0\347 C\0\0\0\0\0\34a\302![\0\0\0V\1\0\0|\226\10\0\0mp\0\0\0\0\0\347 C\0\0\0\0\0\v\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\34a\302![\0\0\0002\1\0\0|\226\10\0\0mp\0\0\0\0\0\347 C\0\0\0\0\0", 224) = 224 <0.000500>
11902 1184699848.264825 write(3, "\24\0\6\0\5\0\200\4\365\0\0\0\4\0\0\0\0\0\0\0\0\4\0\0", 24) = 24 <0.000968>
11902 1184699848.266093 read(3, 0x7fffb5b9c8b0, 32) = -1 EAGAIN (Resource temporarily unavailable) <0.000283>
11902 1184699848.266933 poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, -1) = 1 <0.069502>
11902 1184699848.336494 read(3, "\34a\302![\0\0\0V\1\0\0\305\226\10\0\0mp\0\0\0\0\0\347 C\0\0\0\0\0", 32) = 32 <0.000010>
11902 1184699848.336565 read(3, 0x7fffb5b9c8b0, 32) = -1 EAGAIN (Resource temporarily unavailable) <0.000008>
11902 1184699848.336616 poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, -1) = 1 <0.140445>
11902 1184699848.477120 read(3, "\34\0\302!\5\0\200\4a\1\0\0U\227\10\0\1t\31\1\0\0\0\0`\257\235\0\0\0\0\0", 32) = 32 <0.000011>
11902 1184699848.477192 read(3, 0x7fffb5b9c8b0, 32) = -1 EAGAIN (Resource temporarily unavailable) <0.000009>
11902 1184699848.477242 poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, -1) = 1 <0.137828>
11902 1184699848.615128 read(3, "\34a\302!\5\0\200\4a\1\0\0\333\227\10\0\0\352\323\0\0\0\0\0\347 C\0\0\0\0\0", 32) = 32 <0.000011>
11902 1184699848.615201 read(3, 0x7fffb5b9c8b0, 32) = -1 EAGAIN (Resource temporarily unavailable) <0.000009>
11902 1184699848.615252 poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, -1) = 1 <0.009307>
11902 1184699848.624614 read(3, "\1 \303!\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0b\340T\0\0\0\0\0", 32) = 32 <0.000011>
11902 1184699848.624710 ioctl(3, FIONREAD, [0]) = 0 <0.000009>
11902 1184699848.624762 ioctl(3, FIONREAD, [0]) = 0 <0.000048>
11902 1184699848.624866 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <16.495008>
11902 1184699865.119950 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.119988 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120031 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120065 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120106 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120140 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.120181 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120214 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120255 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.120289 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120330 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120364 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120405 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120438 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120479 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.120513 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120554 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.120587 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120628 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120662 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120703 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120736 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120777 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.120839 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120881 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120915 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120956 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120990 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000008>
11902 1184699865.121030 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.121064 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121105 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.121156 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121198 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.121232 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121273 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.121307 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121348 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.121382 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121423 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
11902 1184699865.121504 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000008>
11902 1184699865.121546 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.121584 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121626 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.121659 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121704 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.121738 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121780 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.121813 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121855 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.121889 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121930 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.121964 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122005 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122038 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122080 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122113 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122154 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
11902 1184699865.122189 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122231 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122278 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000015>
11902 1184699865.122329 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122362 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.122403 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122437 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122478 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122512 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122553 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.122587 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122628 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.122662 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122703 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.122737 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122778 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122811 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122865 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.122900 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122941 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122975 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123031 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123065 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123106 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.123140 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123181 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123215 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123256 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.123290 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123331 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123365 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123406 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123440 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123481 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123515 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123556 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123589 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.123631 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.123665 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123705 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.123739 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123780 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.123814 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123855 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123889 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123930 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123964 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124005 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.124060 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000008>
11902 1184699865.124102 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.124135 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124177 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124211 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124255 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.124289 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124330 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124363 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000008>
11902 1184699865.124405 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124439 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124479 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124513 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124554 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124588 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124629 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124662 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124703 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.124737 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124778 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124811 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124865 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.124899 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124941 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.124974 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125015 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125049 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125090 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125136 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000008>
11902 1184699865.125178 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.125212 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125254 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.125287 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125329 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125363 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125404 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.125440 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125482 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125516 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125557 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.125591 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000044>
11902 1184699865.125706 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125740 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125781 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.125815 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125856 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125889 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125930 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125964 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126005 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126039 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126080 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126113 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.126154 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.126188 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126229 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.126263 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126304 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126338 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126379 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126413 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126454 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126488 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126529 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.126563 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126604 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126638 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126679 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.126712 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126753 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.126787 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126828 ioctl(3, FIONREAD, [0]) = 0 <0.000016>
11902 1184699865.126873 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126914 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126948 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126988 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.127039 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.127083 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.127117 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.127159 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.127193 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.127234 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.127267 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000011>
11902 1184699865.127325 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.127359 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000483>
11902 1184699865.127882 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.127917 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.127957 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.127990 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128030 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128063 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.128102 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128135 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128175 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128207 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128247 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128279 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128320 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128352 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.128391 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128423 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128463 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128495 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128535 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.128568 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128607 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128639 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.128679 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128711 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128751 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128783 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128823 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128870 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.128910 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128998 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129038 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129071 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.129111 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129143 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.129183 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129215 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129254 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129287 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129326 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.129359 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129399 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129432 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129471 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129503 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.129543 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.129575 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.129614 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129647 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129686 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129718 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.129758 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.129790 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.129830 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.129876 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129915 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129948 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129987 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130021 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.130061 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130094 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130133 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130166 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.130206 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.130239 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130278 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130311 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.130351 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.130384 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.130423 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130456 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130496 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.130530 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130569 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130602 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130641 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130674 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.130714 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130747 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130786 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130819 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130870 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130902 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130942 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130974 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131014 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131046 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131087 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131120 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.131160 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131192 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.131232 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131264 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.131304 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.131336 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.131376 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.131408 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.131448 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131480 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131519 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131552 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131591 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131624 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131663 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131696 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131735 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131768 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131807 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131840 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.131892 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.131924 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131964 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.131996 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.132036 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.132070 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132109 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132142 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.132182 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132215 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.132255 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.132298 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.132338 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132370 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132409 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132442 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132481 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.132513 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132552 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132585 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132624 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.132656 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.132696 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132728 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132768 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132801 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132840 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.132884 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132923 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132956 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132995 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.133027 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.133067 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133099 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133138 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133171 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.133210 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133242 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133281 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133333 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133374 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133406 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.133445 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133478 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133517 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133553 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.133592 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133624 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133663 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133696 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.133735 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133767 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133806 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133838 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133878 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.133923 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133963 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133995 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.134035 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.134067 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.134107 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134141 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134180 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134213 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134252 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134285 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134324 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.134356 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134396 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134428 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.134469 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.134502 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134541 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134574 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134613 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134646 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134685 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134718 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134757 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134790 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134830 ioctl(3, FIONREAD, [0]) = 0 <0.000017>
11902 1184699865.134874 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000021>
11902 1184699865.134931 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134964 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135004 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135036 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135075 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135108 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000010>
11902 1184699865.135151 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135184 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135223 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135256 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.135295 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135328 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135370 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135403 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135442 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135475 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135515 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.135547 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.135587 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135631 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.135671 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135703 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135742 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135775 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135814 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.135846 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.135886 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135918 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.135971 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136003 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.136043 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136075 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.136115 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136147 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136189 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136223 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136263 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136295 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136335 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136367 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.136407 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136439 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136479 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136511 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136551 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136583 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136623 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136655 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136694 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136727 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136766 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136799 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136838 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.136884 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136923 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136956 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136996 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137028 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137067 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137100 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137139 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137172 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137211 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137243 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137283 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137316 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137356 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137388 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137428 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137460 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.137500 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137532 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.137572 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137604 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.137644 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137676 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137716 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137748 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137787 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137820 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137861 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137893 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137932 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.137979 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.138018 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138051 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.138090 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138122 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138161 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.138193 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138232 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138266 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.138322 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138355 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138394 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138426 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138465 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138498 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.138537 ioctl(3, FIONREAD, [0]) = 0 <0.000008>
11902 1184699865.138572 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.138612 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138644 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138683 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138715 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138754 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138786 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.138825 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138871 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138910 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141180 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.141222 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.141254 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.141292 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141324 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.141362 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141393 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.141432 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141463 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.141502 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141533 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.141572 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141603 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.141641 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.141673 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.141711 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141742 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.141781 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141809 poll([{fd=3, events=POLLIN, revents=POLLIN|POLLHUP}], 1, 0) = 1 <0.000005>
11902 1184699865.141838 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141939 read(3, "", 32) = 0 <0.000007>
11902 1184699865.142120 access("/usr/kde/3.5/share/apps/konqueror/icons/crystalsvg/16x16/actions/folder.png", R_OK) = -1 ENOENT (No such file or directory) <0.000019>
11902 1184699865.142185 access("/usr/kde/3.5/share/icons/crystalsvg/16x16/actions/folder.png", R_OK) = -1 ENOENT (No such file or directory) <0.000010>
11902 1184699865.142231 access("/usr/kde/3.5/share/icons/crystalsvg/16x16/actions/folder.png", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.142287 access("/usr/kde/3.5/share/icons/crystalsvg/16x16/apps/folder.png", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.142348 access("/usr/kde/3.5/share/icons/crystalsvg/16x16/apps/folder.png", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.142391 access("/usr/kde/3.5/share/icons/crystalsvg/16x16/devices/folder.png", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.142434 access("/usr/kde/3.5/share/icons/crystalsvg/16x16/devices/folder.png", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.142477 access("/usr/kde/3.5/share/icons/crystalsvg/16x16/filesystems/folder.png", R_OK) = 0 <0.000010>
11902 1184699865.142556 open("/usr/kde/3.5/share/icons/crystalsvg/16x16/filesystems/folder.png", O_RDONLY) = 12 <0.000013>
11902 1184699865.142600 fstat(12, {st_mode=S_IFREG|0644, st_size=594, ...}) = 0 <0.000006>
11902 1184699865.142650 fstat(12, {st_mode=S_IFREG|0644, st_size=594, ...}) = 0 <0.000004>
11902 1184699865.142688 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2b00fb935000 <0.000010>
11902 1184699865.142721 lseek(12, 0, SEEK_SET) = 0 <0.000005>
11902 1184699865.142761 read(12, "\211PNG\r\n\32\n\0\0\0\rIHDR\0\0\0\20\0\0\0\20\10\6\0\0\0\37\363\377a\0\0\0\tpHYs\0\0\v\23\0\0\v\23\1\0\232\234\30\0\0\0\4gAMA\0\0\261\216|\373Q\223\0\0\0 cHRM\0\0z%\0\0\200\203\0\0\371\377\0\0\200\351\0\0u0\0\0\352`\0\0:\230\0\0\27o\222_\305F\0\0\1\310IDATx\332bd\200\2\243\222\333\377\31\376300\2\331\377\31\301$X\374\\\217*#\3\36\0\20@`dXp\355\377\327\337\177\377\177\375\371\27AC1H\16\237^\200\0b1\310\275\364\177{\253:\303\347_P\21\250\34507\350\361\336f`\310\275\370\377?#H\214\221\1\354L\250\374\371\311\372\214\0\1\304\302\360\367\27\303\347\337\230&\377\377\7T\304\304\300PS\350\3\3413@\370\f@q\6&\10?\354\357\231\377\0\1\304\362\377\317/\206\17?\360\270\21hUT\331\25\34\222\254\f\0\1\304\362\37\350\202wP\3473\3\361_\30\375\37Bg\326]a\250L\227e\2103\343G\321\272\341*\3Ce\337\25\6\200\0b\371\377\357/\304\5@\233jZ0mJ\214\222\6k\236p\20h(,>\220\202\25 \200\200.\370\315\360\341\'\3Cg\3339\6\317@#\206\211A\230\16\315_\213\10\\d\360\367\327o\6\200\0\2\206\301O\206\367@\27\374\375\371\213\341\351;\6\206\3409\10[@\311\341?f\220@\304\200\304\237\237?\31\0\2\10\354\205O@\27\374\375\376\203\341\345{\250\nx\260#\331\212\234\234\240r =\0\1\4\16DP\32\370\3\344\274\377\212C3\232\20\314\254\277\337~0\0\4\0208\f\276\2]\360\373\373W\206\253\215\fD\203\215\33O2\244\356\371\316\0\20@L\f\377\376\300M#\25\374\375\366\215\1 \200\200.\370\303\360\366;\320\5?\276\203M%\5\374\1\352\1\10 \260w\324\275\247\377\177t\351\27jP#\373\32\234\21\320B\23\310\375\3768\237\21 \200\30\377\377\377\317@\t\0\0100\0\303\210\322P:\270\222\\\0\0\0\0IEND\256B`\202", 4096) = 594 <0.000013>
11902 1184699865.143043 close(12) = 0 <0.000007>
11902 1184699865.143072 munmap(0x2b00fb935000, 4096) = 0 <0.000010>
11902 1184699865.143175 gettimeofday({1184699865, 143185}, NULL) = 0 <0.000007>
11902 1184699865.143413 ioctl(5, FIONREAD, [0]) = 0 <0.000008>
11902 1184699865.143446 write(6, "\0", 1) = 1 <0.000007>
11902 1184699865.143484 gettimeofday({1184699865, 143494}, NULL) = 0 <0.000007>
11902 1184699865.143553 gettimeofday({1184699865, 143563}, NULL) = 0 <0.000006>
11902 1184699865.143631 gettimeofday({1184699865, 143641}, NULL) = 0 <0.000006>
11902 1184699865.143679 ioctl(5, FIONREAD, [1]) = 0 <0.000007>
11902 1184699865.143840 write(2, "In file kernel/qpixmap_x11.cpp, line 633: Out of memory\n", 56) = 56 <0.000110>
11902 1184699865.143988 write(2, "QImage::convertDepth: Image is a null image\n", 44) = 44 <0.000047>
11902 1184699865.144080 write(2, "In file kernel/qpixmap_x11.cpp, line 633: Out of memory\n", 56) = 56 <0.000046>
11902 1184699865.144161 write(2, "QImage::convertDepth: Image is a null image\n", 44) = 44 <0.000044>
11902 1184699865.144242 write(2, "In file kernel/qpixmap_x11.cpp, line 633: Out of memory\n", 56) = 56 <0.000045>
11902 1184699865.144322 write(2, "QImage::convertDepth: Image is a null image\n", 44) = 44 <0.000046>
11902 1184699865.144569 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.144618 ioctl(5, FIONREAD, [1]) = 0 <0.000007>
11902 1184699865.144671 ioctl(5, FIONREAD, [1]) = 0 <0.000007>
11902 1184699865.144752 gettimeofday({1184699865, 144763}, NULL) = 0 <0.000006>
11902 1184699865.144981 gettimeofday({1184699865, 144993}, NULL) = 0 <0.000007>
11902 1184699865.145041 gettimeofday({1184699865, 145051}, NULL) = 0 <0.000007>
11902 1184699865.145189 gettimeofday({1184699865, 145200}, NULL) = 0 <0.000006>
11902 1184699865.145272 gettimeofday({1184699865, 145283}, NULL) = 0 <0.000006>
11902 1184699865.145372 gettimeofday({1184699865, 145383}, NULL) = 0 <0.000006>
11902 1184699865.145450 gettimeofday({1184699865, 145461}, NULL) = 0 <0.000006>
11902 1184699865.145549 gettimeofday({1184699865, 145560}, NULL) = 0 <0.000006>
11902 1184699865.145703 gettimeofday({1184699865, 145715}, NULL) = 0 <0.000007>
11902 1184699865.145807 gettimeofday({1184699865, 145817}, NULL) = 0 <0.000006>
11902 1184699865.145909 gettimeofday({1184699865, 145920}, NULL) = 0 <0.000007>
11902 1184699865.146006 gettimeofday({1184699865, 146017}, NULL) = 0 <0.000006>
11902 1184699865.146126 gettimeofday({1184699865, 146137}, NULL) = 0 <0.000006>
11902 1184699865.146244 gettimeofday({1184699865, 146255}, NULL) = 0 <0.000006>
11902 1184699865.146281 gettimeofday({1184699865, 146291}, NULL) = 0 <0.000007>
11902 1184699865.146317 gettimeofday({1184699865, 146327}, NULL) = 0 <0.000006>
11902 1184699865.146353 gettimeofday({1184699865, 146363}, NULL) = 0 <0.000006>
11902 1184699865.146389 gettimeofday({1184699865, 146399}, NULL) = 0 <0.000006>
11902 1184699865.146425 gettimeofday({1184699865, 146434}, NULL) = 0 <0.000007>
11902 1184699865.146756 gettimeofday({1184699865, 146767}, NULL) = 0 <0.000007>
11902 1184699865.146983 write(2, "In file kernel/qpixmap_x11.cpp, line 633: Out of memory\n", 56) = 56 <0.000089>
11902 1184699865.147110 write(2, "QImage::convertDepth: Image is a null image\n", 44) = 44 <0.000045>
11902 1184699865.147270 gettimeofday({1184699865, 147281}, NULL) = 0 <0.000006>
11902 1184699865.147359 write(11, "close(73ba3c08):about:konqueror\n", 32) = 32 <0.020144>
11902 1184699865.167842 gettimeofday({1184699865, 167855}, NULL) = 0 <0.001126>
11902 1184699865.169498 gettimeofday({1184699865, 169511}, NULL) = 0 <0.000007>
11902 1184699865.169757 gettimeofday({1184699865, 169769}, NULL) = 0 <0.000006>
11902 1184699865.171117 ioctl(5, FIONREAD, [1]) = 0 <0.000008>
11902 1184699865.171179 ioctl(5, FIONREAD, [1]) = 0 <0.000016>
11902 1184699865.171394 gettimeofday({1184699865, 171406}, NULL) = 0 <0.000006>
11902 1184699865.171468 gettimeofday({1184699865, 171479}, NULL) = 0 <0.000006>
11902 1184699865.171891 futex(0x2b00f9963f10, FUTEX_WAKE, 2147483647) = 0 <0.000006>
11902 1184699865.172057 futex(0x2b00f972a350, FUTEX_WAKE, 2147483647) = 0 <0.000006>
11902 1184699865.208209 gettimeofday({1184699865, 208282}, NULL) = 0 <0.000044>
11902 1184699865.208337 gettimeofday({1184699865, 208348}, NULL) = 0 <0.000007>
11902 1184699865.208404 gettimeofday({1184699865, 208416}, NULL) = 0 <0.000007>
11902 1184699865.208453 gettimeofday({1184699865, 208463}, NULL) = 0 <0.000007>
11902 1184699865.208492 gettimeofday({1184699865, 208503}, NULL) = 0 <0.000007>
11902 1184699865.208551 gettimeofday({1184699865, 208561}, NULL) = 0 <0.000006>
11902 1184699865.208643 ioctl(5, FIONREAD, [1]) = 0 <0.000007>
11902 1184699865.208689 gettimeofday({1184699865, 208699}, NULL) = 0 <0.000006>
11902 1184699865.208782 munmap(0x2b00f9b67000, 60718) = 0 <0.000017>
11902 1184699865.208825 munmap(0x2b00f9b39000, 187022) = 0 <0.000031>
11902 1184699865.208881 munmap(0x2b00f9d70000, 148286) = 0 <0.000009>
11902 1184699865.208913 munmap(0x2b00f9d95000, 5123) = 0 <0.000021>
11902 1184699865.208967 munmap(0x2b00f9d97000, 995) = 0 <0.000008>
11902 1184699865.208998 munmap(0x2b00f9d98000, 9225) = 0 <0.000007>
11902 1184699865.209029 munmap(0x2b00f9d9b000, 959) = 0 <0.000006>
11902 1184699865.209059 munmap(0x2b00f9d9c000, 1176) = 0 <0.000007>
11902 1184699865.209089 munmap(0x2b00f9d9d000, 1707) = 0 <0.000008>
11902 1184699865.209120 munmap(0x2b00f9d9e000, 558) = 0 <0.000007>
11902 1184699865.209150 munmap(0x2b00f9d9f000, 1212) = 0 <0.000008>
11902 1184699865.209224 access("/home/markus/.kde3.5/share/locale/de/LC_MESSAGES/konqueror.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000015>
11902 1184699865.209282 access("/usr/kde/3.5/share/locale/de/LC_MESSAGES/konqueror.mo", R_OK) = 0 <0.000013>
11902 1184699865.209325 stat("/usr/kde/3.5/share/locale/de/LC_MESSAGES/konqueror.mo", {st_mode=S_IFREG|0644, st_size=60718, ...}) = 0 <0.000011>
11902 1184699865.209397 access("/home/markus/.kde3.5/share/locale/de/LC_MESSAGES/kdelibs.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.209441 access("/usr/kde/3.5/share/locale/de/LC_MESSAGES/kdelibs.mo", R_OK) = 0 <0.000010>
11902 1184699865.209479 stat("/usr/kde/3.5/share/locale/de/LC_MESSAGES/kdelibs.mo", {st_mode=S_IFREG|0644, st_size=187022, ...}) = 0 <0.000009>
11902 1184699865.209543 access("/home/markus/.kde3.5/share/locale/de/LC_MESSAGES/kio.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.209585 access("/usr/kde/3.5/share/locale/de/LC_MESSAGES/kio.mo", R_OK) = 0 <0.000009>
11902 1184699865.209623 stat("/usr/kde/3.5/share/locale/de/LC_MESSAGES/kio.mo", {st_mode=S_IFREG|0644, st_size=148286, ...}) = 0 <0.000009>
11902 1184699865.209686 access("/home/markus/.kde3.5/share/locale/de/LC_MESSAGES/libkonq.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.209729 access("/usr/kde/3.5/share/locale/de/LC_MESSAGES/libkonq.mo", R_OK) = 0 <0.000009>
11902 1184699865.209767 stat("/usr/kde/3.5/share/locale/de/LC_MESSAGES/libkonq.mo", {st_mode=S_IFREG|0644, st_size=5123, ...}) = 0 <0.000009>
11902 1184699865.209830 access("/home/markus/.kde3.5/share/locale/de/LC_MESSAGES/searchbarplugin.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.209874 access("/usr/kde/3.5/share/locale/de/LC_MESSAGES/searchbarplugin.mo", R_OK) = 0 <0.000009>
11902 1184699865.209913 stat("/usr/kde/3.5/share/locale/de/LC_MESSAGES/searchbarplugin.mo", {st_mode=S_IFREG|0644, st_size=995, ...}) = 0 <0.000009>
11902 1184699865.209976 access("/home/markus/.kde3.5/share/locale/de/LC_MESSAGES/kcmkurifilt.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.210019 access("/usr/kde/3.5/share/locale/de/LC_MESSAGES/kcmkurifilt.mo", R_OK) = 0 <0.000009>
11902 1184699865.210058 stat("/usr/kde/3.5/share/locale/de/LC_MESSAGES/kcmkurifilt.mo", {st_mode=S_IFREG|0644, st_size=9225, ...}) = 0 <0.000010>
11902 1184699865.210139 access("/home/markus/.kde3.5/share/locale/de/LC_MESSAGES/akregatorkonqfeedicon.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.210184 access("/usr/kde/3.5/share/locale/de/LC_MESSAGES/akregatorkonqfeedicon.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000010>
11902 1184699865.210227 access("/usr/share/locale/de/LC_MESSAGES/akregatorkonqfeedicon.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000010>
11902 1184699865.210277 access("/home/markus/.kde3.5/share/locale/de/LC_MESSAGES/autorefresh.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000008>
11902 1184699865.210319 access("/usr/kde/3.5/share/locale/de/LC_MESSAGES/autorefresh.mo", R_OK) = 0 <0.000010>
11902 1184699865.210357 stat("/usr/kde/3.5/share/locale/de/LC_MESSAGES/autorefresh.mo", {st_mode=S_IFREG|0644, st_size=1176, ...}) = 0 <0.000009>
11902 1184699865.210440 access("/home/markus/.kde3.5/share/locale/de/LC_MESSAGES/khtmlsettingsplugin.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.210484 access("/usr/kde/3.5/share/locale/de/LC_MESSAGES/khtmlsettingsplugin.mo", R_OK) = 0 <0.000010>
11902 1184699865.210524 stat("/usr/kde/3.5/share/locale/de/LC_MESSAGES/khtmlsettingsplugin.mo", {st_mode=S_IFREG|0644, st_size=1707, ...}) = 0 <0.000010>
11902 1184699865.210588 access("/home/markus/.kde3.5/share/locale/de/LC_MESSAGES/minitoolsplugin.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000009>
11902 1184699865.210631 access("/usr/kde/3.5/share/locale/de/LC_MESSAGES/minitoolsplugin.mo", R_OK) = 0 <0.000010>
11902 1184699865.210670 stat("/usr/kde/3.5/share/locale/de/LC_MESSAGES/minitoolsplugin.mo", {st_mode=S_IFREG|0644, st_size=558, ...}) = 0 <0.000010>
11902 1184699865.210748 access("/home/markus/.kde3.5/share/locale/de/LC_MESSAGES/uachangerplugin.mo", R_OK) = -1 ENOENT (No such file or directory) <0.000008>
11902 1184699865.210791 access("/usr/kde/3.5/share/locale/de/LC_MESSAGES/uachangerplugin.mo", R_OK) = 0 <0.000010>
11902 1184699865.210830 stat("/usr/kde/3.5/share/locale/de/LC_MESSAGES/uachangerplugin.mo", {st_mode=S_IFREG|0644, st_size=1212, ...}) = 0 <0.000009>
11902 1184699865.210893 open("/usr/kde/3.5/share/locale/de/LC_MESSAGES/kdelibs.mo", O_RDONLY) = 12 <0.000013>
11902 1184699865.210933 fstat(12, {st_mode=S_IFREG|0644, st_size=187022, ...}) = 0 <0.000006>
11902 1184699865.210972 mmap(NULL, 187022, PROT_READ, MAP_PRIVATE, 12, 0) = 0x2b00f9b39000 <0.000010>
11902 1184699865.211006 close(12) = 0 <0.000006>
11902 1184699865.211187 ioctl(5, FIONREAD, [1]) = 0 <0.000005>
11902 1184699865.211227 gettimeofday({1184699865, 211237}, NULL) = 0 <0.000006>
11902 1184699865.211351 ioctl(5, FIONREAD, [1]) = 0 <0.000005>
11902 1184699865.211389 gettimeofday({1184699865, 211399}, NULL) = 0 <0.000006>
11902 1184699865.211593 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.211631 gettimeofday({1184699865, 211642}, NULL) = 0 <0.000007>
11902 1184699865.212007 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.212047 gettimeofday({1184699865, 212057}, NULL) = 0 <0.000006>
11902 1184699865.215764 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.215811 gettimeofday({1184699865, 215821}, NULL) = 0 <0.000006>
11902 1184699865.216046 gettimeofday({1184699865, 216058}, NULL) = 0 <0.000007>
11902 1184699865.216659 gettimeofday({1184699865, 216673}, NULL) = 0 <0.000007>
11902 1184699865.216729 gettimeofday({1184699865, 216740}, NULL) = 0 <0.000007>
11902 1184699865.217458 ioctl(5, FIONREAD, [1]) = 0 <0.000009>
11902 1184699865.217576 gettimeofday({1184699865, 217588}, NULL) = 0 <0.000007>
11902 1184699865.217617 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.217649 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.217682 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.217714 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.217748 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.217788 gettimeofday({1184699865, 217798}, NULL) = 0 <0.000006>
11902 1184699865.217826 ioctl(5, FIONREAD, [1]) = 0 <0.000030>
11902 1184699865.217900 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.217934 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.218284 gettimeofday({1184699865, 218297}, NULL) = 0 <0.000006>
11902 1184699865.218390 ioctl(1, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig -icanon -echo ...}) = 0 <0.000010>
11902 1184699865.218445 write(2, "konqueror: Fatal IO error: client killed\n", 41) = 41 <0.000115>
11902 1184699865.219919 ioctl(5, FIONREAD, [1]) = 0 <0.000026>
11902 1184699865.220001 gettimeofday({1184699865, 220012}, NULL) = 0 <0.000006>
11902 1184699865.220093 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.220132 gettimeofday({1184699865, 220142}, NULL) = 0 <0.000007>
11902 1184699865.220206 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.220243 gettimeofday({1184699865, 220254}, NULL) = 0 <0.000007>
11902 1184699865.220322 ioctl(5, FIONREAD, [1]) = 0 <0.000006>
11902 1184699865.220359 gettimeofday({1184699865, 220370}, NULL) = 0 <0.000007>
11902 1184699865.220831 write(9, "\1\2\1\0\313\0\0\0\300\2\0\0", 12) = 12 <0.000084>
11902 1184699865.220975 write(9, "\0\0\0\20konqueror-11902\0\0\0\0\vDCOPServer\0\0\0\0\0\0\0\0?disconnectSignal(QCString,QCString,QCString,QCString,QCString)\0\0\0\0]", 110) = 110 <0.000011>
11902 1184699865.221025 write(9, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0IKBookmarkManager-/home/markus/.kde3.5/share/apps/konqueror/bookmarks.xml\0\0\0\0\0", 93) = 93 <0.000065>
11902 1184699865.221129 read(9, "\2\3\0\0021\0\0\0", 8) = 8 <0.000006>
11902 1184699865.221163 read(9, "\300\2\0\0", 4) = 4 <0.000015>
11902 1184699865.221204 read(9, "\0\0\0\vDCOPServer\0\0\0\0\20konqueror-11902\0\0\0\0\5bool\0\0\0\0\1\1", 49) = 49 <0.000023>
11902 1184699865.222389 munmap(0x2b00f9b39000, 187022) = 0 <0.000018>
11902 1184699865.223946 exit_group(1) = ?

Linus Torvalds
Jul 17, 2007, 4:40:08 PM

On Tue, 17 Jul 2007, Ingo Molnar wrote:
>
> i think the problem starts here:
>
> 11902 1184699865.141939 read(3, "", 32) = 0 <0.000007>

Well, it's preceded by a poll() that says that it has a POLLHUP event, so
that socket would seem to have simply been closed from the other end.
There are also a huge number of select() calls showing the same thing
(except since it's just the input side, you cannot tell that it's
POLLHUP).

Don't ask me *why*, though. It's preceded by

..


11902 1184699848.615201 read(3, 0x7fffb5b9c8b0, 32) = -1 EAGAIN (Resource temporarily unavailable) <0.000009>
11902 1184699848.615252 poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, -1) = 1 <0.009307>
11902 1184699848.624614 read(3, "\1 \303!\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0b\340T\0\0\0\0\0", 32) = 32 <0.000011>

.. got data ..

11902 1184699848.624710 ioctl(3, FIONREAD, [0]) = 0 <0.000009>
11902 1184699848.624762 ioctl(3, FIONREAD, [0]) = 0 <0.000048>

.. ok, nothing more..

11902 1184699848.624866 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <16.495008>
11902 1184699865.119950 ioctl(3, FIONREAD, [0]) = 0 <0.000006>

16+ seconds pass; now it's marked as readable, but returns zero bytes of
data: the other end closed it.

Tons of unnecessary and stupid sequences of:

11902 1184699865.119988 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120031 ioctl(3, FIONREAD, [0]) = 0 <0.000005>

..

and then finally:

...


11902 1184699865.141809 poll([{fd=3, events=POLLIN, revents=POLLIN|POLLHUP}], 1, 0) = 1 <0.000005>
11902 1184699865.141838 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141939 read(3, "", 32) = 0 <0.000007>

ie now konqueror noticed that it was *really* closed, and read the EOF.

But why does that happen? And why would the scheduler have *anything* to
do with this? No idea. Maybe timing. Maybe the time.c changes. Dunno.

Linus

Ingo Molnar
Jul 17, 2007, 4:50:20 PM

* Linus Torvalds <torv...@linux-foundation.org> wrote:

> But why does that happen? And why would the scheduler have *anything*
> to do with this? No idea. Maybe timing. Maybe the time.c changes.
> Dunno.

hm, Markus indicated that he tried the v2.6.21.6-cfsv19 patch, and that
does not include the time.c change. Markus - does your kernel include
the code below? (if yes, please revert it via patch -p1 -R )

Ingo

Index: linux/kernel/time.c
===================================================================
--- linux.orig/kernel/time.c
+++ linux/kernel/time.c
@@ -57,14 +57,17 @@ EXPORT_SYMBOL(sys_tz);
  */
 asmlinkage long sys_time(time_t __user * tloc)
 {
-	time_t i;
-	struct timeval tv;
+	/*
+	 * We read xtime.tv_sec atomically - it's updated
+	 * atomically by update_wall_time(), so no need to
+	 * even read-lock the xtime seqlock:
+	 */
+	time_t i = xtime.tv_sec;
 
-	do_gettimeofday(&tv);
-	i = tv.tv_sec;
+	smp_rmb(); /* sys_time() results are coherent */
 
 	if (tloc) {
-		if (put_user(i,tloc))
+		if (put_user(i, tloc))
 			i = -EFAULT;
 	}
 	return i;
@@ -373,6 +376,20 @@ void do_gettimeofday (struct timeval *tv
 
 	tv->tv_sec = sec;
 	tv->tv_usec = usec;
+
+	/*
+	 * Make sure xtime.tv_sec [returned by sys_time()] always
+	 * follows the gettimeofday() result precisely. This
+	 * condition is extremely unlikely, it can hit at most
+	 * once per second:
+	 */
+	if (unlikely(xtime.tv_sec != tv->tv_sec)) {
+		unsigned long flags;
+
+		write_seqlock_irqsave(&xtime_lock, flags);
+		update_wall_time();
+		write_sequnlock_irqrestore(&xtime_lock, flags);
+	}
 }
 
 EXPORT_SYMBOL(do_gettimeofday);

David Schwartz
Jul 17, 2007, 5:20:08 PM

> * Ian Kent <ra...@themaw.net> wrote:
>
> > Yes it does and I have two reported bugs so far.
> >
> > In several places I have code similar to:
> >
> > wait.tv_sec = time(NULL) + 1;
> > wait.tv_nsec = 0;
> >
> > signaled = 0;
> > while (!signaled) {
> > status = pthread_cond_timedwait(&cond, &mutex, &wait);
> > if (status) {
> > if (status == ETIMEDOUT)
> > break;
> > fatal(status);
> > }
> > }
>
> ah! It passes in a low-res time source into a high-res time interface
> (pthread_cond_timedwait()). Could you change the time(NULL) + 1 to
> time(NULL) + 2, or change it to:
>
> gettimeofday(&wait, NULL);
> wait.tv_sec++;
>
> does this solve the spinning?
>
> i'm wondering how widespread this is. If automount is the only app doing
> this then _maybe_ we could get away with it by changing automount?

This code is horribly broken. Don't change the kernel because this code is
broken.

First it adds a second, but then it subtracts up to a second. Just before
the second boundary, this code can burn CPU like crazy, with each wait being
just a few nanoseconds.
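Concretely: time() truncates to the second, so the "+1" deadline is the next whole-second boundary, which can be almost no time away. A hedged sketch of the arithmetic (`actual_wait` is an illustrative helper, not code from the thread):

```c
/* Delay actually produced by "deadline = time(NULL) + 1, tv_nsec = 0":
 * time() discards the fraction of the current second, so the absolute
 * deadline is the next whole-second boundary, and the wait is one
 * second minus the fraction already elapsed -- anywhere in (0, 1].
 * Illustrative only; now_seconds stands in for the current wall time. */
double actual_wait(double now_seconds)
{
	double deadline = (double)((long)now_seconds + 1);

	return deadline - now_seconds;
}
```

Just before the boundary (e.g. at xx.999 s) the computed wait collapses to a millisecond, which is the CPU-burning case described above.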

What is the intent of this code? Is it to wait "up to a second, possibly for
no time at all", or is it to wait "for at least a second"? If the latter, why
are you zeroing the nanosecond count?

DS

Willy Tarreau
Jul 17, 2007, 5:50:07 PM

Hi Ingo,

sorry for the long delay, I've spent a week doing non-kernel work.

On Tue, Jul 10, 2007 at 12:39:50AM +0200, Ingo Molnar wrote:
>
> * Willy Tarreau <w...@1wt.eu> wrote:
>
> > > The biggest user-visible change in -v19 is reworked sleeper
> > > fairness: it's similar in behavior to -v18 but works more
> > > consistently across nice levels. Fork-happy workloads (like kernel
> > > builds) should behave better as well. There are also a handful of
> > > speedups: unsigned math, 32-bit speedups, O(1) task pickup,
> > > debloating and other micro-optimizations.
> >
> > Interestingly, I also noticed the possibility of O(1) task pickup when
> > playing with v18, but did not detect any noticeable improvement with
> > it. Of course, it depends on the workload and I probably didn't
> > perform the most relevant tests.
>
> yeah - it's a small tweak. CFS is O(31) in sleep/wakeup so it's now all
> a big O(1) family again :)

Yes, that's what I tried to explain to a guy once: what I like about log(N)
algos is that even with N very large, log(N) is always small, and it's
sometimes faster to perform log(N) fast operations than 1 slow operation.
That's also why I don't care about balanced trees: my unbalanced trees may
hold 32 levels for 32 carefully chosen values, while balanced trees will
have 5 levels (worst difference between both). If I can insert and delete
a node 6 times faster, I always win. And quite frankly, I'm not interested
in the 32-entry case in a tree :-)

> > V19 works very well here on 2.6.20.14. I could start 32k busy loops at
> > nice +19 (I exhausted the 32k pids limit), and could still perform
> > normal operations. I noticed that 'vmstat' scans all the pid entries
> > under /proc, which takes ages to collect data before displaying a
> > line. Obviously, the system sometimes shows some short latencies, but
> > not much more than what you get from an SSH session through a remote DSL
> > connection.
>
> great! I did not try to push it this far, yet.

Well, I borrowed two 1GB sticks because I discovered that one of my 512MB
had one defect bit. It was finally an opportunity for me to push the test
this far.

> > Here's a vmstat 1 output :
> >
> > r b w swpd free buff cache si so bi bo in cs us sy id
> > 32437 0 0 0 809724 488 6196 0 0 1 0 135 0 24 72 4
> > 32436 0 0 0 811336 488 6196 0 0 0 0 717 0 78 22 0
>
> crazy :-)

indeed :-)

> > Amusingly, I started mpg123 during this test and it skipped quite a
> > bit. After setting all tasks to SCHED_IDLE, it did not skip anymore.
> > All this seems to behave like one could expect.
>
> yeah. It behaves better than i expected in fact - 32K tasks is pushing
> things quite a bit. (we've got a 32K PID limit for example)

Yes, and in fact, I suspect that we still have an O(N) or O(N^2) pid
allocation algo somewhere (I did not look at the code), because forking
was very very slow when reaching those numbers. I'll possibly check this
when I have some spare time, because it reminds me a trivial source port
ring allocator I wrote a few years ago which was O(1). With 32k pids, it
will only require 64kB RAM for the whole system, and we may even optimize
it to spread the CPUs' entry points in order to nearly always avoid lock
contention.
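The trivial O(1) ring allocator Willy alludes to can be sketched like this (an illustrative reconstruction, not his actual code: a circular free list with one 2-byte next-index per id, i.e. 64kB for 32k ids; all names here are hypothetical):

```c
#define NR_IDS 32768

/* Circular free list: ring[i] holds the index of the next free id.
 * Allocation pops from the head, release pushes at the tail; both are
 * strictly O(1), unlike a linear scan of the pid bitmap. */
static unsigned short ring[NR_IDS];
static int ring_head, ring_tail, ring_free;

void ring_init(void)
{
	int i;

	for (i = 0; i < NR_IDS; i++)
		ring[i] = (unsigned short)((i + 1) % NR_IDS);
	ring_head = 0;
	ring_tail = NR_IDS - 1;
	ring_free = NR_IDS;
}

int ring_alloc(void)			/* O(1): pop from the head */
{
	int id;

	if (ring_free == 0)
		return -1;
	id = ring_head;
	ring_head = ring[ring_head];
	ring_free--;
	return id;
}

void ring_release(int id)		/* O(1): push at the tail */
{
	if (ring_free == 0)
		ring_head = id;		/* list was empty: restart it */
	else
		ring[ring_tail] = (unsigned short)id;
	ring_tail = id;
	ring_free++;
}
```

Per-CPU spreading, as suggested above, would give each CPU its own head into the ring so that concurrent allocators rarely contend on the same cache line.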

> > I also started 30k processes distributed in 130 groups of 234 chained
> > by pipes in which one byte is passed. I get an average of 8000 in the
> > run queue. The context switch rate is very low and sometimes even null
> > in this test, maybe some of them are starving, I really do not know :
> >
> > r b w swpd free buff cache si so bi bo in cs us sy id
> > 7752 0 1 0 656892 244 4196 0 0 0 0 725 0 16 84 0
>
> hm, could you profile this? We could have some bottleneck somewhere
> (likely not in the scheduler) with that many tasks being runnable. [
> With CFS you can actually run a profiler under this workload ;-) ]

I may probably try some time later (not this week-end, I have some 2.4 to
work on).

> > In my tree, I have replaced the rbtree with the ebtree we talked
> > about, but it did not bring any performance boost because, even though
> > insert() and delete() are faster, the scheduler is already quite good
> > at avoiding them as much as possible, mostly relying on rb_next()
> > which has the same cost in both trees. All in all, the only variations
> > I noticed were caused by cacheline alignment when I tried to reorder
> > fields in the eb_node. So I will stop my experimentations here since I
> > don't see any more room for improvement.
>
> well, just a little bit of improvement would be nice to have too :)

Yes, but I prefer to merge it where it really brings something (I'll have a
look at epoll; I noticed epoll_ctl() was 30% slower under 2.6 with an rbtree
than it is under 2.4 with a hash). Then people will tell me "you're completely
dumb, you could have improved it that way!" and then, once it's optimized to
be always faster than the rbtree, we can switch CFS to it again ;-)

> Ingo

Cheers,
Willy

Markus
Jul 17, 2007, 6:10:10 PM

> hm, Markus indicated that he tried the v2.6.21.6-cfsv19 patch, and that
> does not include the time.c change. Markus - does your kernel include
> the code below? (if yes, please revert it via patch -p1 -R )
Well, the 2.6.22.1-cfs-v19 does include it, but the 2.6.21.6-cfs-v19
does not have that patch applied.
But both show this problem.

Markus

Bill Davidsen
Jul 17, 2007, 9:30:08 PM

Ingo Molnar wrote:
> * Ian Kent <ra...@themaw.net> wrote:
>
>
>>> ah! It passes in a low-res time source into a high-res time interface
>>> (pthread_cond_timedwait()). Could you change the time(NULL) + 1 to
>>> time(NULL) + 2, or change it to:
>>>
>>> gettimeofday(&wait, NULL);
>>> wait.tv_sec++;
>>>
>> OK, I'm with you, hi-res timer.
>> But even so, how is the time in the past after adding a second.
>>
>> Is it because I'm not setting tv_nsec when it's close to a second
>> boundary, and hence your recommendation above?
>>
>
> yeah, it looks a bit suspicious: you create a +1 second timeout out of a
> 1 second resolution timesource. I dont yet understand the failure mode
> though that results in that looping and in the 30% CPU time use - do you
> understand it perhaps? (and automount is still functional while this is
> happening, correct?)
>

Can't say, I have automount running because I get it by default, but I
have nothing using it on my test machine. Why is it looping so fast when
there are no mount points defined? If the config changes there's no
requirement to notice right away, is there?

--
bill davidsen <davi...@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979


Ian Kent
Jul 18, 2007, 2:00:15 AM

On Tue, 2007-07-17 at 14:16 -0700, David Schwartz wrote:
> > * Ian Kent <ra...@themaw.net> wrote:
> >
> > > Yes it does and I have two reported bugs so far.
> > >
> > > In several places I have code similar to:
> > >
> > > wait.tv_sec = time(NULL) + 1;
> > > wait.tv_nsec = 0;
> > >
> > > signaled = 0;
> > > while (!signaled) {
> > > status = pthread_cond_timedwait(&cond, &mutex, &wait);
> > > if (status) {
> > > if (status == ETIMEDOUT)
> > > break;
> > > fatal(status);
> > > }
> > > }
> >
> > ah! It passes in a low-res time source into a high-res time interface
> > (pthread_cond_timedwait()). Could you change the time(NULL) + 1 to
> > time(NULL) + 2, or change it to:
> >
> > gettimeofday(&wait, NULL);
> > wait.tv_sec++;
> >
> > does this solve the spinning?

Yes, adding in the offset within the current second appears to resolve
the issue. Thanks Ingo.

> >
> > i'm wondering how widespread this is. If automount is the only app doing
> > this then _maybe_ we could get away with it by changing automount?

I don't think the change is unreasonable since I wasn't using an
accurate time in the condition wait, so that's a coding mistake on my
part which I will fix.

Ian

Ian Kent
Jul 18, 2007, 2:50:08 AM

On Tue, 2007-07-17 at 21:24 -0400, Bill Davidsen wrote:
> Ingo Molnar wrote:
> > * Ian Kent <ra...@themaw.net> wrote:
> >
> >
> >>> ah! It passes in a low-res time source into a high-res time interface
> >>> (pthread_cond_timedwait()). Could you change the time(NULL) + 1 to
> >>> time(NULL) + 2, or change it to:
> >>>
> >>> gettimeofday(&wait, NULL);
> >>> wait.tv_sec++;
> >>>
> >> OK, I'm with you, hi-res timer.
> >> But even so, how is the time in the past after adding a second.
> >>
> >> Is it because I'm not setting tv_nsec when it's close to a second
> >> boundary, and hence your recommendation above?
> >>
> >
> > yeah, it looks a bit suspicious: you create a +1 second timeout out of a
> > 1 second resolution timesource. I dont yet understand the failure mode
> > though that results in that looping and in the 30% CPU time use - do you
> > understand it perhaps? (and automount is still functional while this is
> > happening, correct?)
> >
>
> Can't say, I have automount running because I get it by default, but I
> have nothing using it on my test machine. Why is it looping so fast when
> there are no mount points defined? If the config changes there's no
> requirement to notice right away, is there?

There are two threads where this mistake is made.

One is used to trigger expire events for all automounted filesystems
which happen all the time since I need to run the expire to check if
anything is mounted and whether it needs to be umounted. The alarm
handler sleeps on a condition until the alarm list in not empty and then
sleeps on a condition until the next alarm in the list expires or an
alarm is added to the list, in which case it then checks the list again.
Since the autofs timeout granularity is one second this is a problem and
will be fixed. This isn't the source of the problem that's been
reported.

The second is the state queue handler which runs tasks such as expires,
map re-reads, shutdowns etc. for all automounted filesystems. While the
check interval could be longer, it causes autofs to be sluggish in
situations such as shutdowns where there are a largish number of mounts
present and I need to cancel such things as expires and the like. It's
possible I could improve this but, in fact, once the timespec is set
correctly as Ingo suggests it works fine and uses very little resource.

Ian

Ingo Molnar
Jul 18, 2007, 4:00:17 AM

* Ian Kent <ra...@themaw.net> wrote:

> > > ah! It passes in a low-res time source into a high-res time
> > > interface (pthread_cond_timedwait()). Could you change the
> > > time(NULL) + 1 to time(NULL) + 2, or change it to:
> > >
> > > gettimeofday(&wait, NULL);
> > > wait.tv_sec++;
> > >
> > > does this solve the spinning?
>
> Yes, adding in the offset within the current second appears to resolve
> the issue. Thanks Ingo.
>
> > > i'm wondering how widespread this is. If automount is the only app
> > > doing this then _maybe_ we could get away with it by changing
> > > automount?
>
> I don't think the change is unreasonable since I wasn't using an
> accurate time in the condition wait, so that's a coding mistake on my
> part which I will fix.

thanks Ian for taking care of this and for fixing it!

Linus, Thomas, what do you think, should we keep the time.c change?
Automount is one app affected so far, and it's a borderline case: the
increased (30%) CPU usage is annoying, but it does not prevent the
system from working per se, and an upgrade to a fixed/enhanced automount
version resolves it.

The temptation of using a really (and trivially) scalable low-resolution
time-source (which is _easily_ vsyscall-able, on any platform) for DBMS
use is really large, to me at least. Should I perhaps add a boot/config
option that enables/disables this optimization, to allow distros
finer-grained control over this? And we've also got to wait and see
whether there's any other app affected.

Ingo

Bill Davidsen
Jul 18, 2007, 10:00:19 AM

Allow it to be selected by the "features" so that admins can evaluate
the implications without a reboot? That would be a convenient interface
if you could provide it.

--
bill davidsen <davi...@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979


Linus Torvalds
Jul 18, 2007, 12:10:10 PM

On Tue, 17 Jul 2007, Ingo Molnar wrote:
>

> * Ian Kent <ra...@themaw.net> wrote:
> >
> > In several places I have code similar to:
> >
> > wait.tv_sec = time(NULL) + 1;
> > wait.tv_nsec = 0;

Ok, that definitely should work.

Does the patch below help?

> ah! It passes in a low-res time source into a high-res time interface
> (pthread_cond_timedwait()). Could you change the time(NULL) + 1 to
> time(NULL) + 2, or change it to:
>
> gettimeofday(&wait, NULL);
> wait.tv_sec++;

This is wrong. It's wrong for two reasons:

- it really shouldn't be needed. I don't think "time()" has to be
*exactly* in sync, but I don't think it can be off by a third of a
second or whatever (as the "30% CPU load" would seem to imply)

- gettimeofday works on a timeval, pthread_cond_timedwait() works on a
timespec.

So if it actually makes a difference, it makes a difference for the
*wrong* reason: the time is still totally nonsensical in the tv_nsec field
(because it actually got filled in with msecs!), but now the tv_sec field
is in sync, so it hides the bug.

Anyway, hopefully the patch below might help. But we probably should make
this whole thing a much more generic routine (ie we have our internal
"getnstimeofday()" that still is missing the second-overflow logic, and
that is quite possibly the one that triggers the "30% off" behaviour).

Ingo, I'd suggest:
- get rid of "timespec_add_ns()", or at least make it return a return
value for when it overflows.
- make all the people who overflow into tv_sec call a "fix_up_seconds()"
thing that does the xtime overflow handling.

Linus

---
Subject: time: make sure sys_gettimeofday() and sys_time() are in sync
From: Ingo Molnar <mi...@elte.hu>

make sure sys_gettimeofday() and sys_time() results are coherent.

Signed-off-by: Ingo Molnar <mi...@elte.hu>
---
kernel/time/timekeeping.c | 13 +++++++++++++
1 file changed, 13 insertions(+)

Index: linux/kernel/time/timekeeping.c
===================================================================
--- linux.orig/kernel/time/timekeeping.c
+++ linux/kernel/time/timekeeping.c
@@ -92,6 +92,19 @@ static inline void __get_realtime_clock_
 	} while (read_seqretry(&xtime_lock, seq));
 
 	timespec_add_ns(ts, nsecs);
+
+	/*
+	 * Make sure xtime.tv_sec [returned by sys_time()] always
+	 * follows the gettimeofday() result precisely. This
+	 * condition is extremely unlikely, it can hit at most
+	 * once per second:
+	 */
+	if (unlikely(xtime.tv_sec != ts->tv_sec)) {
+		unsigned long flags;
+
+		write_seqlock_irqsave(&xtime_lock, flags);
+		update_wall_time();
+		write_sequnlock_irqrestore(&xtime_lock, flags);
+	}
 }
 
 /**

Linus Torvalds
Jul 18, 2007, 1:30:14 PM

On Wed, 18 Jul 2007, Ingo Molnar wrote:
>
> Linus, Thomas, what do you think, should we keep the time.c change?

No, not if it's off by the second field. That 30% CPU usage indicates that
there's some nasty bug there somewhere, and that's just not worth it.

If time() cannot get the second field right, it's bogus. I'm ok with us
not *guaranteeing* monotonicity of the second field when you compare
gettimeofday() with time(), but the 30% thing implies that it's much worse
than that, and that "time()" will likely report the previous second (when
compared to hrtimers) roughly a quarter of the time.

And that isn't acceptable.

So either it should be fixed, or reverted.
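The "previous second" behaviour is easy to probe from userspace (a hedged sketch, not from the thread; `time_vs_gtod_skew` is an illustrative name):

```c
#include <sys/time.h>
#include <time.h>

/* Compare the second reported by time() against gettimeofday().
 * Returns time() - gettimeofday().tv_sec: 0 when the two sources are
 * coherent, -1 when time() is still on the previous second, +1 when a
 * second boundary happened to pass between the two calls. On the buggy
 * kernel discussed here, repeated calls would return -1 a sizable
 * fraction of the time. Illustrative probe only. */
long time_vs_gtod_skew(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return (long)(time(NULL) - tv.tv_sec);
}
```

Calling this in a tight loop and counting the -1 results would quantify the "roughly a quarter of the time" estimate above.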

Linus

Ian Kent
Jul 18, 2007, 1:40:11 PM

On Wed, 2007-07-18 at 09:03 -0700, Linus Torvalds wrote:
>
> On Tue, 17 Jul 2007, Ingo Molnar wrote:
> >
> > * Ian Kent <ra...@themaw.net> wrote:
> > >
> > > In several places I have code similar to:
> > >
> > > wait.tv_sec = time(NULL) + 1;
> > > wait.tv_nsec = 0;
>
> Ok, that definitely should work.
>
> Does the patch below help?
>
> > ah! It passes in a low-res time source into a high-res time interface
> > (pthread_cond_timedwait()). Could you change the time(NULL) + 1 to
> > time(NULL) + 2, or change it to:
> >
> > gettimeofday(&wait, NULL);
> > wait.tv_sec++;
>
> This is wrong. It's wrong for two reasons:
>
> - it really shouldn't be needed. I don't think "time()" has to be
> *exactly* in sync, but I don't think it can be off by a third of a
> second or whatever (as the "30% CPU load" would seem to imply)
>
> - gettimeofday works on a timeval, pthread_cond_timedwait() works on a
> timespec.
>
> So if it actually makes a difference, it makes a difference for the
> *wrong* reason: the time is still totally nonsensical in the tv_nsec field
> (because it actually got filled in with msecs!), but now the tv_sec field
> is in sync, so it hides the bug.

Oh ya .. I thought it wouldn't hurt to add the fraction of the current
second for correctness and actually put things like:

gettimeofday(&now, NULL);
wait.tv_sec = now.tv_sec + 1;
wait.tv_nsec = now.tv_usec * 1000;

in autofs.

Ian

Bill Davidsen

Jul 18, 2007, 5:40:10 PM
Linus Torvalds wrote:
> On Tue, 17 Jul 2007, Ingo Molnar wrote:
>
>> * Ian Kent <ra...@themaw.net> wrote:
>>
>>> In several places I have code similar to:
>>>
>>> wait.tv_sec = time(NULL) + 1;
>>> wait.tv_nsec = 0;
>>>
>
> Ok, that definitely should work.
>
> Does the patch below help?
>
>
Spectacularly no! With this patch the "glitch1" script with multiple
scrolling windows has all xterms and glxgears stop totally dead for
~200ms once per second. I didn't properly test anything else after that.
Since the automount issue doesn't seem to start until something kicks it
off, I didn't see it but that doesn't mean it's fixed.

>> ah! It passes in a low-res time source into a high-res time interface
>> (pthread_cond_timedwait()). Could you change the time(NULL) + 1 to
>> time(NULL) + 2, or change it to:
>>
>> gettimeofday(&wait, NULL);
>> wait.tv_sec++;
>>
>
> This is wrong. It's wrong for two reasons:
>
> - it really shouldn't be needed. I don't think "time()" has to be
> *exactly* in sync, but I don't think it can be off by a third of a
> second or whatever (as the "30% CPU load" would seem to imply)
>
> - gettimeofday works on a timeval, pthread_cond_timedwait() works on a
> timespec.
>
> So if it actually makes a difference, it makes a difference for the
> *wrong* reason: the time is still totally nonsensical in the tv_nsec field
> (because it actually got filled in with msecs!), but now the tv_sec field
> is in sync, so it hides the bug.
>
> Anyway, hopefully the patch below might help. But we probably should make
> this whole thing a much more generic routine (ie we have our internal
> "getnstimeofday()" that still is missing the second-overflow logic, and
> that is quite possibly the one that triggers the "30% off" behaviour).
>
>
Hope that info helps.

> Ingo, I'd suggest:
> - get rid of "timespec_add_ns()", or at least make it return a
> value for when it overflows.
> - make all the people who overflow into tv_sec call a "fix_up_seconds()"
> thing that does the xtime overflow handling.
>
> Linus
>


Ingo Molnar

Jul 19, 2007, 5:10:11 AM

* Bill Davidsen <davi...@tmr.com> wrote:

> > Does the patch below help?
>
> Spectacularly no! With this patch the "glitch1" script with multiple
> scrolling windows has all xterms and glxgears stop totally dead for
> ~200ms once per second. I didn't properly test anything else after
> that. Since the automount issue doesn't seem to start until something
> kicks it off, I didn't see it but that doesn't mean it's fixed.

thanks. Andrew also just reported that it broke his laptop and i'm
working on a proper version.

Ingo

Ingo Molnar

Jul 19, 2007, 5:30:18 AM

* Linus Torvalds <torv...@linux-foundation.org> wrote:

> > ah! It passes in a low-res time source into a high-res time
> > interface (pthread_cond_timedwait()). Could you change the
> > time(NULL) + 1 to time(NULL) + 2, or change it to:
> >
> > gettimeofday(&wait, NULL);
> > wait.tv_sec++;
>
> This is wrong. It's wrong for two reasons:
>
> - it really shouldn't be needed. I don't think "time()" has to be
> *exactly* in sync, but I don't think it can be off by a third of a
> second or whatever (as the "30% CPU load" would seem to imply)
>
> - gettimeofday works on a timeval, pthread_cond_timedwait() works on a
> timespec.

ah, i didn't notice that automount mixed up timespec with timeval! That
is nasty: the tv_nsec field (which really received tv_usec contents
before being passed to pthread_cond_timedwait()) must stay cleared - or
rather, to avoid bugs of this type, a timespec variable should be used
for all this.

> So if it actually makes a difference, it makes a difference for the
> *wrong* reason: the time is still totally nonsensical in the tv_nsec
> field (because it actually got filled in with msecs!), but now the
> tv_sec field is in sync, so it hides the bug.
>
> Anyway, hopefully the patch below might help. But we probably should make
> this whole thing a much more generic routine (ie we have our internal
> "getnstimeofday()" that still is missing the second-overflow logic, and
> that is quite possibly the one that triggers the "30% off" behaviour).

yeah, i'll generalize it, but our internal getnstimeofday() used on most
architectures is using __get_realtime_clock_ns(), and the patch you
attached already adds the second-overflow logic to it.

there are two versions of getnstimeofday(), a TIME_INTERPOLATION one and
a !TIME_INTERPOLATION one. TIME_INTERPOLATION is only used on ia64 at
the moment - and that one indeed does not have the second overflow
logic.

> Ingo, I'd suggest:
> - get rid of "timespec_add_ns()", or at least make it return a
> value for when it overflows.
> - make all the people who overflow into tv_sec call a "fix_up_seconds()"
> thing that does the xtime overflow handling.

ok, i'll do something clean.

Ingo

Ingo Molnar

Jul 19, 2007, 10:40:08 AM

* Bill Davidsen <davi...@tmr.com> wrote:

> > Does the patch below help?
>
> Spectacularly no! With this patch the "glitch1" script with multiple
> scrolling windows has all xterms and glxgears stop totally dead for
> ~200ms once per second. I didn't properly test anything else after
> that.

Bill, could you try the patch below - does it fix the automount problem,
without introducing new problems?

Ingo

--------------->
Subject: time: introduce xtime_seconds
From: Ingo Molnar <mi...@elte.hu>

introduce the xtime_seconds optimization. This is a read-mostly,
low-resolution time source available to sys_time() and to
kernel-internal users. The variable is kept up to date atomically and
increases monotonically: it is updated every time a time interface
constructs an xtime-alike result that overflows the seconds value.
(it is updated from the timer interrupt as well)

this way high-resolution time results update their seconds component at
the same time sys_time() does it:

1184858832999989000
1184858832000000000
1184858832999992000
1184858832000000000
1184858832999996000
1184858832000000000
1184858832999999000
1184858832000000000
1184858833000003000
1184858833000000000
1184858833000006000
1184858833000000000
1184858833000009000
1184858833000000000

[ these are nsec time results from alternating calls to sys_time() and
sys_gettimeofday(), recorded at the seconds boundary. ]

instead of the previous (non-coherent) behavior:

1184848950999987000
1184848950000000000
1184848950999990000
1184848950000000000
1184848950999994000
1184848950000000000
1184848950999997000
1184848950000000000
1184848951000001000
1184848950000000000
1184848951000005000
1184848950000000000
1184848951000008000
1184848950000000000
1184848951000011000
1184848950000000000
1184848951000015000

Signed-off-by: Ingo Molnar <mi...@elte.hu>
---

include/linux/time.h | 13 +++++++++++--
kernel/time.c | 25 ++++++-------------------
kernel/time/timekeeping.c | 28 ++++++++++++++++++++++++----
3 files changed, 41 insertions(+), 25 deletions(-)

Index: linux/include/linux/time.h
===================================================================
--- linux.orig/include/linux/time.h
+++ linux/include/linux/time.h
@@ -91,19 +91,28 @@ static inline struct timespec timespec_s
extern struct timespec xtime;
extern struct timespec wall_to_monotonic;
extern seqlock_t xtime_lock __attribute__((weak));
+extern unsigned long xtime_seconds;

extern unsigned long read_persistent_clock(void);
void timekeeping_init(void);

+extern void __update_xtime_seconds(unsigned long new_xtime_seconds);
+
+static inline void update_xtime_seconds(unsigned long new_xtime_seconds)
+{
+ if (unlikely((long)(new_xtime_seconds - xtime_seconds) > 0))
+ __update_xtime_seconds(new_xtime_seconds);
+}
+
static inline unsigned long get_seconds(void)
{
- return xtime.tv_sec;
+ return xtime_seconds;
}

struct timespec current_kernel_time(void);

#define CURRENT_TIME (current_kernel_time())
-#define CURRENT_TIME_SEC ((struct timespec) { xtime.tv_sec, 0 })
+#define CURRENT_TIME_SEC ((struct timespec) { xtime_seconds, 0 })

extern void do_gettimeofday(struct timeval *tv);
extern int do_settimeofday(struct timespec *tv);


Index: linux/kernel/time.c
===================================================================
--- linux.orig/kernel/time.c
+++ linux/kernel/time.c

@@ -58,11 +58,10 @@ EXPORT_SYMBOL(sys_tz);

 asmlinkage long sys_time(time_t __user * tloc)
 {
 /*
- * We read xtime.tv_sec atomically - it's updated
- * atomically by update_wall_time(), so no need to
- * even read-lock the xtime seqlock:
+ * We read xtime_seconds atomically - it's updated
+ * atomically by update_xtime_seconds():
 */
- time_t i = xtime.tv_sec;
+ time_t i = xtime_seconds;

 smp_rmb(); /* sys_time() results are coherent */

@@ -226,11 +225,11 @@ inline struct timespec current_kernel_ti

do {
seq = read_seqbegin(&xtime_lock);
-
+
now = xtime;
} while (read_seqretry(&xtime_lock, seq));

- return now;
+ return now;
}

EXPORT_SYMBOL(current_kernel_time);
@@ -377,19 +376,7 @@ void do_gettimeofday (struct timeval *tv


tv->tv_sec = sec;
tv->tv_usec = usec;

- /*
- * Make sure xtime.tv_sec [returned by sys_time()] always
- * follows the gettimeofday() result precisely. This
- * condition is extremely unlikely, it can hit at most
- * once per second:
- */
- if (unlikely(xtime.tv_sec != tv->tv_sec)) {
- unsigned long flags;
-
- write_seqlock_irqsave(&xtime_lock, flags);
- update_wall_time();
- write_sequnlock_irqrestore(&xtime_lock, flags);
- }
+ update_xtime_seconds(sec);
}
EXPORT_SYMBOL(do_gettimeofday);



Index: linux/kernel/time/timekeeping.c
===================================================================
--- linux.orig/kernel/time/timekeeping.c
+++ linux/kernel/time/timekeeping.c

@@ -45,14 +45,27 @@ EXPORT_SYMBOL(xtime_lock);
* used instead.
*/
struct timespec xtime __attribute__ ((aligned (16)));
-struct timespec wall_to_monotonic __attribute__ ((aligned (16)));
-static unsigned long total_sleep_time; /* seconds */
-
EXPORT_SYMBOL(xtime);

+struct timespec wall_to_monotonic __attribute__ ((aligned (16))) __read_mostly;
+static unsigned long total_sleep_time __read_mostly; /* seconds */
+
+unsigned long xtime_seconds __read_mostly;
+
+/* pointer to current clocksource: */
+static struct clocksource *clock __read_mostly;

-static struct clocksource *clock; /* pointer to current clocksource */
+/*
+ * Called when xtime or any xtime-alike result returned to
+ * user-space overflows the xtime_seconds value:
+ */
+void __update_xtime_seconds(unsigned long new_xtime_seconds)
+{
+ unsigned long old_xtime_seconds = xtime_seconds;
+
+ if ((long)(new_xtime_seconds - old_xtime_seconds) > 0)
+ cmpxchg(&xtime_seconds, old_xtime_seconds, new_xtime_seconds);
+}

#ifdef CONFIG_GENERIC_TIME
/**
@@ -100,6 +113,8 @@ static inline void __get_realtime_clock_


} while (read_seqretry(&xtime_lock, seq));

timespec_add_ns(ts, nsecs);
+

+ update_xtime_seconds(ts->tv_sec);
}

/**
@@ -256,6 +271,8 @@ void __init timekeeping_init(void)
clock->cycle_last = clocksource_read(clock);

xtime.tv_sec = sec;
+ update_xtime_seconds(sec);
+
xtime.tv_nsec = 0;
set_normalized_timespec(&wall_to_monotonic,
-xtime.tv_sec, -xtime.tv_nsec);
@@ -290,6 +307,8 @@ static int timekeeping_resume(struct sys
unsigned long sleep_length = now - timekeeping_suspend_time;

xtime.tv_sec += sleep_length;
+ update_xtime_seconds(xtime.tv_sec);
+
wall_to_monotonic.tv_sec -= sleep_length;
total_sleep_time += sleep_length;
}
@@ -464,6 +483,7 @@ void update_wall_time(void)
clock->xtime_nsec -= (u64)NSEC_PER_SEC << clock->shift;
xtime.tv_sec++;
second_overflow();
+ update_xtime_seconds(xtime.tv_sec);
}

/* interpolator bits */

Bill Davidsen

Jul 19, 2007, 1:10:12 PM
Ingo Molnar wrote:
> * Bill Davidsen <davi...@tmr.com> wrote:
>
>>> Does the patch below help?

Doesn't seem to apply against 2.6.22.1, I'm trying 2.6.22.6 as soon as I
recreate it.

Ingo Molnar

Jul 19, 2007, 1:20:11 PM

* Bill Davidsen <davi...@tmr.com> wrote:

> Ingo Molnar wrote:
> >* Bill Davidsen <davi...@tmr.com> wrote:
> >
> >>>Does the patch below help?
>
> Doesn't seem to apply against 2.6.22.1, I'm trying 2.6.22.6 as soon as
> I recreate it.

hm, it's against recent -git.

don't waste your time on 2.6.21.6-cfsv19, it will likely not apply - give
me a few minutes to create a patch for you against 2.6.22.1-cfsv19, ok?

Ingo

Ingo Molnar

Jul 19, 2007, 1:20:14 PM

* Bill Davidsen <davi...@tmr.com> wrote:

> Ingo Molnar wrote:
> >* Bill Davidsen <davi...@tmr.com> wrote:
> >
> >>>Does the patch below help?
>
> Doesn't seem to apply against 2.6.22.1, I'm trying 2.6.22.6 as soon as
> I recreate it.

the patch below is merged against 2.6.22.1-cfs-v19 - does it solve the
autofs problem (without any other bad side-effects)?

Ingo

------------------->

kernel/time/timekeeping.c | 26 +++++++++++++++++++++++---
3 files changed, 40 insertions(+), 24 deletions(-)

Index: linux-cfs-2.6.22.q/include/linux/time.h
===================================================================
--- linux-cfs-2.6.22.q.orig/include/linux/time.h
+++ linux-cfs-2.6.22.q/include/linux/time.h


@@ -91,19 +91,28 @@ static inline struct timespec timespec_s
extern struct timespec xtime;
extern struct timespec wall_to_monotonic;
extern seqlock_t xtime_lock __attribute__((weak));
+extern unsigned long xtime_seconds;

extern unsigned long read_persistent_clock(void);
void timekeeping_init(void);

+extern void __update_xtime_seconds(unsigned long new_xtime_seconds);
+
+static inline void update_xtime_seconds(unsigned long new_xtime_seconds)
+{
+ if (unlikely((long)(new_xtime_seconds - xtime_seconds) > 0))
+ __update_xtime_seconds(new_xtime_seconds);
+}
+
static inline unsigned long get_seconds(void)
{
- return xtime.tv_sec;
+ return xtime_seconds;
}

struct timespec current_kernel_time(void);

#define CURRENT_TIME (current_kernel_time())
-#define CURRENT_TIME_SEC ((struct timespec) { xtime.tv_sec, 0 })
+#define CURRENT_TIME_SEC ((struct timespec) { xtime_seconds, 0 })

extern void do_gettimeofday(struct timeval *tv);
extern int do_settimeofday(struct timespec *tv);

Index: linux-cfs-2.6.22.q/kernel/time.c
===================================================================
--- linux-cfs-2.6.22.q.orig/kernel/time.c
+++ linux-cfs-2.6.22.q/kernel/time.c

- write_seqlock_irqsave(&xtime_lock, flags);
- update_wall_time();
- write_sequnlock_irqrestore(&xtime_lock, flags);
- }
+ update_xtime_seconds(sec);
}

EXPORT_SYMBOL(do_gettimeofday);
Index: linux-cfs-2.6.22.q/kernel/time/timekeeping.c
===================================================================
--- linux-cfs-2.6.22.q.orig/kernel/time/timekeeping.c
+++ linux-cfs-2.6.22.q/kernel/time/timekeeping.c
@@ -38,13 +38,26 @@ EXPORT_SYMBOL(xtime_lock);
 * the usual normalization.
 */
struct timespec xtime __attribute__ ((aligned (16)));
-struct timespec wall_to_monotonic __attribute__ ((aligned (16)));
-
EXPORT_SYMBOL(xtime);

+struct timespec wall_to_monotonic __attribute__ ((aligned (16))) __read_mostly;
+
+unsigned long xtime_seconds __read_mostly;
+
+/* pointer to current clocksource: */
+static struct clocksource *clock __read_mostly;

-static struct clocksource *clock; /* pointer to current clocksource */
+/*
+ * Called when xtime or any xtime-alike result returned to
+ * user-space overflows the xtime_seconds value:
+ */
+void __update_xtime_seconds(unsigned long new_xtime_seconds)
+{
+ unsigned long old_xtime_seconds = xtime_seconds;
+
+ if ((long)(new_xtime_seconds - old_xtime_seconds) > 0)
+ cmpxchg(&xtime_seconds, old_xtime_seconds, new_xtime_seconds);
+}

#ifdef CONFIG_GENERIC_TIME
/**

@@ -92,6 +105,8 @@ static inline void __get_realtime_clock_


} while (read_seqretry(&xtime_lock, seq));

timespec_add_ns(ts, nsecs);
+
+ update_xtime_seconds(ts->tv_sec);
}

/**

@@ -248,6 +263,8 @@ void __init timekeeping_init(void)


clock->cycle_last = clocksource_read(clock);

xtime.tv_sec = sec;
+ update_xtime_seconds(sec);
+
xtime.tv_nsec = 0;
set_normalized_timespec(&wall_to_monotonic,
-xtime.tv_sec, -xtime.tv_nsec);

@@ -281,6 +298,8 @@ static int timekeeping_resume(struct sys


unsigned long sleep_length = now - timekeeping_suspend_time;

xtime.tv_sec += sleep_length;
+ update_xtime_seconds(xtime.tv_sec);
+
wall_to_monotonic.tv_sec -= sleep_length;
}

/* re-base the last cycle value */
@@ -454,6 +473,7 @@ void update_wall_time(void)

Bill Davidsen

Jul 19, 2007, 1:30:12 PM
Bill Davidsen wrote:
> Ingo Molnar wrote:
>> * Bill Davidsen <davi...@tmr.com> wrote:
>>
>>>> Does the patch below help?
>
> Doesn't seem to apply against 2.6.22.1, I'm trying 2.6.22.6 as soon as
> I recreate it.

Applied to 2.6.22-git9, building now.

Ingo Molnar

Jul 19, 2007, 1:50:07 PM

* Bill Davidsen <davi...@tmr.com> wrote:

> Bill Davidsen wrote:
> >Ingo Molnar wrote:
> >>* Bill Davidsen <davi...@tmr.com> wrote:
> >>
> >>>>Does the patch below help?
> >
> >Doesn't seem to apply against 2.6.22.1, I'm trying 2.6.22.6 as soon as
> >I recreate it.
>
> Applied to 2.6.22-git9, building now.

ok, that's fine too.

Ingo

Bill Davidsen

Jul 19, 2007, 10:40:05 PM
Ingo Molnar wrote:
> * Bill Davidsen <davi...@tmr.com> wrote:
>
>>> Does the patch below help?
>> Spectacularly no! With this patch the "glitch1" script with multiple
>> scrolling windows has all xterms and glxgears stop totally dead for
>> ~200ms once per second. I didn't properly test anything else after
>> that.
>
> Bill, could you try the patch below - does it fix the automount problem,
> without introducing new problems?
>
Okay, as noted off-list, after I exported xtime_seconds it now builds
and works. However, there are a *lot* of "section mismatches" which are
not reassuring.

Boots, runs, glitch1 test runs reasonably smoothly. automount has not
used significant CPU yet, but I don't know what triggers it, the bad
behavior did not happen immediately without the patch. However, it looks
very hopeful.

Warnings attached to save you the trouble...

--
Bill Davidsen <davi...@tmr.com>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot

make01pb.des

Markus

Jul 20, 2007, 6:30:10 PM
> hm, Markus indicated that he tried the v2.6.21.6-cfsv19 patch, and
> that does not include the time.c change. Markus - does your kernel
> include the code below? (if yes, please revert it via patch -p1 -R )

As already said, 2.6.22.1-cfs-v19 includes the patch and
2.6.21.6-cfs-v19 does not include it.
But both suffer from the problem.

I now reversed the patch on 2.6.22.1-cfs-v19 but it does not help.
2.6.22-git15 is not working either... so the problem did not magically
disappear like the processes do.

Any further things I can do to track it down? (I go on vacation on
monday for two weeks, but just send them - I'll try them when I am back,
or answer the mails when I don't need to build a new kernel...)


Markus

Ingo Molnar

Jul 22, 2007, 8:10:10 AM

* Markus <list...@web.de> wrote:

> I now reversed the patch on 2.6.22.1-cfs-v19 but it does not help.
> 2.6.22-git15 is not working as well... so the problem did not
> magically disappear like the processes.
>

> Any further things I can do to track it down? [...]

it _seems_ to be related to KDE apps most of the time, right? Linus
analyzed your strace and it suggests that X is dropping the connection.
I.e. it's user-space who gets confused for some reason - perhaps timing.

Would be nice if a KDE/konqueror person could look at your traces and
suggest a better debugging method to track this down - because right now
konqueror just vanishes with only the "konqueror: Fatal IO error: client
killed" message.

Hopefully i've Cc:-ed the right list: konqueror kfm-...@mail.kde.org.
I've attached below Linus' analysis about this mystery 'konqueror just
vanishes' problem, and the strace too. Could you perhaps mention the kde
and Xorg versions you are using?

Ingo

-------------------------------------------

.. got data ..

.. ok, nothing more..

and then finally:

But why does that happen? And why would the scheduler have *anything* to

do with this? No idea. Maybe timing. Maybe the time.c changes. Dunno.

Linus

-------------------------------------------

* Markus <list...@web.de> wrote:

> > i think konqueror tried to say something here about an image problem?

> Well, yes:
> In file kernel/qpixmap_x11.cpp, line 633: Out of memory
> QImage::convertDepth: Image is a null image
> In file kernel/qpixmap_x11.cpp, line 633: Out of memory
> QImage::convertDepth: Image is a null image
> In file kernel/qpixmap_x11.cpp, line 633: Out of memory
> QImage::convertDepth: Image is a null image
> In file kernel/qpixmap_x11.cpp, line 633: Out of memory
> QImage::convertDepth: Image is a null image
> konqueror: Fatal IO error: client killed

i think the problem starts here:

11902 1184699865.141939 read(3, "", 32) = 0 <0.000007>

file descriptor 3 is the Unix domain socket connection to X:

Ingo

11902 1184699848.615128 read(3, "\34a\302!\5\0\200\4a\1\0\0\333\227\10\0\0\352\323\0\0\0\0\0\347 C\0\0\0\0\0", 32) = 32 <0.000011>


11902 1184699848.615201 read(3, 0x7fffb5b9c8b0, 32) = -1 EAGAIN (Resource temporarily unavailable) <0.000009>
11902 1184699848.615252 poll([{fd=3, events=POLLIN, revents=POLLIN}], 1, -1) = 1 <0.009307>
11902 1184699848.624614 read(3, "\1 \303!\0\0\0\0\4\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0b\340T\0\0\0\0\0", 32) = 32 <0.000011>

11902 1184699848.624710 ioctl(3, FIONREAD, [0]) = 0 <0.000009>
11902 1184699848.624762 ioctl(3, FIONREAD, [0]) = 0 <0.000048>

11902 1184699848.624866 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <16.495008>
11902 1184699865.119950 ioctl(3, FIONREAD, [0]) = 0 <0.000006>

11902 1184699865.119988 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120031 ioctl(3, FIONREAD, [0]) = 0 <0.000005>

11902 1184699865.120065 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120106 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120140 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.120181 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120214 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120255 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.120289 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120330 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120364 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120405 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120438 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120479 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.120513 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120554 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.120587 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120628 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120662 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120703 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120736 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120777 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.120839 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120881 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120915 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.120956 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.120990 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000008>
11902 1184699865.121030 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.121064 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121105 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.121156 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121198 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.121232 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121273 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.121307 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121348 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.121382 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121423 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
11902 1184699865.121504 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000008>
11902 1184699865.121546 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.121584 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121626 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.121659 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121704 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.121738 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121780 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.121813 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121855 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.121889 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.121930 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.121964 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122005 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122038 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122080 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122113 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122154 ioctl(3, FIONREAD, [0]) = 0 <0.000007>
11902 1184699865.122189 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122231 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122278 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000015>
11902 1184699865.122329 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122362 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.122403 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122437 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122478 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122512 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122553 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.122587 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122628 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.122662 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122703 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.122737 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122778 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122811 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122865 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.122900 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.122941 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.122975 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123031 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123065 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123106 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.123140 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123181 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123215 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123256 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.123290 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123331 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123365 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123406 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123440 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123481 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123515 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123556 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123589 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.123631 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.123665 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123705 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.123739 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123780 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.123814 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123855 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123889 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.123930 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.123964 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124005 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.124060 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000008>
11902 1184699865.124102 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.124135 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124177 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124211 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124255 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.124289 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124330 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124363 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000008>
11902 1184699865.124405 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124439 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124479 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124513 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124554 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124588 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124629 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124662 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124703 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.124737 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124778 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.124811 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124865 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.124899 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.124941 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.124974 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125015 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125049 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125090 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125136 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000008>
11902 1184699865.125178 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.125212 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125254 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.125287 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125329 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125363 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125404 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.125440 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125482 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125516 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125557 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.125591 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000044>
11902 1184699865.125706 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125740 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125781 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.125815 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125856 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125889 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.125930 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.125964 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126005 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126039 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126080 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126113 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.126154 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.126188 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126229 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.126263 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126304 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126338 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126379 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126413 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126454 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126488 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126529 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.126563 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126604 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126638 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126679 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.126712 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126753 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.126787 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126828 ioctl(3, FIONREAD, [0]) = 0 <0.000016>
11902 1184699865.126873 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126914 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.126948 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.126988 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.127039 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.127083 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.127117 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.127159 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.127193 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.127234 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.127267 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000011>
11902 1184699865.127325 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.127359 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000483>
11902 1184699865.127882 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.127917 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.127957 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.127990 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128030 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128063 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.128102 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128135 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128175 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128207 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128247 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128279 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128320 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128352 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.128391 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128423 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128463 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128495 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128535 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.128568 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128607 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128639 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.128679 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128711 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128751 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128783 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.128823 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128870 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.128910 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.128998 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129038 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129071 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.129111 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129143 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.129183 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129215 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129254 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129287 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129326 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.129359 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129399 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129432 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129471 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129503 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.129543 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.129575 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.129614 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129647 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129686 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129718 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.129758 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.129790 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.129830 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.129876 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129915 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.129948 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.129987 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130021 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.130061 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130094 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130133 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130166 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.130206 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.130239 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130278 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130311 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.130351 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.130384 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.130423 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130456 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130496 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.130530 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130569 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130602 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130641 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130674 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.130714 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130747 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130786 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130819 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130870 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130902 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.130942 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.130974 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131014 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131046 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131087 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131120 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.131160 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131192 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.131232 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131264 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.131304 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.131336 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.131376 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.131408 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.131448 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131480 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131519 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131552 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131591 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131624 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131663 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131696 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131735 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131768 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131807 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.131840 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.131892 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.131924 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.131964 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.131996 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.132036 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.132070 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132109 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132142 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.132182 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132215 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.132255 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.132298 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.132338 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132370 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132409 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132442 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132481 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.132513 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132552 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132585 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132624 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.132656 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.132696 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132728 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132768 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132801 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132840 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.132884 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132923 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.132956 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.132995 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.133027 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.133067 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133099 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133138 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133171 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.133210 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133242 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133281 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133333 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133374 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133406 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.133445 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133478 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133517 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133553 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.133592 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133624 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133663 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133696 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.133735 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133767 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133806 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133838 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133878 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.133923 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.133963 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.133995 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.134035 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.134067 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.134107 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134141 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134180 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134213 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134252 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134285 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134324 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.134356 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134396 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134428 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.134469 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.134502 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134541 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134574 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134613 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134646 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134685 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134718 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134757 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134790 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.134830 ioctl(3, FIONREAD, [0]) = 0 <0.000017>
11902 1184699865.134874 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000021>
11902 1184699865.134931 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.134964 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135004 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135036 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135075 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135108 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000010>
11902 1184699865.135151 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135184 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135223 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135256 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.135295 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135328 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135370 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135403 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135442 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135475 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135515 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.135547 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.135587 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135631 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.135671 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135703 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135742 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135775 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.135814 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.135846 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.135886 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.135918 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.135971 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136003 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.136043 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136075 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.136115 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136147 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136189 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136223 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136263 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136295 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136335 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136367 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.136407 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136439 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136479 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136511 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136551 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136583 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136623 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136655 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136694 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136727 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136766 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136799 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136838 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.136884 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136923 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.136956 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.136996 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137028 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137067 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137100 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137139 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137172 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137211 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137243 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137283 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137316 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137356 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137388 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137428 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137460 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.137500 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137532 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.137572 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137604 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.137644 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137676 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137716 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137748 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137787 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137820 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137861 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.137893 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.137932 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.137979 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.138018 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138051 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.138090 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138122 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138161 ioctl(3, FIONREAD, [0]) = 0 <0.000006>
11902 1184699865.138193 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138232 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138266 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.138322 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138355 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138394 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138426 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138465 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138498 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.138537 ioctl(3, FIONREAD, [0]) = 0 <0.000008>
11902 1184699865.138572 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.138612 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138644 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138683 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138715 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138754 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138786 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.138825 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.138871 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.138910 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141180 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.141222 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.141254 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.141292 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141324 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.141362 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141393 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.141432 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141463 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.141502 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141533 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.141572 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141603 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000007>
11902 1184699865.141641 ioctl(3, FIONREAD, [0]) = 0 <0.000004>
11902 1184699865.141673 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.141711 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141742 select(10, [3 4 5 7 9], [], [], NULL) = 1 (in [3]) <0.000006>
11902 1184699865.141781 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141809 poll([{fd=3, events=POLLIN, revents=POLLIN|POLLHUP}], 1, 0) = 1 <0.000005>
11902 1184699865.141838 ioctl(3, FIONREAD, [0]) = 0 <0.000005>
11902 1184699865.141939 read(3, "", 32) = 0 <0.000007>

Markus

Jul 22, 2007, 10:30:07 AM
> it _seems_ to be related to KDE apps most of the time, right?
You are right (although some gtk apps have the same problem), but that
may be caused by the fact that I don't _see_ other apps vanish... I'll
try some console-based apps...

> Could you perhaps mention the kde and Xorg versions you are using?

I have konqueror 3.5.5 and Xorg 7.2 on a native amd64 system (gcc 4.1.2
and glibc 2.5).


Markus

Markus

Aug 9, 2007, 1:40:11 PM
Well, I am back now, but the problem still exists in 2.6.23-rc2.
And as there is nothing more I can do, that's it for now.

Markus

Ingo Molnar

Aug 10, 2007, 3:50:08 AM

* Markus <list...@web.de> wrote:

> Well, I am back now, but the problem still exists in 2.6.23-rc2. And
> as there is nothing more I can do thats it for now.

could you send me your debug-info:

http://people.redhat.com/mingo/cfs-scheduler/tools/cfs-debug-info.sh

just run that script on a 2.6.23-rc2 system and send me the file it
produces. I've got a theory about what might be going on, and this
debug-info could help prove/disprove it.

Ingo

Markus

Aug 14, 2007, 1:20:10 PM
> could you send me your debug-info:
>
> http://people.redhat.com/mingo/cfs-scheduler/tools/cfs-debug-info.sh
>
> just run that script on a 2.6.23-rc2 system and send me the file it
> produces. I've got a theory about what might be going on, and this
> debug-info could help prove/disprove it.
Done by private mail on Friday.

Also tried the very current linux-2.6.git (Friday as well) with
sched-ingo-combo.patch (as told in a private answer mail).

Nothing fixed it so far.

If I can do anything...

Markus

Markus

Oct 16, 2007, 8:10:10 PM
Well, I am now running 2.6.22 (without CFS) and have now seen it once
(within a month...): exactly the same message from konqueror was
produced.
So I think it's a general problem in konqueror that was hidden, and
somehow it's triggered much more often with CFS.

I just wonder why nobody else has this problem.

Markus

PS: I am currently building a 2.6.23.1...
