
using select as a sleep call


novic...@gmail.com
Feb 19, 2010, 6:29:21 PM
Hello,

I was wondering if it is valid to use select as a sleep call. When I
use select and try to sleep, it seems the elapsed time is always 4
milliseconds at a minimum. I can not sleep for 1 millisecond only.
And, if I set the sleep to longer than 4 milliseconds, the elapsed time
is also greater than the time I set.

Does anyone know why there would be any fixed overhead in using select
that would make it always 4 milliseconds?

My test program is attached.

Cheers,
Ivan


#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>
#include <time.h>

struct timespec diff_timespec(struct timespec start, struct timespec end);
long long millisec_elapsed(struct timespec diff);

/* Sleep for `microsec` microseconds using select(). */
void test1(long microsec)
{
        struct timeval delay;
        delay.tv_sec = microsec / 1000000;
        delay.tv_usec = microsec % 1000000;
        (void) select(0, NULL, NULL, NULL, &delay);
}

/* Sleep for `microsec` microseconds using nanosleep(). */
void test2(long microsec)
{
        struct timespec delay;
        delay.tv_sec = microsec / 1000000;
        delay.tv_nsec = (microsec % 1000000) * 1000;
        nanosleep(&delay, NULL);
}

int
main(int argc, char **argv)
{
        struct timespec start;
        struct timespec end;
        struct timespec diff;
        int i;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (i = 0; i < 1000; ++i){
                test1(1);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);
        diff = diff_timespec(start, end);
        printf("operations took %lld milliseconds\n", millisec_elapsed(diff));

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (i = 0; i < 1000; ++i){
                test2(1);
        }
        clock_gettime(CLOCK_MONOTONIC, &end);
        diff = diff_timespec(start, end);
        printf("operations took %lld milliseconds\n", millisec_elapsed(diff));

        return 0;
}

struct timespec diff_timespec(struct timespec start, struct timespec end)
{
        struct timespec result;

        if (end.tv_nsec < start.tv_nsec){ /* perform carry as in normal subtraction */
                result.tv_nsec = 1000000000 + end.tv_nsec - start.tv_nsec;
                result.tv_sec = end.tv_sec - 1 - start.tv_sec;
        }
        else{
                result.tv_nsec = end.tv_nsec - start.tv_nsec;
                result.tv_sec = end.tv_sec - start.tv_sec;
        }

        return result;
}

long long millisec_elapsed(struct timespec diff)
{
        return ((long long)diff.tv_sec * 1000) + (diff.tv_nsec / 1000000);
}


Nicolas George
Feb 19, 2010, 7:27:34 PM
"novic...@gmail.com" wrote in message
<71344f54-8b08-4a9f...@u19g2000prh.googlegroups.com>:

> I was wondering if it is valid to use select as a sleep call. When I
> use select and try to sleep, it seems the elapsed time is always 4
> milliseconds at a minimum. I can not sleep for 1 millisecond only.
> And, if I set the sleep to longer than 4 milliseconds, the elapsed time
> is also greater than the time I set.
>
> Does anyone know why there would be any fixed overhead in using select
> that would make it always 4 milliseconds?

A lot of schedulers use a fixed-interval timer interrupt to implement all
time-related scheduling. 4 ms means 250 Hz, which is a common value for the
timer interrupt on desktop setups.

You should be more specific about the exact OS you use, including the kernel
configuration.

Jens Thoms Toerring
Feb 19, 2010, 7:32:48 PM
novic...@gmail.com <novic...@gmail.com> wrote:
> I was wondering if it is valid to use select as a sleep call.

Yes, that's one thing select() gets used for.

> When I use select and try to sleep, it seems the elapsed time is always
> 4 milliseconds at a minimum. I can not sleep for 1 millisecond only.
> And, if I set the sleep to longer than 4 milliseconds, the elapsed time
> is also greater than the time I set.

> Does anyone know why there would be any fixed overhead in using select
> that would make it always 4 milliseconds?

You have to consider that you're using a multi-tasking system,
i.e. a system on which several processes run "in parallel" but
not really at the same time; the system just makes it look like
that by quickly switching between the different processes. Thus
when your process "runs" it just runs for a short time, a
so-called timeslice, then it gets suspended and some other
process is run, then your process may get run again for the
duration of a timeslice, gets suspended again etc. until it's
finished.

Now when your process puts itself to sleep, e.g. by calling
select() with just a timeout or by calling usleep() etc., then
it tells the system: "I have nothing to do at the moment, you
may start another process while I'm waiting." And unless the
other process that then will get run also goes to sleep, your
process has to wait (at least) until the other process has used
up its timeslice. Thus, when you ask for your process to be put
to sleep then you can't expect that exactly after the time you
wanted it to sleep it will get rescheduled. The timeout you pass
to select() (or usleep() or similar functions) is thus only a
lower limit, i.e. your process won't be woken up before it's over
- but it can take a lot longer before your process is run again
than that.

Switching between processes takes time. If timeslices are very
short, a lot of the CPU time will be wasted just for that. Thus
the length of the timeslice is a compromise between not spending
too much time on task switching on the one hand and making it
look for the user as if all processes run at the same time on
the other. The 4 ms you have seen look like a reasonable value
for a timeslice - some years ago you normally would have had at
least 10 ms but with the newer, faster machines, timeslices of
4 ms or 1 ms get more and more common. On some systems the length
of the timeslice can be set when compiling the kernel (e.g. on
Linux you can select between 100 Hz, 250 Hz and 1 kHz). Going
beyond that is possible but would make the machine seem to be a
lot slower without any benefit for most users.

So, you have to accept that with all normal kinds of "sleeping"
you can only specify a lower bound for the time your process will
sleep, but there's no upper limit - the more processes are waiting
to be run the longer it may take (if you want to experiment try to
get your machine in a state where it's running out of memory and
starts to swap heavily and see how long those "sleeps" take then).

Regards, Jens
--
\ Jens Thoms Toerring ___ j...@toerring.de
\__________________________ http://toerring.de

novic...@gmail.com
Feb 19, 2010, 7:35:22 PM
On Feb 19, 4:27 pm, Nicolas George <nicolas$geo...@salle-s.org> wrote:
> "novicki...@gmail.com"  wrote in message
>
> <71344f54-8b08-4a9f-a3dd-5870e22ac...@u19g2000prh.googlegroups.com>:

Ahhh ok. I am using Suse Enterprise Linux 11. I guess my box is set
to 250 Hz interval timer.

Thanks!

Cheers,
Ivan Novick

Ersek, Laszlo
Feb 19, 2010, 9:10:33 PM

> I was wondering if it is valid to use select as a sleep call.

Yes, it is, the SUS explicitly specifies this behavior. Link to v2:

http://www.opengroup.org/onlinepubs/007908775/xsh/select.html

It has the benefit of not interfering with (some) other timers.


> When i
> use select and try to sleep, it seems the elapsed time is always 4
> milliseconds at a minimum.

This is also explicitly allowed by the standard -- search for the word
"granularity". (The specific reasons were explained by others.) The
descriptions of other interfaces use the word "resolution" instead.


> I can not sleep for 1 millisecond only.

http://kerneltrap.org/node/6750

That's an old article, but some parts of it should still be true.


http://www.ibm.com/developerworks/linux/library/l-cfs/#N10083

"For SCHED_RR and SCHED_FIFO policies, the real-time scheduling module
is used (that module is implemented in kernel/sched_rt.c)."


Or try to busy-wait in user-space if applicable.


Another way that might work is this: accept that you will be woken up
most of the time a bit late, and keep a running balance between where
you should be in time and where you actually are in time. If you're
late, work relentlessly. If you are early (have positive balance, ie.
surplus) then sleep off that surplus. That will most definitely push you
into deficit because of coarse timer resolution, so you'll work a bit
relentlessly afterwards. The greater the precision of your select(), the
smaller the amplitude of your deficit will be, and the shorter the "work
relentlessly" bursts will last.

Of course if your platform is incapable of coping, in the longer term,
with the event rate you have in mind, your deficit will accumulate
without bound.

...

My "udp_copy2" utility implements a packet scheduler that is
"mathematically precise" on the average, ie. it shouldn't drift in the
long term at all and enables a finely tunable packet rate.

http://freshmeat.net/projects/udp_copy2

Quoting from "timer_design.txt" -- take it for what it's worth:

----v----

N[0] N[1] N[2] N[3] N[4]
| | | | |
| |<--S[0]--->| |<-S[1]-->| |<-S[2]->| |<-S[3]-->| |
+--+-----------+--+---------+-+--------+-+---------+--+----
| | | | | | | | | |
| C[0] | C[1] | C[2] | C[3] | C[4]
| | | | |

Definitions:

N[I] := Nominal completion time of event #(I-1).
Also nominal start time of event #I.
For all I >= 0.

L[I] := N[I+1] - N[I]
Nominal length of event #I.
For all I >= 0.

C[I] := Real completion time of event #I.
For all I >= 0.
Let C[-1] := N[0].

S[I] := N[I+1] - C[I]
Amount of time to sleep after real completion of event #I
until nominal completion time of event #I.
For all I >= -1.

Thus:

1. S[-1] = N[0] - C[-1]
= N[0] - N[0] substituted definition
= 0.

2. For all I >= 0:
S[I] = S[I] - S[I-1] + S[I-1] introduced S[I-1]
= S[I-1] + (S[I] - S[I-1]) regrouped
= S[I-1] + (N[I+1] - C[I] - (N[I] - C[I-1])) subst. def.
= S[I-1] + (N[I+1] - N[I]) - (C[I] - C[I-1]) regrouped
= S[I-1] + (L[I] - (C[I] - C[I-1])) subst. def.

This means that the amount of time to sleep (S[I]) right after the real
completion of the current event #I (C[I], "now") can be determined using the
previous sleep length (S[I-1]), the nominal length of the current event
(L[I]), and the time passed since the real completion time of the
previous event (C[I] - C[I-1]).

We can check that, for example, this yields for I=0:

S[0] = S[-1] + (L[0] - (C[0] - C[-1]))
= 0 + ((N[1] - N[0]) - (C[0] - N[0]))
= N[1] - N[0] - C[0] + N[0]
= N[1] - C[0]


In the algorithm below, for all I >= 0, exec_event(I) executes event #I and
reveals its nominal length L[I].

Algorithm:

C[-1] := current_time
S[-1] := 0
I := 0

LOOP forever
L[I] := exec_event(I)
C[I] := current_time
S[I] := S[I-1] + (L[I] - (C[I] - C[I-1]))
exec_sleep(S[I])
I := I + 1
END LOOP

[...]

the resolution of the time line is 1/K microseconds

----^----

Cheers,
lacos

Rick Jones
Feb 19, 2010, 9:18:58 PM
novic...@gmail.com <novic...@gmail.com> wrote:
> I was wondering if it is valid to use select as a sleep call.

"Back in the day" when the only choices were select() and sleep(),
given sleep() only slept for an integer number of seconds, those who
wanted a short sleep would use select(). And back then one often only
got 10 millisecond resolution - and liked it!-)

With the advent of usleep() and nanosleep() however many years ago
using select() for short sleeps has fallen out of favor.

rick jones
--
Process shall set you free from the need for rational thought.
these opinions are mine, all mine; HP might not want them anyway... :)
feel free to post, OR email to rick.jones2 in hp.com but NOT BOTH...

Ersek, Laszlo
Feb 19, 2010, 9:20:26 PM

> I can not sleep for 1 millisecond only.

This may be useful too:

http://www.mjmwired.net/kernel/Documentation/hpet.txt

Cheers,
lacos

Alan Curry
Feb 19, 2010, 10:20:55 PM
In article <852ae930-c0f1-4406...@b9g2000pri.googlegroups.com>,

novic...@gmail.com <novic...@gmail.com> wrote:
|
|Ahhh ok. I am using Suse Enterprise Linux 11. I guess my box is set
|to 250 Hz interval timer.

You can find out like this:

zcat /proc/config.gz | grep HZ

Unless someone's been dumb enough to disable /proc/config.gz

With CONFIG_NO_HZ=y you can get really short sleeps, as the kernel tries to
sleep the time actually requested, instead of rounding it up to the next tick
of a fixed-period timer. You probably won't get 1 usec (a million syscalls in
one second?!) but you may get better resolution than the 100Hz, 250Hz, or
1000Hz timers.

I got this from your program on a CONFIG_NO_HZ machine:

operations took 56 milliseconds
operations took 56 milliseconds

So it appears that each sleep took 56 usec, equivalent to over 17000Hz!

--
Alan Curry

Chris Friesen
Feb 20, 2010, 12:20:03 AM
On 02/19/2010 05:29 PM, novic...@gmail.com wrote:

> Does anyone know why there would be any fixed overhead in using select
> that would make it always 4 milliseconds?

As others have said, HZ is probably 250.

You have a couple options:

1) nanosleep() might give finer granularity
2) you could set HZ to 1000
3) you could enable CONFIG_HIGH_RES_TIMERS

Chris

Nicolas George
Feb 20, 2010, 4:08:28 AM
Your message explains very well how things work, but it has a technical
mistake I would like to correct for it to be completely accurate.

Jens Thoms Toerring wrote in message <7u8otg...@mid.uni-berlin.de>:


> Switching between processes takes time. If timeslices are very
> short, a lot of the CPU time will be wasted just for that. Thus
> the length of the timeslice is a compromise between not spending
> too much time on task switching on the one hand and making it
> look for the user as if all processes run at the same time on
> the other. The 4 ms you have seen look like a reasonable value
> for a timeslice - some years ago you normally would have had at
> least 10 ms but with the newer, faster machines, timeslices of
> 4 ms or 1 ms get more and more common. On some systems the length
> of the timeslice can be set when compiling the kernel (e.g. on
> Linux you can select between 100 Hz, 250 Hz and 1 kHz).

You are confusing timeslice and timer interrupt period.

There are two expensive operations: interrupting the process to jump to
kernel space at a timer interrupt is expensive, but scheduling another
process to run is even more expensive. This is especially related to memory
caches: if process B is scheduled to run, then all memory caches kept warm
by process A become useless, and B needs to load all its data from slow RAM.

The timer interrupt (when there is one) provides the kernel with the basic
time scale: no time-related scheduling can be done except at the precision
of the timer interrupt [1]. But most of the time, the scheduler will just do
nothing and jump back to the current process. For example, if there are two
processes A and B requesting CPU time with the same priority, and A has been
running since the previous timer interrupt, the scheduler will probably let
A run for another four or nine timer interrupt periods before preempting it
and running B.

100 / 250 / 1000 Hz is the frequency of the timer interrupt. The duration of
the timeslice is not so simple to predict: modern schedulers use a variable
one. For example, the Linux kernel uses heuristics to distinguish
interactive processes from computational processes, and gives a smaller
timeslice to niced processes.

You can see for yourself using the following perl snippet:

perl -MTime::HiRes="time,sleep" -e '$t=time;
while(1) { $d = time - $t; $t += $d; printf "%.10f\n", $d if $d > 1E-3 }'

Run it alone: it will probably print very few lines. Run it in parallel with
"dd if=/dev/urandom of=/dev/null" (a common way to waste CPU time; run one
per CPU/core if you have a SMP box), and you will see a lot of lines. On my
box, the average output is 22 ms, with a standard deviation of 10 ms, which
means that the timeslice of the dd process is about 22 ms.

> Going
> beyond that is possible but would make the machine seem to be a
> lot slower without any benefit for most users.

Except for timerless schedulers. The principle is this: unless some external
event occurs (a packet on the network card raises an interrupt, for
example), when the scheduler is about to run a process, it has exactly as
much information as it will have when it is invoked by the next timer
interrupt.

So if the next timer interrupt decides "I will not change the running
process. Go.", the current call can predict "the next timer interrupt will
not change the running process". In fact, the current call can predict "I
will not change anything in the next 12 ms". If the hardware is flexible
enough, the scheduler can just disable the next 11 timer interrupts and save
some time and power.

If the hardware allows that and the scheduler is programmed to use it, then
it does not have to have a constant scheduling period: it can decide to
sleep for 42 µs once and 420 ms later. It only needs to be careful not to
set up too small intervals too often.

Recent Linux kernels can be compiled with such a timerless scheduler. I do
not know if other mainline kernels have it.


[1]: Unless the hardware provides other programmable timed interrupt
generators. For example PCs have an RTC device that can generate periodic
interrupts, and the Linux kernel gives access to it through a device node. A
process can thus set it up to be woken at 8 kHz, and will actually run each
time provided its priority is high enough.

Jens Thoms Toerring
Feb 20, 2010, 6:11:03 PM
Nicolas George <nicolas$geo...@salle-s.org> wrote:
> Your message explains very well how things work, but it has a technical
> mistake I would like to correct for it to be completely accurate.

> You are confusing timeslice and timer interrupt period.

[...all the good stuff snipped...]

Hi Nicolas,

thank you a lot for this very interesting and enlightening
correction!
Best regards, Jens

David Schwartz
Feb 21, 2010, 4:51:17 AM
On Feb 19, 3:29 pm, "novicki...@gmail.com" <novicki...@gmail.com>
wrote:

> I can not sleep for 1 millisecond only.

In general, an ordinary, non-privileged process cannot sleep for 1
millisecond other than by just wasting 1 millisecond of CPU. The
problem is that this is too little time to sensibly give to another
process.

What is your outer problem? Odds are there's a sensible solution to
it.

DS

Rainer Weikusat
Feb 21, 2010, 4:05:21 PM
Nicolas George <nicolas$geo...@salle-s.org> writes:

[...]

>> Going
>> beyond that is possible but would make the machine seem to be a
>> lot slower without any benefit for most users.
>
> Except for timerless schedulers.

The actual name of this feature is 'tickless', not 'timerless'.

Rainer Weikusat
Feb 21, 2010, 4:07:10 PM
pac...@kosh.dhis.org (Alan Curry) writes:
> novic...@gmail.com <novic...@gmail.com> wrote:
> |
> |Ahhh ok. I am using Suse Enterprise Linux 11. I guess my box is set
> |to 250 Hz interval timer.
>
> You can find out like this:
>
> zcat /proc/config.gz | grep HZ
>
> Unless someone's been dumb enough to disable /proc/config.gz

It is pretty pointless to keep the .config file used to compile a
kernel in kernel memory when a real file on some persistent medium, which
is not permanently loaded into RAM, 'works' just as well.

David Schwartz
Feb 22, 2010, 3:39:00 AM
On Feb 21, 1:07 pm, Rainer Weikusat <rweiku...@mssgmbh.com> wrote:

> > Unless someone's been dumb enough to disable /proc/config.gz

> It is pretty pointless to keep the .config-file used to compile a
> kernel in kernel memory when real file on some persistent medium which
> is not permanently loaded into RAM 'works' just as well.

It's saved my bacon more than once, FWIW.

DS

Jorge
Feb 22, 2010, 4:23:27 AM
On Feb 21, 10:51 am, David Schwartz <dav...@webmaster.com> wrote:
> On Feb 19, 3:29 pm, "novicki...@gmail.com" <novicki...@gmail.com>
> wrote:
>
> > I can not sleep for 1 millisecond only.
>
> In general, an ordinary, non-privileged process cannot sleep for 1
> millisecond other than by just wasting 1 millisecond of CPU. The
> problem is that this is too little time to sensibly give to another
> process.

Are you sure ?

In a machine running @ 2.5GHz, 1ms accounts for 2.5 million cycles !
Add to that today's big CPU caches plus that there's usually a 2nd
core running in parallel and... ?
--
Jorge.

David Schwartz
Feb 22, 2010, 6:54:59 AM
On Feb 22, 1:23 am, Jorge <jo...@jorgechamorro.com> wrote:

> > In general, an ordinary, non-privileged process cannot sleep for 1
> > millisecond other than by just wasting 1 millisecond of CPU. The
> > problem is that this is too little time to sensibly give to another
> > process.

> Are you sure ?

Yeah.

> In a machine running @ 2.5GHz, 1ms accounts for 2.5 million cycles !

Right, but the problem is that as machines have gotten faster, the
work we've wanted to do on them has gotten bigger as well. So the
"relative value" of 1 ms hasn't really changed all that much.

> Add to that today's big CPU caches

That makes things worse. It makes the penalty for switching from one
chunk of code to another much greater.

> plus that there's usually a 2nd
> core running in parallel and... ?

That increases the penalty on typical systems. When you switch
processes, you blow out all the caches. This means the CPU puts a
drain on RAM to refill the caches. Generally, the path to RAM is
shared by all cores, so this makes the cost of small timeslices
relatively higher, not lower.

DS

Jorge
Feb 22, 2010, 3:00:06 PM
On Feb 22, 12:54 pm, David Schwartz <dav...@webmaster.com> wrote:
> On Feb 22, 1:23 am, Jorge <jo...@jorgechamorro.com> wrote:
>
> > > In general, an ordinary, non-privileged process cannot sleep for 1
> > > millisecond other than by just wasting 1 millisecond of CPU. The
> > > problem is that this is too little time to sensibly give to another
> > > process.
> > Are you sure ?
>
> Yeah.
>
> > In a machine running @ 2.5GHz, 1ms accounts for 2.5 million cycles !
>
> Right, but the problem is that as machines have gotten faster, the
> work we've wanted to do on them has gotten bigger as well. So the
> "relative value" of 1 ms hasn't really changed all that much.

I wouldn't say so. My current laptop is certainly much faster than the
previous one, in spite of the new -latest- unix OS that comes with it.

> > Add to that today's big CPU caches
>
> That makes things worse. It makes the penalty for switching from one
> chunk of code to another much greater.
>
> > plus that there's usually a 2nd
> > core running in parallel and... ?
>
> That increase the penalty on typical systems. When you switch
> processes, you blow out all the caches. This means the CPU puts a
> drain on RAM to refill the caches.

Then, maybe we ought to get rid of the caches... :-)

> Generally, the path to RAM is
> shared by all cores, so this makes the cost of small timeslices
> relatively higher, not lower.

That's true. And the cache is -often- shared too.
--
Jorge.

David Given
Feb 22, 2010, 4:25:45 PM
On 22/02/10 20:00, Jorge wrote:
[...]

> I wouldn't say so. My current laptop is certainly much faster than the
> previous one, in spite of the new -latest- unix OS that comes with it.

I've got right here on my desk a set of Infomagic Linux CDs dating from
1995, with Slackware 2.2, Debian 0.91/3 and the Linux kernel version 1.2.

I keep meaning one day to install it on a modern machine just to watch
it fly. Alas, it won't have support for various bits of now-standard
hardware, like USB, which rather limits its usefulness.

Of course, modern systems have a kernel that is genuinely better
designed and faster, plus code that is better designed and compiled with
a vastly superior compiler, thus making it faster... but all at the
expense of being much, *much* bigger and resource hungry.

I remember the mental whiplash when I realised that the Enlightenment
window manager is now considered ultra-lightweight! And just think of
how Eight Megs And Constantly Swapping got its nickname...

--
┌─── dg@cowlark.com ───── http://www.cowlark.com ─────

│ 𝕻𝖍'𝖓𝖌𝖑𝖚𝖎 𝖒𝖌𝖑𝖜'𝖓𝖆𝖋𝖍 𝕮𝖙𝖍𝖚𝖑𝖍𝖚
│ 𝕽'𝖑𝖞𝖊𝖍 𝖜𝖌𝖆𝖍'𝖓𝖆𝖌𝖑 𝖋𝖍𝖙𝖆𝖌𝖓.

Rick Jones
Feb 22, 2010, 7:30:45 PM
Jorge <jo...@jorgechamorro.com> wrote:

> On Feb 22, 12:54 pm, David Schwartz <dav...@webmaster.com> wrote:

> > Right, but the problem is that as machines have gotten faster, the
> > work we've wanted to do on them has gotten bigger as well. So the
> > "relative value" of 1 ms hasn't really changed all that much.

> I wouldn't say so. My current laptop is certainly much faster than
> the previous one, in spite of the new -latest- unix OS that comes
> with it.

Be patient - a few more releases and the underlying expectations for
available resources will make your current laptop seem slow again :)

rick jones
--
the road to hell is paved with business decisions...

David Schwartz
Feb 25, 2010, 9:44:30 PM
On Feb 22, 12:00 pm, Jorge <jo...@jorgechamorro.com> wrote:

> > Right, but the problem is that as machines have gotten faster, the
> > work we've wanted to do on them has gotten bigger as well. So the
> > "relative value" of 1 ms hasn't really changed all that much.

> I wouldn't say so. My current laptop is certainly much faster than the
> previous one, in spite of the new -latest- unix OS that comes with it.

Why did you buy a faster laptop then? Weren't you perfectly happy with
the old one? Can't it still do everything it did back then just as
well?

Hint: The answer is that what you want to do has changed because it
can. And the value of 1ms of CPU time relative to what you want to do
has therefore not changed very much. The net effect is that long term
the relative value of 1ms of an average CPU has stayed about the same.
It's 1ms of average.

DS
