
Poor performance with 2.5.52, load and process in D state


Paolo Ciarrocchi

Dec 22, 2002, 6:50:04 AM

From: Andrew Morton <ak...@digeo.com>
> Paolo Ciarrocchi wrote:
> >
> > Hi all,
> > I booted 2.5.52 with the following parameters:
> > apm=off mem=32M (not sure about the amount; anyway, I can reproduce
> > the problem for sure with 32M and 40M)
> >
> > Then I tried the osdb (www.osdb.org) benchmark with
> > 40M of data.
> >
> > $./bin/osdb-pg --nomulti
> >
> > the result is that after a few seconds of running top I see the postmaster
> > process in D state and a lot of iowait.
>
> What exactly _is_ the issue? The machine is achieving 25% CPU utilisation
> in user code, 6-9% in system code. It is doing a lot of I/O, and is
> getting work done.
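
(For reference, besides watching top, a rough way to spot processes stuck in uninterruptible
sleep and to watch how blocked the box is on I/O is a sketch like the one below; the exact
columns depend on the procps version:)

    ps axo stat,pid,comm | awk '$1 ~ /^D/'   # processes currently in D (uninterruptible) sleep
    vmstat 1                                 # the 'b' column counts processes blocked on I/O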

OK, I'm back with the results of the osdb test against 2.4.19 and 2.5.52.
Both kernels were booted with apm=off mem=40M;
osdb ran with 40M of data.
To summarize the results:
2.4.19 "Single User Test" 806.78 seconds (0:13:26.78)
2.5.52 "Single User Test" 3771.85 seconds (1:02:51.85)

Something strange is happening here.

The full logs of the tests follow:

"osdb"
"Invoked: ./bin/osdb-pg --nomulti --logfile /var/lib/pgsql/2.4.19-nomulti-40M.log"
create_tables() 0.10 seconds return value = 0
load() 34.59 seconds return value = 0
create_idx_uniques_key_bt() 24.31 seconds return value = 0
create_idx_updates_key_bt() 26.53 seconds return value = 0
create_idx_hundred_key_bt() 22.78 seconds return value = 0
create_idx_tenpct_key_bt() 24.08 seconds return value = 0
create_idx_tenpct_key_code_bt() 6.03 seconds return value = 0
create_idx_tiny_key_bt() 0.33 seconds return value = 0
create_idx_tenpct_int_bt() 2.97 seconds return value = 0
create_idx_tenpct_signed_bt() 2.44 seconds return value = 0
create_idx_uniques_code_h() 13.95 seconds return value = 0
create_idx_tenpct_double_bt() 3.78 seconds return value = 0
create_idx_updates_decim_bt() 7.74 seconds return value = 0
create_idx_tenpct_float_bt() 2.61 seconds return value = 0
create_idx_updates_int_bt() 4.37 seconds return value = 0
create_idx_tenpct_decim_bt() 4.84 seconds return value = 0
create_idx_hundred_code_h() 7.84 seconds return value = 0
create_idx_tenpct_name_h() 124.83 seconds return value = 0
create_idx_updates_code_h() 11.82 seconds return value = 0
create_idx_tenpct_code_h() 11.95 seconds return value = 0
create_idx_updates_double_bt() 3.99 seconds return value = 0
create_idx_hundred_foreign() 12.27 seconds return value = 0
populateDataBase() 355.06 seconds return value = 0

"Logical database size 40MB"

sel_1_cl() 0.24 seconds return value = 1
join_3_cl() 0.10 seconds return value = 0
sel_100_ncl() 0.33 seconds return value = 100
table_scan() 1.81 seconds return value = 0
agg_func() 16.48 seconds return value = 100
agg_scal() 1.51 seconds return value = 0
sel_100_cl() 1.17 seconds return value = 100
join_3_ncl() 14.47 seconds return value = 1
sel_10pct_ncl() 3.16 seconds return value = 10000
agg_simple_report() 66.67 seconds return value = 990009900
agg_info_retrieval() 0.26 seconds return value = 0
agg_create_view() 0.48 seconds return value = 0
agg_subtotal_report() 4.81 seconds return value = 1000
agg_total_report() 6.33 seconds return value = 932849
join_2_cl() 0.43 seconds return value = 0
join_2() 3.13 seconds return value = 1000
sel_variable_select_low() 1.78 seconds return value = 0
sel_variable_select_high() 3.25 seconds return value = 25000
join_4_cl() 0.04 seconds return value = 0
proj_100() 19.83 seconds return value = 10000
join_4_ncl() 21.71 seconds return value = 1
proj_10pct() 12.29 seconds return value = 100000
sel_1_ncl() 0.37 seconds return value = 1
join_2_ncl() 6.70 seconds return value = 1
integrity_test() 2.27 seconds return value = 0
drop_updates_keys() 0.30 seconds return value = 0
bulk_save() 0.16 seconds return value = 0
bulk_modify() 296.61 seconds return value = 0
upd_append_duplicate() 0.42 seconds return value = 0
upd_remove_duplicate() 0.00 seconds return value = 0
upd_app_t_mid() 0.00 seconds return value = 1
upd_mod_t_mid() 0.30 seconds return value = 0
upd_del_t_mid() 0.30 seconds return value = 0
upd_app_t_end() 0.03 seconds return value = 1
upd_mod_t_end() 0.30 seconds return value = 0
upd_del_t_end() 0.30 seconds return value = 0
create_idx_updates_code_h() 12.21 seconds return value = 0
upd_app_t_mid() 0.07 seconds return value = 1
upd_mod_t_cod() 0.03 seconds return value = 0
upd_del_t_mid() 0.67 seconds return value = 0
create_idx_updates_int_bt() 3.63 seconds return value = 0
upd_app_t_mid() 0.16 seconds return value = 1
upd_mod_t_int() 0.05 seconds return value = 0
upd_del_t_mid() 0.94 seconds return value = 0
bulk_append() 0.44 seconds return value = 0
bulk_delete() 299.05 seconds return value = 0

"Single User Test" 806.78 seconds (0:13:26.78)

"osdb"
"Invoked: ./bin/osdb-pg --nomulti --logfile /var/lib/pgsql/2.5.52-nomulti-40M.log"
create_tables() 0.06 seconds return value = 0
load() 36.89 seconds return value = 0
create_idx_uniques_key_bt() 1043.04 seconds return value = 0
create_idx_updates_key_bt() 808.48 seconds return value = 0
create_idx_hundred_key_bt() 749.90 seconds return value = 0
create_idx_tenpct_key_bt() 641.84 seconds return value = 0
create_idx_tenpct_key_code_bt() 6.11 seconds return value = 0
create_idx_tiny_key_bt() 0.06 seconds return value = 0
create_idx_tenpct_int_bt() 3.35 seconds return value = 0
create_idx_tenpct_signed_bt() 3.01 seconds return value = 0
create_idx_uniques_code_h() 18.30 seconds return value = 0
create_idx_tenpct_double_bt() 4.91 seconds return value = 0
create_idx_updates_decim_bt() 7.18 seconds return value = 0
create_idx_tenpct_float_bt() 3.91 seconds return value = 0
create_idx_updates_int_bt() 3.93 seconds return value = 0
create_idx_tenpct_decim_bt() 6.39 seconds return value = 0
create_idx_hundred_code_h() 8.03 seconds return value = 0
create_idx_tenpct_name_h() 130.66 seconds return value = 0
create_idx_updates_code_h() 13.18 seconds return value = 0
create_idx_tenpct_code_h() 12.96 seconds return value = 0
create_idx_updates_double_bt() 4.45 seconds return value = 0
create_idx_hundred_foreign() 11.29 seconds return value = 0
populateDataBase() 3518.83 seconds return value = 0

"Logical database size 40MB"

sel_1_cl() 0.28 seconds return value = 1
join_3_cl() 0.11 seconds return value = 0
sel_100_ncl() 2.25 seconds return value = 100
table_scan() 1.80 seconds return value = 0
agg_func() 15.74 seconds return value = 100
agg_scal() 1.66 seconds return value = 0
sel_100_cl() 1.79 seconds return value = 100
join_3_ncl() 23.20 seconds return value = 1
sel_10pct_ncl() 3.57 seconds return value = 10000
agg_simple_report() 69.20 seconds return value = 990009900
agg_info_retrieval() 0.44 seconds return value = 0
agg_create_view() 0.39 seconds return value = 0
agg_subtotal_report() 10.68 seconds return value = 1000
agg_total_report() 9.65 seconds return value = 932849
join_2_cl() 0.11 seconds return value = 0
join_2() 3.57 seconds return value = 1000
sel_variable_select_low() 1.55 seconds return value = 0
sel_variable_select_high() 3.81 seconds return value = 25000
join_4_cl() 0.07 seconds return value = 0
proj_100() 21.08 seconds return value = 10000
join_4_ncl() 36.16 seconds return value = 1
proj_10pct() 13.07 seconds return value = 100000
sel_1_ncl() 0.18 seconds return value = 1
join_2_ncl() 12.18 seconds return value = 1
integrity_test() 9.07 seconds return value = 0
drop_updates_keys() 0.63 seconds return value = 0
bulk_save() 0.69 seconds return value = 0
bulk_modify() 1923.65 seconds return value = 0
upd_append_duplicate() 0.30 seconds return value = 0
upd_remove_duplicate() 0.09 seconds return value = 0
upd_app_t_mid() 0.03 seconds return value = 1
upd_mod_t_mid() 2.04 seconds return value = 0
upd_del_t_mid() 1.67 seconds return value = 0
upd_app_t_end() 0.08 seconds return value = 1
upd_mod_t_end() 1.67 seconds return value = 0
upd_del_t_end() 1.78 seconds return value = 0
create_idx_updates_code_h() 13.74 seconds return value = 0
upd_app_t_mid() 0.12 seconds return value = 1
upd_mod_t_cod() 0.03 seconds return value = 0
upd_del_t_mid() 2.29 seconds return value = 0
create_idx_updates_int_bt() 4.06 seconds return value = 0
upd_app_t_mid() 0.15 seconds return value = 1
upd_mod_t_int() 0.01 seconds return value = 0
upd_del_t_mid() 2.13 seconds return value = 0
bulk_append() 2.59 seconds return value = 0
bulk_delete() 1570.86 seconds return value = 0

"Single User Test" 3771.85 seconds (1:02:51.85)



Paolo Ciarrocchi

Dec 22, 2002, 12:20:05 PM

From: "Paolo Ciarrocchi" <ciarr...@linuxmail.org>
[...]
Just to add the result of one more test:

> OK, I'm back with the results of the osdb test against 2.4.19 and 2.5.52.
> Both kernels were booted with apm=off mem=40M;
> osdb ran with 40M of data.
> To summarize the results:
> 2.4.19 "Single User Test" 806.78 seconds (0:13:26.78)
> 2.5.52 "Single User Test" 3771.85 seconds (1:02:51.85)

2.4.19 (mem=24M) "Single User Test" 3371.98 seconds (0:56:11.98)

Ciao,

Paolo

Andrew Morton

Dec 23, 2002, 6:40:06 AM

Paolo Ciarrocchi wrote:
>
> From: Andrew Morton <ak...@digeo.com>
> > Paolo Ciarrocchi wrote:
> > >
> > > Hi all,
> > > I booted 2.5.52 with the following parameters:
> > > apm=off mem=32M (not sure about the amount; anyway, I can reproduce
> > > the problem for sure with 32M and 40M)
> > >
> > > Then I tried the osdb (www.osdb.org) benchmark with
> > > 40M of data.
> > >
> > > $./bin/osdb-pg --nomulti
> > >
> > > the result is that after a few seconds of running top I see the postmaster
> > > process in D state and a lot of iowait.
> >
> > What exactly _is_ the issue? The machine is achieving 25% CPU utilisation
> > in user code, 6-9% in system code. It is doing a lot of I/O, and is
> > getting work done.
>
> OK, I'm back with the results of the osdb test against 2.4.19 and 2.5.52.
> Both kernels were booted with apm=off mem=40M;
> osdb ran with 40M of data.
> To summarize the results:
> 2.4.19 "Single User Test" 806.78 seconds (0:13:26.78)
> 2.5.52 "Single User Test" 3771.85 seconds (1:02:51.85)
>

I could reproduce this.

What's happening is that when the test starts up it does a lot of writing
which causes 2.4 to do a bunch of swapout. So for the rest of the test
2.4 has an additional 8MB of cache available.

The problem of write activity causing swapout was fixed in 2.5. It
does not swap out at all in this test. But this time, we want it to.

End result: 2.4 has ~20 megabytes of cache for the test and 2.5 has ~12
megabytes. The working pagecache set is around 16 MB, so we're right on
the edge - it makes 2.5 run 10x slower. You can get most of this back by
boosting /proc/sys/vm/swappiness. I think the default of 60 is too unswappy
really. I run my machines at 80.
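
(A minimal sketch of inspecting and raising the tunable, assuming a root shell; the change
does not persist across reboots:)

    cat /proc/sys/vm/swappiness         # show the current value (0-100)
    echo 80 > /proc/sys/vm/swappiness   # make the VM more willing to reclaim mapped process pages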

Tuning swappiness doesn't get all the performance back. 2.5's memory
footprint is generally larger - we still need to work that down.

If this was a real database server I'd expect that memory would end
up getting swapped out anyway. But it doesn't happen in this test,
which is actually quite light in its I/O demands.

With mem=128m, 2.5 is 10% faster than 2.4. Some of this is due to
the enhancements to copy_*_user() for poorly-aligned copies on Intel
CPUs.

Paolo Ciarrocchi

Dec 23, 2002, 8:00:07 AM

From: Andrew Morton <ak...@digeo.com>
> Paolo Ciarrocchi wrote:
> >
> > From: Andrew Morton <ak...@digeo.com>
> > > Paolo Ciarrocchi wrote:
> > > >
> > > > Hi all,
> > > > I booted 2.5.52 with the following parameters:
> > > > apm=off mem=32M (not sure about the amount; anyway, I can reproduce
> > > > the problem for sure with 32M and 40M)
> > > >
> > > > Then I tried the osdb (www.osdb.org) benchmark with
> > > > 40M of data.
> > > >
> > > > $./bin/osdb-pg --nomulti
> > > >
> > > > the result is that after a few seconds of running top I see the postmaster
> > > > process in D state and a lot of iowait.
> > >
> > > What exactly _is_ the issue? The machine is achieving 25% CPU utilisation
> > > in user code, 6-9% in system code. It is doing a lot of I/O, and is
> > > getting work done.
> >
> > OK, I'm back with the results of the osdb test against 2.4.19 and 2.5.52.
> > Both kernels were booted with apm=off mem=40M;
> > osdb ran with 40M of data.
> > To summarize the results:
> > 2.4.19 "Single User Test" 806.78 seconds (0:13:26.78)
> > 2.5.52 "Single User Test" 3771.85 seconds (1:02:51.85)
> >
>
> I could reproduce this.

And this is good ;-)

> What's happening is that when the test starts up it does a lot of writing
> which causes 2.4 to do a bunch of swapout. So for the rest of the test
> 2.4 has an additional 8MB of cache available.
>
> The problem of write activity causing swapout was fixed in 2.5. It
> does not swap out at all in this test. But this time, we want it to.
>
> End result: 2.4 has ~20 megabytes of cache for the test and 2.5 has ~12
> megabytes. The working pagecache set is around 16 MB, so we're right on
> the edge - it makes 2.5 run 10x slower. You can get most of this back by
> boosting /proc/sys/vm/swappiness. I think the default of 60 is too unswappy
> really. I run my machines at 80.

Thank you for the clear explanation.
If you want, I can run the test with different values of /proc/sys/vm/swappiness
and post the results; let me know if it is a good idea or just a waste of time.


> Tuning swappiness doesn't get all the performance back. 2.5's memory
> footprint is generally larger - we still need to work that down.

Yes, it seems that 2.5 doesn't fit very well on a box with low memory.

> If this was a real database server I'd expect that memory would end
> up getting swapped out anyway. But it doesn't happen in this test,
> which is actually quite light in its I/O demands.

Indeed! I thought that booting the box with mem=40M was enough to
force the machine to swap. Is this test good for "simulating" the
workload of a _real_ database?



> With mem=128m, 2.5 is 10% faster than 2.4. Some of this is due to
> the enhancements to copy_*_user() for poorly-aligned copies on Intel
> CPUs.

Oh yes, I see it as well.

Thanks,
Paolo


Denis Vlasenko

Dec 25, 2002, 3:40:05 AM

> > OK, I'm back with the results of the osdb test against 2.4.19 and
> > 2.5.52. Both kernels were booted with apm=off mem=40M;
> > osdb ran with 40M of data.
> > To summarize the results:
> > 2.4.19 "Single User Test" 806.78 seconds (0:13:26.78)
> > 2.5.52 "Single User Test" 3771.85 seconds (1:02:51.85)
>
> I could reproduce this.
>
> What's happening is that when the test starts up it does a lot of
> writing which causes 2.4 to do a bunch of swapout. So for the rest
> of the test 2.4 has an additional 8MB of cache available.
>
> The problem of write activity causing swapout was fixed in 2.5. It
> does not swap out at all in this test. But this time, we want it to.
>
> End result: 2.4 has ~20 megabytes of cache for the test and 2.5 has
> ~12 megabytes. The working pagecache set is around 16 MB, so we're
> right on the edge - it makes 2.5 run 10x slower. You can get most of
> this back by boosting /proc/sys/vm/swappiness. I think the default
> of 60 is too unswappy really. I run my machines at 80.

The swap problem can be generalized. Let's consider these real-world
example pairs:

* cache/RAM,
* local node RAM/remote node RAM (in a NUMA box)
* RAM/disk
* local disk/networked storage
* RAM/fast solid storage "disk" (USB Flash or IDE built from DRAM)
(etc)

We have faster storage and slower storage.
We want to swap unused data out of the faster one to make room
for needed data from the slower one. The larger the speed gap
between the two, the more we feel the pain when our swap
algorithms go wrong.

Note that the speed ratios vary widely:
from 1/5 to 1/1000000.

I wonder whether swappiness should be related to the speed
ratio between storage types? How exactly?

Having to play with tunables is not an ideal way to go;
I hate to think that Andrew's (and others') work could go down
the drain at the very next technology jump.
--
vda

Andrew Morton

Dec 25, 2002, 3:50:04 AM

Denis Vlasenko wrote:
>
> ...

> I wonder whether swappiness should be related to the speed
> ratio between storage types? How exactly?

It doesn't matter. Assuming the latency of the swap device is the
same as that of the filesystem device, it cancels out. We're simply trying
to minimise the total amount of I/O.



> Having to play with tunables is not an ideal way to go,
> I hate to think that Andrew's (and others) work can go down
> the drain at the very next technology jump.

It's constant-access-time mass storage technology which will toss 20
years of development down the gurgler. Top to bottom, everything is
designed to support the locality=bandwidth characteristics of spinning
disks.

Paolo Ciarrocchi

Dec 25, 2002, 8:30:11 PM

Hi Andrew/Rik/Con/all

Andrew, I promised you I would run a few tests
using osdb (www.osdb.org, with 40M of data) against both
2.4.19 and 2.5.52, booting the kernels with the
mem=XXM parameter.

I also played with the /proc/sys/vm/swappiness
parameter: I ran all the tests with the standard
swappiness value (60), with 80, and with 100.
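
(A sweep over swappiness values can be scripted; a rough sketch that reuses the osdb-pg
invocation from earlier in the thread, with purely illustrative log-file names:)

    for s in 60 80 100; do
        echo $s > /proc/sys/vm/swappiness
        ./bin/osdb-pg --nomulti --logfile /var/lib/pgsql/2.5.52-nomulti-40M-sw$s.log
    done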

100 means the 2.4 behaviour, doesn't it?

Looking at the results, it seems that the "standard"
value is too low; probably 80 is the best one.
What do you think?

There is not a big difference in the time results
between 80 and 100, but looking at the top output
while I was running the tests I saw a big difference
in swap usage.
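
(Swap usage can also be snapshotted from /proc/meminfo while a run is in progress; a
minimal sketch:)

    while true; do
        grep -E 'SwapTotal|SwapFree' /proc/meminfo
        sleep 60
    done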

Con, could you please run the contest benchmark against
2.5.52 or .53, playing with the swappiness parameter?

Below are the results of my tests; please let me know if
you want me to run more tests or if you need more information.

Ciao,
Paolo

-- By Memory Size --
Kernel  Memory(MB)  Swappiness  Time
2.4.19 24 x 3371.98 seconds (0:56:11.98)
2.5.52 24 60 4585.03 seconds (1:16:25.03)
2.5.52 24 80 4285.98 seconds (1:11:25.98)
2.5.52 24 100 3633.64 seconds (1:00:33.64)

2.4.19 40 x 809.39 seconds (0:13:29.39)
2.5.52 40 60 3771.85 seconds (1:02:51.85)
2.5.52 40 80 3342.82 seconds (0:55:42.82)
2.5.52 40 100 855.22 seconds (0:14:15.22)

2.4.19 64 x 796.03 seconds (0:13:16.03)
2.5.52 64 60 840.41 seconds (0:14:00.41)
2.5.52 64 80 828.59 seconds (0:13:48.59)
2.5.52 64 100 833.92 seconds (0:13:53.92)

2.4.19 80 x 790.80 seconds (0:13:10.80)
2.5.52 80 60 788.65 seconds (0:13:08.65)
2.5.52 80 80 790.54 seconds (0:13:10.54)
2.5.52 80 100 793.79 seconds (0:13:13.79)

2.5.52 96 60 779.54 seconds (0:12:59.54)
2.5.52 96 80 782.86 seconds (0:13:02.86)
2.5.52 96 100 778.81 seconds (0:12:58.81)

2.4.19 all x 778.65 seconds (0:12:58.65)
2.5.52 all 60 768.98 seconds (0:12:48.98)
2.5.52 all 80 770.43 seconds (0:12:50.43)
2.5.52 all 100 771.76 seconds (0:12:51.76)

-- By kernel version --
2.4.19 24 x 3371.98 seconds (0:56:11.98)
2.4.19 40 x 809.39 seconds (0:13:29.39)
2.4.19 64 x 796.03 seconds (0:13:16.03)
2.4.19 80 x 790.80 seconds (0:13:10.80)
2.4.19 all x 778.65 seconds (0:12:58.65)

2.5.52 24 60 4585.03 seconds (1:16:25.03)
2.5.52 40 60 3771.85 seconds (1:02:51.85)
2.5.52 64 60 840.41 seconds (0:14:00.41)
2.5.52 80 60 788.65 seconds (0:13:08.65)
2.5.52 96 60 779.54 seconds (0:12:59.54)
2.5.52 all 60 768.98 seconds (0:12:48.98)

2.5.52 24 80 4285.98 seconds (1:11:25.98)
2.5.52 40 80 3342.82 seconds (0:55:42.82)
2.5.52 64 80 828.59 seconds (0:13:48.59)
2.5.52 80 80 790.54 seconds (0:13:10.54)
2.5.52 96 80 782.86 seconds (0:13:02.86)
2.5.52 all 80 770.43 seconds (0:12:50.43)

2.5.52 24 100 3633.64 seconds (1:00:33.64)
2.5.52 40 100 855.22 seconds (0:14:15.22)
2.5.52 64 100 833.92 seconds (0:13:53.92)
2.5.52 80 100 793.79 seconds (0:13:13.79)
2.5.52 96 100 778.81 seconds (0:12:58.81)
2.5.52 all 100 771.76 seconds (0:12:51.76)


Andrew Morton

Dec 26, 2002, 2:50:05 AM

Paolo Ciarrocchi wrote:
>
> Hi Andrew/Rik/Con/all
>
> Andrew, I promised you I would run a few tests
> using osdb (www.osdb.org, with 40M of data) against both
> 2.4.19 and 2.5.52, booting the kernels with the
> mem=XXM parameter.
>
> I also played with the /proc/sys/vm/swappiness
> parameter: I ran all the tests with the standard
> swappiness value (60), with 80, and with 100.
>
> 100 means the 2.4 behaviour, doesn't it?

Not really. swappiness=100 is strict LRU, treating pagecache and
mapped-into-process-memory pages identically. Smaller values will
make the kernel prefer to preserve mapped-into-process-memory.

> Looking at the results, it seems that the "standard"
> value is too low; probably 80 is the best one.
> What do you think?

I would agree with that.

> ...


>
> 2.4.19 all x 778.65 seconds (0:12:58.65)
> 2.5.52 all 60 768.98 seconds (0:12:48.98)
> 2.5.52 all 80 770.43 seconds (0:12:50.43)
> 2.5.52 all 100 771.76 seconds (0:12:51.76)

Only 1% difference. On my 4xPIII with mem=128M, 2.4.20-pre2 took
1080.55 seconds and 2.5.52-mm3 took 991.03. That's 9% faster, and
from the profile:

c010a858 system_call 192 4.3636
c011e518 current_kernel_time 201 3.3500
c012cdbc __generic_file_aio_read 214 0.4652
c012bba0 kallsyms_lookup 219 0.8295
c012ccec file_read_actor 230 1.1058
c0145abc fget 318 4.1842
c01d3ed4 radix_tree_lookup 384 3.8400
c0144be0 vfs_read 409 1.3279
c01315f4 check_poison_obj 695 7.8977
c012c964 do_generic_mapping_read 1007 1.1988
c01d7ae0 __copy_user_intel 34130 213.3125
c0108a58 poll_idle 299231 3562.2738

it appears that this benefit came from the special usercopy code.
What sort of CPU are you using?
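
(The table above looks like kernel-profiler output in readprofile -v format; a rough sketch
of collecting such a profile, assuming the kernel was booted with the profile=2 option and
that readprofile from util-linux is installed:)

    readprofile -r                                     # reset the in-kernel tick counters
    ./bin/osdb-pg --nomulti                            # run the workload of interest
    readprofile -v -m /boot/System.map | sort -n -k3   # address, symbol, ticks, normalized load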

Paolo Ciarrocchi

Dec 26, 2002, 4:30:11 AM

From: Andrew Morton <ak...@digeo.com>

It is a PIII@800.

Ciao,
Paolo


Andrew Morton

Dec 26, 2002, 4:40:07 AM

Paolo Ciarrocchi wrote:
>
> > it appears that this benefit came from the special usercopy code.
> > What sort of CPU are you using?
>
> It is a PIII@800.

hm, don't know. I built the latest postgres locally. Perhaps the
alignment of some application buffer is different.

Paolo Ciarrocchi

Dec 26, 2002, 4:50:08 AM

From: Andrew Morton <ak...@digeo.com>

> Paolo Ciarrocchi wrote:
> >
> > > it appears that this benefit came from the special usercopy code.
> > > What sort of CPU are you using?
> >
> > It is a PIII@800.
>
> hm, don't know. I built the latest postgres locally. Perhaps the
> alignment of some application buffer is different.

I don't know.
I built the osdb test myself, but postgres is just the one installed
from the standard Mandrake 9 packages (the i586 RPM).

Maybe we are using different versions of postgres...
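
(A quick way to compare, assuming psql is in the PATH and the Mandrake package is named
postgresql:)

    psql --version       # release reported by the PostgreSQL client itself
    rpm -q postgresql    # exact package revision installed from the distribution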

Anyway, my tests show that there is a lack of performance
in 2.5.* when the kernel runs on a machine with low memory;
any hint?

And probably the "standard" swappiness value is not
optimal; I'd like to see a few tests with the contest
tool ;-)


Ciao,
Paolo