I recently got a system with the following specs:
* Intel Quad Core 2.5 GHz with 1333 MHz FSB, 6MB L2 cache
* 4GB DDR2 memory
* 320GB Seagate Barracuda SATA 7200rpm hard disk
* Geforce 8400gs graphics card (fanless) (I installed the most recent
driver)
I installed CentOS 5.2 (kernel 2.6.18-92.1.17.el5) on this machine. I
expected the system to be faster than my other machine (Intel
P4 CPU 2.53 GHz, 1GB RAM), assuming it makes effective use of 4 cores.
However, the system is slow while running any application. When I
check the processor usage in the system monitor, it shows only one
core being used most of the time. Load on other cores is 1-3% during
this time.
Any idea what I should check on my system, and how I can make it
faster? It takes a long time even to boot up and to execute simple
commands in the terminal.
Sincerely,
Kumar Vijay Mishra.
> However, the system is slow while running any application. When I
> check the processor usage in the system monitor, it shows only one
> core being used most of the time. Load on other cores is 1-3% during
> this time.
If you are only using one application, then multiple cores won't help
much. Most applications need to be rewritten to use multiple
cores (e.g. Photoshop).
Run 'cat /proc/cpuinfo' and count the number of cores seen by the OS. From the
above, it looks as though you've installed a single-core version of
the kernel. Traditionally, SMP (multi-core) versions of the kernel are
tagged with 'smp', as in: "2.6.9-67.0.22.ELsmp".
--
Steve Wampler -- swam...@noao.edu
The gods that smiled on your birth are now laughing out loud.
Never mind - a reread of your post shows you know it is running SMP...
To follow up a bit more. The command:
-> grep processor /proc/cpuinfo
processor : 0
processor : 1
processor : 2
processor : 3
processor : 4
processor : 5
processor : 6
processor : 7
will show you the cores seen by the kernel. And on CentOS 5, the command
uname -a
will tell you whether the kernel is multi-core aware, e.g.:
->uname -a
Linux presto.tuc.noao.edu 2.6.18-92.1.13.el5 #1 SMP Wed Sep 24 19:32:05 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux
>
> viz wrote:
> > I installed CentOS 5.2 (kernel 2.6.18-92.1.17.el5) on this machine. I
> > expected the system to be faster compared to my another machine (Intel
> > P4 CPU 2.53 GHz, 1GB RAM) assuming it makes effective use of 4 cores.
> >
> > However, the system is slow while running any application. When I
> > check the processor usage in the system monitor, it shows only one
> > core being used most of the time. Load on other cores is 1-3% during
> > this time.
>
> Run 'cat /proc/cpuinfo' and count the number of cores seen by the OS. From the
> above, it looks as though you've installed a single-core version of
> the kernel. Traditionally, SMP (multi-core) versions of the kernel are
> tagged with 'smp', as in: "2.6.9-67.0.22.ELsmp".
Note: Most (all?) common desktop applications are single-threaded, and
most would NOT benefit from being rewritten to be multi-threaded. A
multi-core / multi-processor system is a waste of effort for a *Desktop*
machine, unless the machine is running specialized applications
that really make use of SMP, such as Matlab or other
number-crunching or similar scientific applications.
The only other point of a desktop system with multiple cores / multiple
processors would be if one were running something like multiple
instances of SETI-at-home or something similar. OR if the
desktop system is also being used as a server of some sort (mail, file,
IntraNet, DHCP, proxy, etc.), such as for a small home [office] LAN.
Also: the amount of RAM and the speed of the disk can have a large
effect on perceived computer speed. Often these factors are more
significant than most people realize. Throwing more cores/processors
into a memory-starved system (or a system with a slow disk) will NOT
make it faster.
>
--
Robert Heller -- Get the Deepwoods Software FireFox Toolbar!
Deepwoods Software -- Linux Installation and Administration
http://www.deepsoft.com/ -- Web Hosting, with CGI and Database
hel...@deepsoft.com -- Contract Programming: C/C++, Tcl/Tk
I do not know if mine is a "desktop" or not; it is on the top of my desk.
One application I run is the PostgreSQL DBMS. While most of its processing uses
a single processor, its logger, its memory cleanup, and such do use the
other processors, so there is a benefit to multiple cores.
When I ran IBM's DB2 DBMS, it ran a disk-write process for each hard
drive, so that writes could go in parallel with the other work.
>
> The only other point of a desktop system with multiple cores / multiple
> processors would be if one was running something like multiple instances
> of SETI-at-home or something thing similar. OR if the desktop system is
> also being used as a server of some sort (mail, file, IntraNet, DHCP,
> proxy, etc.) such as for a small home [office] LAN.
Depending on what the machine is doing, you can get some benefit from more
than one core for ordinary desktop usage. Even if the user's process takes
100% of a CPU, the system and any other daemons can run on another.
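You can see that separation directly by pinning work to particular cores; a
small sketch (assumes the util-linux 'taskset' command is available):

```shell
# Run a command restricted to CPU 0 only (util-linux 'taskset').
taskset -c 0 echo "this ran with affinity restricted to CPU 0"
# Show the current shell's allowed-CPU affinity mask.
taskset -p $$
```

With one job pinned to core 0, the system monitor will show other daemons
landing on the remaining cores.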
I happen to run 4 BOINC processes to soak up otherwise idle processor time.
>
> Also: the amount of RAM and the speed of the disk can have a large effect
> on perceived computer speed. Often these factors are more significant
> than most people realize. Throwing more cores/processors into a
> memory-starved system (or a system with a slow disk) will NOT make it faster.
>
Absolutely. That is why I put 8 GBytes in this machine. Right now I am down
to four due to a memory problem causing an error every few days: corrected
by ECC, but I don't like it.
--
.~. Jean-David Beyer Registered Linux User 85642.
/V\ PGP-Key: 9A2FC99A Registered Machine 241939.
/( )\ Shrewsbury, New Jersey http://counter.li.org
^^-^^ 13:15:01 up 4 days, 22:59, 4 users, load average: 4.09, 4.29, 4.50
It is not a 'pure' desktop: it is a desktop that is also a server. Here
having multiple cores/processors can be helpful. A 'pure' desktop would
be a machine that only runs desktop applications, such as word processing,
an E-Mail client, and so on. Such a machine will see little or no
benefit from multiple cores/processors. As far as I am aware, putting a
Core 2 Duo processor in a desktop or laptop that will only be used for
these sorts of apps is *pure* hype.
>
> When I ran IBM's DB2 dbms, they ran a disk write process for each hard
> drive, so that that could go in parallel with the other stuff.
Yes, again, we are talking about a server system.
> >
> > The only other point of a desktop system with multiple cores / multiple
> > processors would be if one were running something like multiple instances
> > of SETI-at-home or something similar. OR if the desktop system is
> > also being used as a server of some sort (mail, file, IntraNet, DHCP,
> > proxy, etc.), such as for a small home [office] LAN.
>
> Depending on what the machine is doing, you can get some benefit from more
> than one core for ordinary desktop usage. Even if the user's process takes
> 100% of a CPU, the system and any other daemons can run on another.
Most of the time these daemons are idle (unless the desktop is doing
extra duty as some kind of server). It is rare for a typical desktop
application to take 100% of a CPU for more than a few seconds, with a
modern processor.
>
> I happen to run 4 BOINC processes to soak up otherwise idle processor time.
> >
> > Also: the amount of RAM and the speed of the disk can have a large effect
> > on perceived computer speed. Often these factors are more significant
> > than most people realize. Throwing more cores/processors into a
> > memory-starved system (or a system with a slow disk) will NOT make it faster.
> >
> Absolutely. That is why I put 8 GBytes in this machine. Right now I am down
> to four due to a memory problem causing an error every few days: corrected
> by ECC, but I don't like it.
>
>
--
> I do not know if mine is a "desktop" or not; it is on the top of my desk.
> One application I run is the PostgreSQL DBMS. While most of its processing uses
> a single processor, its logger, its memory cleanup, and such do use the
> other processors, so there is a benefit to multiple cores.
What is the load average?
If the load average over a long period of time is 4, then a 4-core
machine might be useful, as it says 4+ jobs are runnable.
Likewise, a load average of 8+ might find an 8-core machine useful.
But if it's 1, then your system isn't loaded enough to benefit from
multiple cores.
[snip]
> Note: Most (all?) common desktop applications are single-threaded, and
> most would NOT benefit from being rewritten to be multi-threaded.
Not all; anything which uses xine-lib, for example, is multi-threaded though
the front end code may well not itself spawn any threads.
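One quick way to see whether a given application really is multi-threaded
is to count its kernel threads; a Linux-specific sketch (using the shell's
own PID as a stand-in for the player's):

```shell
# Each process reports its thread count in /proc/<pid>/status.
awk '/^Threads:/ {print "threads:", $2}' /proc/$$/status
# For a media player, substitute its PID, e.g. /proc/$(pidof xine)/status.
```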
[snip]
--
| Darren Salt | linux or ds at | nr. Ashington, | Toon
| RISC OS, Linux | youmustbejoking,demon,co,uk | Northumberland | Army
| + Output *more* particulate pollutants. BUFFER AGAINST GLOBAL WARMING.
I'm dangerous when I know what I'm doing.
Thanks a lot for your replies.
@ Maxwell Lol: You were right. I tried running multiple applications
and did see that the other cores received some more load. However, the
system is still slow.
@ Steve Wampler: I was using the SMP kernel, as you rightly decoded. The
output of cat /proc/cpuinfo shows all 4 cores. I can also see all the
cores in the system monitor.
@ Robert Heller: This machine is intended for running software like
MATLAB, emulators running in virtual machines, and FPGA synthesis and
compilation tools. Some of these applications do make use of all 4
cores (as I can see in the system monitor), but the system is still
slow.
I tried running hdparm and got some very dismal results:
$ sudo /sbin/hdparm -Tt /dev/hda5
/dev/hda5:
Timing cached reads: 7652 MB in 2.00 seconds = 3828.05 MB/sec
Timing buffered disk reads: 12 MB in 3.07 seconds = 3.90 MB/sec
hdparm won't let me turn DMA on. However, setting the IO_support
flag to 3 does double the performance:
$ /sbin/hdparm -X66 -d1 -u1 -m16 -c3 /dev/hda
/dev/hda:
setting 32-bit IO_support flag to 3
setting multcount to 16
setting unmaskirq to 1 (on)
setting using_dma to 1 (on)
HDIO_SET_DMA failed: Operation not permitted
setting xfermode to 66 (UltraDMA mode2)
multcount = 16 (on)
IO_support = 3 (32-bit w/sync)
unmaskirq = 1 (on)
using_dma = 0 (off)
$ /sbin/hdparm -Tt /dev/hda
/dev/hda:
Timing cached reads: 7664 MB in 2.00 seconds = 3833.37 MB/sec
Timing buffered disk reads: 24 MB in 3.17 seconds = 7.56 MB/sec
------------
My other P4 machine (which has the DMA flag on in hdparm) shows a buffered
disk read performance of 34 MB/sec.
I had the 2.6.18-92.1.17.el5PAE kernel on CentOS 5.2. On suggestions from
a few other users, I upgraded it to 2.6.22.19 (I can't use kernels
later than this, as some of my applications won't support them).
However, the hdparm results were the same.
Any pointers on how I can improve the performance? Any help would
be greatly appreciated.
Sincerely,
Kumar Vijay Mishra.
> The hdparm won't let me set the DMA on. However setting IO_support
Check the BIOS setup program to see if any of the CMOS settings are forcing
PIO mode.
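The kernel side can be checked too; a sketch that only applies to kernels
using the old IDE driver (the ones that name the disk hda, as here):

```shell
# On old-IDE kernels, per-drive settings are exposed under /proc/ide.
if [ -r /proc/ide/hda/settings ]; then
    grep -E 'using_dma|io_32bit' /proc/ide/hda/settings
else
    # Kernels using libata name the disk sda and have no /proc/ide.
    echo "no /proc/ide/hda (libata driver in use?)"
fi
```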
Regards, Dave Hodgins
--
Change nomail.afraid.org to ody.ca to reply by email.
(nomail.afraid.org has been set up specifically for
use in usenet. Feel free to use it yourself.)
Thanks David. I checked the CMOS settings but couldn't find any such
property. Some of the HDD related properties that I found are as
follows:
HDD S.M.A.R.T. Capability Enabled
SATA RAID/AHCI Mode Disabled
SATA Port0-3 Native Mode Disabled
Onboard IDE Controller Enabled
-- There wasn't any PIO Mode property or anything similar.
The speed of the hard disk really should be better than this. The same
problem happened to me before with an HP system and Fedora Core 6. At
first I thought my hard disk wasn't working properly and considered
replacing it, but changing it didn't help. It was a driver issue: when
I updated to Fedora 9 the disk worked properly and has been fast ever
since. Try some commands; I think they will give you more information
about the issue:
dmesg | grep sda for example
lspci -v
As I remember, the driver issue wasn't related to the hard disk
itself; it was related to the board's IDE subsystem.
You may also try smartmontools; it should give you more information about
the status of the hard disk.
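For example (assumes the smartmontools package is installed; the device
name is the one from this thread, and the query usually needs root):

```shell
# Print the drive's overall SMART health assessment, if available.
if command -v smartctl >/dev/null 2>&1; then
    smartctl -H /dev/sda    # typically needs root; reports PASSED/FAILED
else
    echo "smartctl not found; install the smartmontools package"
fi
```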
Regards,
Finally, I was able to resolve the issue :-)
1. In the BIOS, I enabled "SATA Port0-3 Native Mode" while keeping "SATA
RAID/AHCI Mode" disabled. When I enabled both of them, or only the
latter, it led to a kernel panic. After this change, the performance was
very fast (and hdparm showed "sda" instead of "hda").
2. Before this, I tried upgrading/building/compiling my kernel to
2.6.23, 2.6.23.7 and 2.6.24, in that order, and checking the hard-disk
performance. It didn't improve.
3. I also tried changing BIOS settings (i.e. enabling both "SATA RAID/
AHCI Mode" and "SATA Port0-3 Native Mode", or only the former) when I
upgraded to these kernels. However, I didn't try enabling only "SATA
Port0-3 Native Mode" (that idea occurred to me later, when I reverted
to my original kernel: 2.6.18).
4. So I reverted back to my original kernel, enabled Native Mode
for SATA, and got the performance that I wanted. Here are the results:
$ uname -a
Linux abc.def.ghi.jkl 2.6.18-92.1.18.el5PAE #1 SMP i686 i686 i386 GNU/
Linux
$ sudo /sbin/hdparm /dev/sda
/dev/sda:
IO_support = 0 (default 16-bit)
readonly = 0 (off)
readahead = 256 (on)
geometry = 38913/255/63, sectors = 625140335, start = 0
$ sudo /sbin/hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 7872 MB in 2.00 seconds = 3938.86 MB/sec
Timing buffered disk reads: 226 MB in 3.02 seconds = 74.90 MB/sec
$ sudo /sbin/lspci
00:00.0 Host bridge: Intel Corporation Eaglelake DRAM Controller (rev
02)
00:01.0 PCI bridge: Intel Corporation Eaglelake PCI Express Root Port
(rev 02)
00:1a.0 USB Controller: Intel Corporation ICH10 USB UHCI Controller #4
00:1a.1 USB Controller: Intel Corporation ICH10 USB UHCI Controller #5
00:1a.2 USB Controller: Intel Corporation ICH10 USB UHCI Controller #6
00:1a.7 USB Controller: Intel Corporation ICH10 USB2 EHCI Controller
#2
00:1b.0 Audio device: Intel Corporation ICH10 HD Audio Controller
00:1c.0 PCI bridge: Intel Corporation ICH10 PCI Express Port 1
00:1c.3 PCI bridge: Intel Corporation ICH10 PCI Express Port 4
00:1c.4 PCI bridge: Intel Corporation ICH10 PCI Express Port 5
00:1c.5 PCI bridge: Intel Corporation ICH10 PCI Express Port 6
00:1d.0 USB Controller: Intel Corporation ICH10 USB UHCI Controller #1
00:1d.1 USB Controller: Intel Corporation ICH10 USB UHCI Controller #2
00:1d.2 USB Controller: Intel Corporation ICH10 USB UHCI Controller #3
00:1d.7 USB Controller: Intel Corporation ICH10 USB2 EHCI Controller
#1
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev 90)
00:1f.0 ISA bridge: Intel Corporation ICH10 LPC Interface Controller
00:1f.2 IDE interface: Intel Corporation ICH10 4 port SATA IDE
Controller
00:1f.3 SMBus: Intel Corporation ICH10 SMBus Controller
00:1f.5 IDE interface: Intel Corporation ICH10 2 port SATA IDE
Controller
01:00.0 VGA compatible controller: nVidia Corporation Unknown device
06e4 (rev a1)
03:00.0 IDE interface: JMicron Technologies, Inc. JMB368 IDE
controller
04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd.
RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)
05:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd.
RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02)
06:07.0 FireWire (IEEE 1394): Texas Instruments TSB43AB23
IEEE-1394a-2000 Controller (PHY/Link)
$ sudo /usr/sbin/lshw -short
H/W path Device Class Description
=====================================================
system EP45-DS3R
/0 bus EP45-DS3R
/0/0 memory 128KiB BIOS
/0/4 processor Intel(R) Core(TM)2 Quad
CPU Q9300
/0/4/a memory 64KiB L1 cache
/0/4/b memory 3MiB L2 cache
/0/4/1.1 processor Logical CPU
/0/4/1.2 processor Logical CPU
/0/4/1.3 processor Logical CPU
/0/4/1.4 processor Logical CPU
/0/19 memory 4GiB System Memory
/0/19/0 memory 2GiB DIMM 800 MHz (1.2 ns)
/0/19/1 memory DIMM [empty]
/0/19/2 memory 2GiB DIMM 800 MHz (1.2 ns)
/0/19/3 memory DIMM [empty]
/0/1 processor
/0/1/1.1 processor Logical CPU
/0/1/1.2 processor Logical CPU
/0/1/1.3 processor Logical CPU
/0/1/1.4 processor Logical CPU
/0/2 processor
/0/2/1.1 processor Logical CPU
/0/2/1.2 processor Logical CPU
/0/2/1.3 processor Logical CPU
/0/2/1.4 processor Logical CPU
/0/3 processor
/0/3/1.1 processor Logical CPU
/0/3/1.2 processor Logical CPU
/0/3/1.3 processor Logical CPU
/0/3/1.4 processor Logical CPU
/0/100 bridge Eaglelake DRAM Controller
/0/100/1 bridge Eaglelake PCI Express Root
Port
/0/100/1/0 display nVidia Corporation
/0/100/1a bus ICH10 USB UHCI Controller #4
/0/100/1a/1 usb3 bus UHCI Host Controller
/0/100/1a.1 bus ICH10 USB UHCI Controller #5
/0/100/1a.1/1 usb4 bus UHCI Host Controller
/0/100/1a.2 bus ICH10 USB UHCI Controller #6
/0/100/1a.2/1 usb5 bus UHCI Host Controller
/0/100/1a.7 bus ICH10 USB2 EHCI Controller
#2
/0/100/1a.7/1 usb1 bus EHCI Host Controller
/0/100/1b multimedia ICH10 HD Audio Controller
/0/100/1c bridge ICH10 PCI Express Port 1
/0/100/1c.3 bridge ICH10 PCI Express Port 4
/0/100/1c.3/0 storage JMB368 IDE controller
/0/100/1c.4 bridge ICH10 PCI Express Port 5
/0/100/1c.4/0 eth0 network RTL8111/8168B PCI Express
Gigabit Ethe
/0/100/1c.5 bridge ICH10 PCI Express Port 6
/0/100/1c.5/0 eth1 network RTL8111/8168B PCI Express
Gigabit Ethe
/0/100/1d bus ICH10 USB UHCI Controller #1
/0/100/1d/1 usb6 bus UHCI Host Controller
/0/100/1d.1 bus ICH10 USB UHCI Controller #2
/0/100/1d.1/1 usb7 bus UHCI Host Controller
/0/100/1d.2 bus ICH10 USB UHCI Controller #3
/0/100/1d.2/1 usb8 bus UHCI Host Controller
/0/100/1d.2/1/1 input USB Keyboard
/0/100/1d.2/1/2 input Microsoft 3-Button Mouse
with IntelliE
/0/100/1d.7 bus ICH10 USB2 EHCI Controller
#1
/0/100/1d.7/1 usb2 bus EHCI Host Controller
/0/100/1e bridge 82801 PCI Bridge
/0/100/1e/7 bus TSB43AB23 IEEE-1394a-2000
Controller (
/0/100/1f bridge ICH10 LPC Interface
Controller
/0/100/1f.2 scsi0 storage ICH10 4 port SATA IDE
Controller
/0/100/1f.2/0 /dev/sda disk 320GB ST3320620AS
/0/100/1f.2/0/1 /dev/sda1 volume 101MiB EXT3 volume
/0/100/1f.2/0/2 /dev/sda2 volume 19GiB EXT3 volume
/0/100/1f.2/0/3 /dev/sda3 volume 4094MiB Linux swap volume
/0/100/1f.2/0/4 /dev/sda4 volume 274GiB Extended partition
/0/100/1f.2/0/4/5 /dev/sda5 volume 274GiB Linux filesystem
partition
/0/100/1f.2/1 disk DVD RW AD-7200S
/0/100/1f.3 bus ICH10 SMBus Controller
/0/100/1f.5 storage ICH10 2 port SATA IDE
Controller
$ sudo /sbin/lsmod
Module Size Used by
ppdev 12613 0
autofs4 24517 2
nfs 228801 2
lockd 61129 2 nfs
fscache 20321 1 nfs
nfs_acl 7617 1 nfs
vmnet 54996 13
vsock 28992 0
vmci 57952 1 vsock
vmmon 74304 0
sunrpc 144893 4 nfs,lockd,nfs_acl
ip_conntrack_netbios_ns 6977 0
ipt_REJECT 9537 1
xt_state 6209 2
ip_conntrack 53025 2 ip_conntrack_netbios_ns,xt_state
nfnetlink 10713 1 ip_conntrack
xt_tcpudp 7105 4
iptable_filter 7105 1
ip_tables 17029 1 iptable_filter
x_tables 17349 4
ipt_REJECT,xt_state,xt_tcpudp,ip_tables
cpufreq_ondemand 12493 4
acpi_cpufreq 12485 1
dm_mirror 29253 0
dm_multipath 22089 0
dm_mod 61661 2 dm_mirror,dm_multipath
video 21193 0
sbs 18533 0
backlight 10049 1 video
i2c_ec 9025 1 sbs
button 10705 0
battery 13637 0
asus_acpi 19289 0
ac 9157 0
ipv6 258145 27
xfrm_nalgo 13765 1 ipv6
crypto_api 11969 1 xfrm_nalgo
lp 15849 0
snd_hda_intel 24793 1
snd_hda_codec 210881 1 snd_hda_intel
snd_seq_dummy 7877 0
snd_seq_oss 32577 0
snd_seq_midi_event 11073 1 snd_seq_oss
snd_seq 49585 5
snd_seq_dummy,snd_seq_oss,snd_seq_midi_event
snd_seq_device 11725 3 snd_seq_dummy,snd_seq_oss,snd_seq
snd_pcm_oss 42945 0
snd_mixer_oss 19009 1 snd_pcm_oss
snd_pcm 72133 3
snd_hda_intel,snd_hda_codec,snd_pcm_oss
snd_timer 24517 2 snd_seq,snd_pcm
snd 52421 11
snd_hda_intel,snd_hda_codec,snd_seq_oss,snd_seq,snd_seq_device,snd_pcm_oss,snd_mixer_oss,snd_pcm,snd_timer
sr_mod 19941 0
soundcore 11553 1 snd
nvidia 6899152 26
cdrom 36705 1 sr_mod
parport_pc 29157 1
snd_page_alloc 14281 2 snd_hda_intel,snd_pcm
i2c_i801 11597 0
parport 37513 3 ppdev,lp,parport_pc
i2c_core 23745 3 i2c_ec,nvidia,i2c_i801
r8169 33097 0
sg 36061 0
pcspkr 7105 0
ata_piix 22341 4
libata 144125 1 ata_piix
sd_mod 24897 5
scsi_mod 134605 4 sr_mod,sg,libata,sd_mod
ext3 123593 3
jbd 56553 1 ext3
uhci_hcd 25549 0
ohci_hcd 23389 0
ehci_hcd 33613 0
------------------------------------------------
Thanks to all who helped.
Sincerely,
Kumar Vijay Mishra.
Thanks habibielwa7id. Even though I saw the HD performance go up
roughly 19x (from 3.90 to 74.90 MB/sec), I don't know if it can be
improved further. The datasheet for the hard disk promises a maximum
of 78 MB/s (the "maximum sustained data transfer rate" property in the
datasheet), while hdparm shows around 75 MB/s. Moreover, I got these
figures while retaining my original 2.6.18 kernel. Would the
performance be even better if I upgraded the kernel (though I am
satisfied with the current figures)?
Here are the outputs of the commands you suggested (for lspci, see my
previous post):
$ dmesg |grep sda
SCSI device sda: 625140335 512-byte hdwr sectors (320072 MB)
sda: Write Protect is off
sda: Mode Sense: 00 3a 00 00
SCSI device sda: drive cache: write back
SCSI device sda: 625140335 512-byte hdwr sectors (320072 MB)
sda: Write Protect is off
sda: Mode Sense: 00 3a 00 00
SCSI device sda: drive cache: write back
sda: sda1 sda2 sda3 sda4 < sda5 >
sd 0:0:0:0: Attached scsi disk sda
EXT3 FS on sda2, internal journal
EXT3 FS on sda5, internal journal
EXT3 FS on sda1, internal journal
Adding 4192956k swap on /dev/sda3. Priority:-1 extents:1 across:
4192956k
Sincerely,
Kumar Vijay Mishra.
I think the speed you obtained is good; I have seen many systems at
around this speed. May I add something here?
In my experience, assembled (non-brand-name) computers are often
faster than brand-name ones. This doesn't mean the assembled machines
are better: I think brand-name computers are more stable and safer,
and can carry heavier workloads. But the one advantage of assembled
machines, as I have seen many times, is speed.
For example, I have seen around 100 MB/sec with cheap assembled PCs,
while I haven't seen that speed even on the expensive brand-name
servers I work with.
By the way, don't measure the speed of your PC while it's under heavy
load; the hdparm figures will of course be worse than the ones you get
when the PC isn't under heavy load.
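For a second opinion on sequential throughput that doesn't depend on
hdparm, a short dd run works; a minimal sketch (the scratch path is
arbitrary, and conv=fdatasync needs a reasonably recent GNU coreutils):

```shell
# Write 64 MB, forcing it to the platters before dd reports the rate
# on its final line.
dd if=/dev/zero of=/tmp/hdtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/hdtest
```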
Regards,