I have a very strange phenomenon concerning read performance as tested
with iozone
(command line: iozone -Ra -n 16G -y 16384k -g 16G -i 0 -i 1):
When running over Infiniband (gen1, openib LND), I get 400MB/s write
but only 160MB/s read. This is already a pretty low read speed
compared to write. However, it becomes much worse if I run the same
benchmark over GigE: here I get 105MB/s write (excellent), but only
17MB/s (!) read. I already tried increasing
/proc/fs/lustre/llite/fs0/max_read_ahead_mb without much success
(it only changed the read speed from 14 to 17MB/s). Any idea how this
could be improved?
The configuration is as follows:
- 1 MDS + 2 OSS with 2 OSTs each.
- Kernel 2.6.15.7 with Lustre 1.4.6.1 on both clients and servers.
Thanks,
Roland
_______________________________________________
Lustre-discuss mailing list
Lustre-...@clusterfs.com
https://mail.clusterfs.com/mailman/listinfo/lustre-discuss
Take max_read_ahead_mb down to ~2 MiB; 1.4.6.1 had several problems with its
read-ahead logic that killed read performance (for the default and larger
values).
/Peter
Peter> On Tuesday 09 January 2007 14:10, Roland Fehrenbacher wrote:
>> Hi,
>>
>> I have a very strange phenomenon concerning read performance as
>> tested with iozone (command line: iozone -Ra -n 16G -y 16384k
>> -g 16G -i 0 -i 1):
>>
>> When running over Infiniband (gen1, openib LND), I get 400MB/s
>> write but only 160MB/s read. This is already a pretty low read
>> speed compared to write. However, it becomes much worse, if I
>> run the same benchmark over GigE: Here I get 105MB/s write
>> (excellent), and only 17MB/s!!! read. I already tried
>> increasing /proc/fs/lustre/llite/fs0/max_read_ahead_mb without
>> much success (changed from 14 to 17MB/s). Any idea how this
>> could be improved?
Peter> Take max_read_ahead_mb down to ~2 MiB; 1.4.6.1 had several
Peter> problems with its read-ahead logic that killed read
Peter> performance (for the default and larger values).
Thanks for the hint, Peter. Unfortunately, setting this to 2MB (echo 2
> /proc/fs/lustre/llite/fs8/max_read_ahead_mb) resulted in even poorer
performance of approx. 10MB/s. Any other ideas?
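(For later readers: the procfs tunable discussed above can be inspected and set roughly as follows. This is a sketch, not a verified recipe; the llite instance name, e.g. fs0 or fs8, depends on the mount, so check the directory listing on your own client first.)

```shell
# Sketch: inspect/adjust the Lustre 1.4.x client read-ahead window.
# The llite instance name ("fs0" here) varies per mount; list
# /proc/fs/lustre/llite/ on the client to find the right one.
RA=/proc/fs/lustre/llite/fs0/max_read_ahead_mb
if [ -f "$RA" ]; then
    echo "current read-ahead: $(cat "$RA") MB"
    echo 2 > "$RA"   # try a smaller window, per Peter's suggestion
else
    echo "no llite proc entry found (not a Lustre 1.4 client?)"
fi
```

The change is not persistent across remounts, so it needs to be reapplied (or scripted) on every client after mounting.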
Roland
>> The configuration is as follows:
>>
>> - 1 MDS + 2 OSS with 2 OSTs each.
>> - Kernel 2.6.15.7 with Lustre 1.4.6.1 on both clients and servers.
Sorry about that; your description sounded very close to what I saw a few
months ago, but that wasn't quite it, I guess. As for more ideas, I'd try to
upgrade Lustre to the latest 1.4.x, since I know a lot of things regarding
read-ahead were fixed/changed in 1.4.7 (see the changelog).
Good luck,
Peter
Peter> On Tuesday 09 January 2007 15:38, Roland Fehrenbacher wrote: ...
>> >> increasing /proc/fs/lustre/llite/fs0/max_read_ahead_mb without
>> >> much success (changed from 14 to 17MB/s). Any idea how this
>> >> could be improved?
>>
Peter> Take max_read_ahead_mb down to ~2 MiB; 1.4.6.1 had several
Peter> problems with its read-ahead logic that killed read
Peter> performance (for the default and larger values).
>> Thanks for the hint, Peter. Unfortunately setting this to 2MB
>> (echo 2
>>
>> > /proc/fs/lustre/llite/fs8/max_read_ahead_mb) resulted in even
>> poorer
>>
>> performance of approx. 10MB/s. Any other ideas?
Peter> Sorry about that; your description sounded very close to
Peter> what I saw a few months ago, but that wasn't quite it, I
Peter> guess. As for more ideas, I'd try to upgrade Lustre to the
Peter> latest 1.4.x, since I know a lot of things regarding
Peter> read-ahead were fixed/changed in 1.4.7 (see the changelog).
On a test setup, I have 1.4.7 running, and there read performance really
is on par with write. Unfortunately, the production cluster cannot
be upgraded in the near future, so another solution would be helpful.
Thanks again,
Roland