You can write a simple C program that keeps writing data to a file until it has written 100 GB, printing a line after every 1 MB write. You will find that the program sometimes blocks for a long time; while it is blocked, the OS is flushing
data to disk. I ran into this on Fedora Core 4, but never on CentOS 5. You can try it yourself.
See http://www.westnet.com/~gsmith/content/linux-pdflush.htm
Zheng
2010/3/3 dongyajun <dong...@gmail.com>:
--
Yours,
Zheng
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
</property>
--
You received this message because you are subscribed to the Google Groups "Hadoop In China" group.
To view this discussion on the web visit https://groups.google.com/d/msg/hadooper_cn/-/AGoUKgr72-8J.
To post to this group, send email to hadoo...@googlegroups.com.
To unsubscribe from this group, send email to hadooper_cn...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/hadooper_cn?hl=en.