Hadoop DataNode throws java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write, looking for help.


dongyajun

Mar 3, 2010, 9:18:55 PM
to hadooper_cn
This exception has been thrown ever since I set these machines up, and it is still being thrown now. Can anyone explain what causes it? My environment and some additional details:

1. Two test nodes, with replication set to 2.
2. The JDK version is 1.7 (when I originally used a JDK at 1.6.0_04 or later together with hadoop-0.20.1, the process often sat at 100% CPU).
3. On each of the two nodes, three disks are already full. (That said, the exception was also thrown back when every disk was still writable.)
4. The system is continuously putting and getting data.
5. hadoop-0.19.2

On gets, the client usually runs on the DataNode itself; in the exception below you can see that the data is actually being transferred between two ports on the same host.

The exception:

2010-03-04 09:59:07,733 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.3.26:50010, storageID=DS-620362217-192.168.3.26-50010-1255401356858, infoPort=50075, ipcPort=50020):Got exception while serving blk_6463950199377699840_215007 to /192.168.3.26:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/192.168.3.26:50010 remote=/192.168.3.26:48285]
    at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:250)
    at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:313)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:400)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:180)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:95)
    at java.lang.Thread.run(Thread.java:717)

2010-03-04 09:59:07,733 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.3.26:50010, storageID=DS-620362217-192.168.3.26-50010-1255401356858, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/192.168.3.26:50010 remote=/192.168.3.26:48285]
    at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:250)
    at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:313)
    at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:400)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:180)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:95)
    at java.lang.Thread.run(Thread.java:717)

Thanks in advance.

--
With a good team all things are possible

Zheng Shao

Mar 3, 2010, 11:14:57 PM
to hadoo...@googlegroups.com
I have run into this problem. It later turned out to be related to the OS.

You can write a simple C program that keeps writing data to a single file, say 100 GB in total, printing a line after every 1 MB written. You will see that the program sometimes blocks for a long time; while it is blocked, the OS is flushing data to disk. I hit this on Fedora Core 4 but never on CentOS 5. You can test it yourself.

See http://www.westnet.com/~gsmith/content/linux-pdflush.htm
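Roughly, such a test program could look like the sketch below (illustrative only; the 100 GB total, the 1 MB chunk size, and the /data/write_test.bin path are arbitrary choices):

    /* Sequential-write test: append 1 MB chunks to a file and time each
       write(), so that long stalls caused by the kernel flushing dirty
       pages (pdflush) show up in the output. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/time.h>

    #define CHUNK_BYTES (1024 * 1024)      /* 1 MB per write */
    #define TOTAL_MB    (100 * 1024)       /* roughly 100 GB */

    int main(void) {
        static char buf[CHUNK_BYTES];
        memset(buf, 'x', sizeof(buf));

        int fd = open("/data/write_test.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        for (int i = 0; i < TOTAL_MB; i++) {
            struct timeval t0, t1;
            gettimeofday(&t0, NULL);
            if (write(fd, buf, sizeof(buf)) != (ssize_t) sizeof(buf)) {
                perror("write");
                break;
            }
            gettimeofday(&t1, NULL);
            double ms = (t1.tv_sec - t0.tv_sec) * 1000.0 +
                        (t1.tv_usec - t0.tv_usec) / 1000.0;
            /* A normal 1 MB write returns in a few milliseconds; a stall of
               many seconds here is the OS flushing dirty data to disk. */
            printf("%d MB written, last write took %.1f ms\n", i + 1, ms);
        }
        close(fd);
        return 0;
    }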

Zheng


--
Yours,
Zheng

dongyajun

Mar 3, 2010, 11:28:04 PM
to hadoo...@googlegroups.com
Thanks, Shao.

I just found an issue about this exception and then took a look at the Hadoop code.

Before hadoop-0.17, the DataNode used blocking sockets to send data to the DFS client. From version 0.17 on, Hadoop sends data over non-blocking sockets, so after 0.17 this problem can have several causes:

1. The OS, as you just said.
2. I think the JVM version also matters; in fact, before JDK 1.6.0_04 the JDK had problems in its NIO handling.

http://issues.apache.org/jira/browse/HADOOP-3831

Per that issue, setting the dfs.datanode.socket.write.timeout parameter to 0 effectively falls back to the hadoop-0.16 behavior. I updated that setting just now and have not seen the exception thrown since.
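For anyone following along, a minimal hdfs-site.xml snippet for that change, in the same form as the other DataNode properties (the value is in milliseconds; 0 disables the write timeout):

    <property>
        <name>dfs.datanode.socket.write.timeout</name>
        <value>0</value>
    </property>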

My system: Linux 2.6.18-92.el5PAE #1 SMP Tue Apr 29 13:31:02 EDT 2008 i686 i686 i386 GNU/Linux

Later I will test this again on different Linux versions. Thanks again.


dongyajun

Mar 3, 2010, 11:31:31 PM
to hadoo...@googlegroups.com
os: Red Hat Enterprise Linux Server release 5.2 (Tikanga)


gavingeng

Sep 19, 2012, 3:38:01 AM
to hadoo...@googlegroups.com
I am running into this problem now as well. I use Flume to collect logs and load-test the service with ab; when the foreach loop sends a large, sustained volume of requests, this problem shows up, and the resulting files are corrupted and cannot be parsed.

os: Linux xxxx 2.6.32-41-server #88-Ubuntu SMP Thu Mar 29 14:32:47 UTC 2012 x86_64 GNU/Linux

feng lu

Sep 19, 2012, 4:05:18 AM
to hadoo...@googlegroups.com
The DataNode has an upper limit on the number of files it serves at the same time; it is configured in hdfs-site.xml as follows:
    <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>4096</value>
    </property>

Also take a look at how many file handles your JVM process is allowed to have open at the same time.
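As a side note, a minimal sketch (not Hadoop-specific) of one way to check that limit: the soft RLIMIT_NOFILE value is what the DataNode JVM inherits from whatever shell or init script launches it.

    /* Print the per-process open-file limit (RLIMIT_NOFILE).
       The soft limit is what the process actually gets; a DataNode JVM
       inherits these values from the environment that starts it. */
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void) {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
            perror("getrlimit");
            return 1;
        }
        printf("open files: soft=%lu hard=%lu\n",
               (unsigned long) rl.rlim_cur,
               (unsigned long) rl.rlim_max);
        return 0;
    }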






--
Don't Grow Old, Grow Up... :-)

feng lu

Sep 19, 2012, 4:06:36 AM
to hadoo...@googlegroups.com

You can use the above as a reference.
