[Scribe - HDFS] OutOfMemoryError: unable to create new native thread

GK

Jul 6, 2010, 6:23:15 AM
to Scribe Server
Hi all,

I ran a long test, over 24 hours, of Scribe writing messages to HDFS.
My Scribe client sends 10,000 messages/sec to the Scribe server. After
some time I got an OutOfMemoryError, which shows up in the Scribe
server logs:

[Tue Jul 6 07:20:01 2010] "[hdfs] ERROR: HDFS is not configured for file: hdfs://system1140:9000/"
[Tue Jul 6 07:20:01 2010] "[hdfs] Connecting to HDFS"
[Tue Jul 6 07:20:01 2010] "[hdfs] Before hdfsConnectNewInstance(system1140, 9000)"
Exception in thread "Thread-2" java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:597)
    at java.lang.UNIXProcess$1.run(UNIXProcess.java:141)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.lang.UNIXProcess.<init>(UNIXProcess.java:103)
    at java.lang.ProcessImpl.start(ProcessImpl.java:65)
    at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:149)
    at org.apache.hadoop.util.Shell.run(Shell.java:134)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:286)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:354)
    at org.apache.hadoop.util.Shell.execCommand(Shell.java:337)
    at org.apache.hadoop.security.UnixUserGroupInformation.executeShellCommand(UnixUserGroupInformation.java:353)
    at org.apache.hadoop.security.UnixUserGroupInformation.getUnixUserName(UnixUserGroupInformation.java:332)
    at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:246)
    at org.apache.hadoop.security.UnixUserGroupInformation.login(UnixUserGroupInformation.java:301)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:287)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:261)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:90)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1417)
    at org.apache.hadoop.fs.FileSystem.access$1(FileSystem.java:1410)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1444)
    at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:1438)
    at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:220)
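
If I read the trace right, this is not heap exhaustion: the JVM
failed to get one more native thread from the OS. The fork for
Hadoop's whoami shell command (Shell.runCommand, called from
UnixUserGroupInformation.login) needs a helper thread for the child
process, and that Thread.start() is what blew up. The same error can
be reproduced with plain Java, no Scribe or Hadoop involved; a
minimal sketch (assumes nothing beyond the JDK):

public class NativeThreadExhaustion {
    public static void main(String[] args) {
        long started = 0;
        while (true) {
            // Each Java thread pins a native OS thread plus stack memory.
            // Once the per-user limit (ulimit -u on Linux) or native memory
            // is exhausted, Thread.start() throws the same
            // "OutOfMemoryError: unable to create new native thread".
            new Thread(new Runnable() {
                public void run() {
                    try {
                        Thread.sleep(Long.MAX_VALUE); // park forever
                    } catch (InterruptedException ignored) {
                    }
                }
            }).start();
            started++;
            if (started % 1000 == 0) {
                System.out.println("started " + started + " threads");
            }
        }
    }
}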


From the trace it seems clear that Scribe (or rather the JVM it
embeds) ran out of native threads. Any idea how to avoid this? Has
anyone faced this before? What I suspect is going on is sketched
below.
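
My suspicion (just an assumption on my part, not confirmed in the
Scribe or Hadoop code): every reconnect goes through
hdfsConnectNewInstance, which lands in FileSystem.newInstance at the
bottom of the trace. newInstance deliberately bypasses the FileSystem
cache (note Cache.getUnique in the trace), so each reconnect builds a
fresh DFSClient with its own background threads, and if the previous
instance is never closed those threads accumulate until the OS
refuses to create more. A minimal sketch of that pattern, using the
0.20-era Hadoop API visible in the trace:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ReconnectLeakSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        URI uri = URI.create("hdfs://system1140:9000/");
        while (true) {
            // Bypasses the FileSystem cache: a fresh DFSClient (and its
            // background threads) is created on every call. This is the
            // path at the bottom of the stack trace above.
            FileSystem fs = FileSystem.newInstance(uri, conf);
            // ... write messages, hit an error, decide to reconnect ...
            // Without fs.close() here, the old client's threads are never
            // reclaimed and pile up until thread creation fails.
        }
    }
}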

Any help appreciated.