Unable to create new native thread

Asra Yousuf

Mar 6, 2017, 10:22:30 AM
to Druid User
Hi,

During Hadoop batch ingestion, some of the tasks are failing with exceptions like the following:

[WARN ] 2017-03-06 12:32:10.216 [main] AbstractLifeCycle - FAILED qtp965138756{FAILED,66<=13<=66,i=12,q=0}: java.lang.OutOfMemoryError: unable to create new native thread
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method) ~[?:1.7.0_79]
        at java.lang.Thread.start(Thread.java:714) ~[?:1.7.0_79]
        at org.eclipse.jetty.util.thread.QueuedThreadPool.startThreads(QueuedThreadPool.java:430) ~[jetty-util-9.2.5.v20141112.jar:9.2.5.v20141112]
        at org.eclipse.jetty.util.thread.QueuedThreadPool.doStart(QueuedThreadPool.java:104) ~[jetty-util-9.2.5.v20141112.jar:9.2.5.v20141112]
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [jetty-util-9.2.5.v20141112.jar:9.2.5.v20141112]
        at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132) [jetty-util-9.2.5.v20141112.jar:9.2.5.v20141112]
        at org.eclipse.jetty.server.Server.start(Server.java:387) [jetty-server-9.2.5.v20141112.jar:9.2.5.v20141112]
        at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114) [jetty-util-9.2.5.v20141112.jar:9.2.5.v20141112]
        at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61) [jetty-server-9.2.5.v20141112.jar:9.2.5.v20141112]
        at org.eclipse.jetty.server.Server.doStart(Server.java:354) [jetty-server-9.2.5.v20141112.jar:9.2.5.v20141112]
        at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68) [jetty-util-9.2.5.v20141112.jar:9.2.5.v20141112]
        at io.druid.server.initialization.jetty.JettyServerModule$1.start(JettyServerModule.java:204) [druid-server-0.9.2.jar:0.9.2]
        at com.metamx.common.lifecycle.Lifecycle.start(Lifecycle.java:259) [java-util-0.27.10.jar:?]
        at io.druid.guice.LifecycleModule$2.start(LifecycleModule.java:155) [druid-api-0.9.2.jar:0.9.2]
        at io.druid.cli.GuiceRunnable.initLifecycle(GuiceRunnable.java:101) [druid-services-0.9.2.jar:0.9.2]
        at io.druid.cli.CliPeon.run(CliPeon.java:274) [druid-services-0.9.2.jar:0.9.2]
        at io.druid.cli.Main.main(Main.java:106) [druid-services-0.9.2.jar:0.9.2]
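
To get a rough idea of how many threads each peon JVM is holding before it dies, I am planning to read the JVM's own thread counters; the snippet below is only a sketch of the ThreadMXBean calls (the class name is mine and it is not part of our ingestion code):

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountCheck {
    public static void main(String[] args) {
        // Thread counters for this JVM, exposed by the platform ThreadMXBean.
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("live threads    = " + threads.getThreadCount());
        System.out.println("peak threads    = " + threads.getPeakThreadCount());
        System.out.println("threads started = " + threads.getTotalStartedThreadCount());
    }
}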



In some instances, the following error is also logged:

[ERROR] 2017-03-06 02:00:49.813 [main] CliPeon - Error when starting up.  Failing.
java.lang.OutOfMemoryError: unable to create new native thread
        at java.lang.Thread.start0(Native Method) ~[?:1.7.0_79]
        at java.lang.Thread.start(Thread.java:714) ~[?:1.7.0_79]
        at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949) ~[?:1.7.0_79]
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1371) ~[?:1.7.0_79]
        at org.jboss.netty.util.internal.DeadLockProofWorker.start(DeadLockProofWorker.java:38) ~[netty-3.10.4.Final.jar:?]
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.openSelector(AbstractNioSelector.java:368) ~[netty-3.10.4.Final.jar:?]
        at org.jboss.netty.channel.socket.nio.AbstractNioSelector.<init>(AbstractNioSelector.java:100) ~[netty-3.10.4.Final.jar:?]
        at org.jboss.netty.channel.socket.nio.AbstractNioWorker.<init>(AbstractNioWorker.java:52) ~[netty-3.10.4.Final.jar:?]
        at org.jboss.netty.channel.socket.nio.NioWorker.<init>(NioWorker.java:45) ~[netty-3.10.4.Final.jar:?]
        at org.jboss.netty.channel.socket.nio.NioWorkerPool.newWorker(NioWorkerPool.java:44) ~[netty-3.10.4.Final.jar:?]
        at org.jboss.netty.channel.socket.nio.NioWorkerPool.newWorker(NioWorkerPool.java:28) ~[netty-3.10.4.Final.jar:?]
        at org.jboss.netty.channel.socket.nio.AbstractNioWorkerPool.init(AbstractNioWorkerPool.java:80) ~[netty-3.10.4.Final.jar:?]
        at org.jboss.netty.channel.socket.nio.NioWorkerPool.<init>(NioWorkerPool.java:39) ~[netty-3.10.4.Final.jar:?]
        at com.metamx.http.client.HttpClientInit.createBootstrap(HttpClientInit.java:133) ~[http-client-1.0.4.jar:?]
        at com.metamx.http.client.HttpClientInit.createClient(HttpClientInit.java:85) ~[http-client-1.0.4.jar:?]
        at io.druid.guice.http.HttpClientModule$HttpClientProvider.get(HttpClientModule.java:116) ~[druid-server-0.9.2.jar:0.9.2]
        at io.druid.guice.http.HttpClientModule$HttpClientProvider.get(HttpClientModule.java:86) ~[druid-server-0.9.2.jar:0.9.2]
        at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:81) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.InternalFactoryToInitializableAdapter.provision(InternalFactoryToInitializableAdapter.java:53) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:61) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.InternalFactoryToInitializableAdapter.get(InternalFactoryToInitializableAdapter.java:45) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:40) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.SingletonScope$1.get(SingletonScope.java:194) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:41) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:110) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:90) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:268) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.InjectorImpl$2$1.call(InjectorImpl.java:1019) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.InjectorImpl$2.get(InjectorImpl.java:1015) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.InjectorImpl.getInstance(InjectorImpl.java:1050) ~[guice-4.1.0.jar:?]
        at io.druid.guice.PolyBind$ConfiggedProvider.get(PolyBind.java:178) ~[druid-api-0.9.2.jar:0.9.2]
        at com.google.inject.internal.ProviderInternalFactory.provision(ProviderInternalFactory.java:81) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.InternalFactoryToInitializableAdapter.provision(InternalFactoryToInitializableAdapter.java:53) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.ProviderInternalFactory.circularGet(ProviderInternalFactory.java:61) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.InternalFactoryToInitializableAdapter.get(InternalFactoryToInitializableAdapter.java:45) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.SingleParameterInjector.inject(SingleParameterInjector.java:38) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.SingleParameterInjector.getAll(SingleParameterInjector.java:62) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.ConstructorInjector.provision(ConstructorInjector.java:110) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.ConstructorInjector.construct(ConstructorInjector.java:90) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:268) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:46) ~[guice-4.1.0.jar:?]
        at com.google.inject.internal.InjectorImpl.callInContext(InjectorImpl.java:1092) ~[guice-4.1.0.jar:?]
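
Both traces fail inside Thread.start(), so my reading is that the peon is hitting an OS-level limit on native threads rather than exhausting the heap. As a reference point only, a throwaway sketch like the one below (the class name is mine) produces the same OutOfMemoryError signature once the OS refuses another thread:

public class ThreadExhaustion {
    public static void main(String[] args) {
        long started = 0;
        try {
            while (true) {
                // Each thread just parks so it stays alive and counts against the limit.
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE);
                        } catch (InterruptedException ignored) {
                        }
                    }
                });
                t.start();
                started++;
            }
        } catch (OutOfMemoryError e) {
            // The OS refused a new native thread; same message as in the peon logs.
            System.out.println("Failed after " + started + " threads: " + e.getMessage());
        }
    }
}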

The common.runtime.properties file is:


#
# Extensions
#

# This is not the full list of Druid extensions, but common ones that people often use. You may need to change this list
# based on your particular setup.

druid.extensions.loadList=["druid-hdfs-storage","mysql-metadata-storage","druid-histogram"]
druid.extensions.directory=/opt/druid/druid/extensions
druid.extensions.hadoopDependenciesDir=/opt/druid/hadoop-dependencies

# If you have a different version of Hadoop, place your Hadoop client jar files in your hadoop-dependencies directory
# and uncomment the line below to point to your directory.

#
# Logging
#

# Log all runtime properties on startup. Disable to avoid logging properties on startup:
druid.startup.logging.logProperties=true

#
# Zookeeper
#

druid.zk.service.host=10.32.175.12:2181
druid.zk.paths.base=/druid

#
# Metadata storage
#

# For MySQL:
druid.metadata.storage.type=mysql
druid.metadata.storage.connector.connectURI=jdbc:mysql://10.32.176.10:3306/druid
druid.metadata.storage.connector.user=druid
druid.metadata.storage.connector.password=diurd

#
# Deep storage
#

# For HDFS (make sure to include the HDFS extension and that your Hadoop config files in the cp):
druid.storage.type=hdfs
druid.storage.storageDirectory=hdfs://stampy/druid/deepstorage

# Caching
druid.cache.type=memcached
druid.cache.hosts=ccg22memcache025794.ccg22.lvs.com:11211
druid.cache.expiration=2147483647
druid.cache.memcachedPrefix=druid
druid.cache.maxOperationQueueSize=1073741824
druid.cache.readBufferSize=10485760

#
# Service discovery
#

druid.selectors.indexing.serviceName=druid/overlord
druid.selectors.coordinator.serviceName=druid/coordinator



And the middleManager runtime.properties are:

druid.service=druid/middleManager
druid.port=8091

# Number of tasks per middleManager
druid.worker.capacity=4

# Task launch parameters
druid.indexer.runner.javaOpts=-server -Xmx3g -Duser.timezone=UTC -Dfile.encoding=UTF-8 -XX:MaxPermSize=512m -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenSweepingEnabled -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

# Task Logging
druid.indexer.logs.directory=/mnt/log/tasks

# Processing threads and buffers
druid.processing.buffer.sizeBytes=536870912
druid.processing.numThreads=2

# Hadoop indexing
druid.indexer.task.hadoopWorkingPath=hdfs://stampy/druid/workingPath
druid.indexer.task.defaultHadoopCoordinates=["org.apache.hadoop:hadoop-client:2.6.0"]
druid.indexer.fork.property.druid.indexer.task.hadoopWorkingPath=hdfs://stampy/druid/workingPath



The middleManager is running on a 32-CPU box with 236 GB of RAM.
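
In case it is useful, a trivial check along these lines (the class name is mine; run with the same javaOpts as the peons) would confirm the CPU count and max heap a JVM actually sees on this box:

public class HostCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // CPUs visible to the JVM and the configured max heap (-Xmx).
        System.out.println("available processors = " + rt.availableProcessors());
        System.out.println("max heap (MB)        = " + rt.maxMemory() / (1024 * 1024));
    }
}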

Thanks,

Asra
