hadoop jar h2odriver_horton.jar water.hadoop.h2odriver -libjars ../h2o.jar -Dmapred.job.queue.name=hdmi-set -driverif 10.115.201.59 -timeout 1800 -mapperXmx 1g -nodes 2 -output hdfsOutputDirName
13/10/17 08:51:14 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/10/17 08:51:14 INFO security.JniBasedUnixGroupsMapping: Using JniBasedUnixGroupsMapping for Group resolution
Using mapper->driver callback IP address and port: 10.115.201.59:34389
(You can override these with -driverif and -driverport.)
Driver program compiled with MapReduce V1 (Classic)
Memory Settings:
mapred.child.java.opts: -Xms1g -Xmx1g
mapred.map.child.java.opts: -Xms1g -Xmx1g
Extra memory percent: 10
mapreduce.map.memory.mb: 1126
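The container size the driver reports follows from `-mapperXmx` plus the "extra memory percent" overhead. A quick sanity check of the arithmetic (illustrative only, not H2O's actual code):

```python
# Reproduce the YARN container request from -mapperXmx plus the extra
# memory percent shown in the log above. Illustrative arithmetic only;
# this mirrors the reported numbers, not H2O's implementation.
def container_mb(mapper_xmx_gb, extra_percent=10):
    heap_mb = mapper_xmx_gb * 1024
    return int(heap_mb * (100 + extra_percent) / 100)

print(container_mb(1))  # 1126, matching mapreduce.map.memory.mb above
```

The same formula gives 6758 MB for the 6 GB run later in this thread.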
Job name 'H2O_61026' submitted
JobTracker job ID is 'job_201310092016_36664'
Waiting for H2O cluster to come up...
H2O node 10.115.57.45:54321 requested flatfile
H2O node 10.115.5.25:54321 requested flatfile
Sending flatfiles to nodes...
[Sending flatfile to node 10.115.57.45:54321]
[Sending flatfile to node 10.115.5.25:54321]
H2O node 10.115.57.45:54321 reports H2O cluster size 1
H2O node 10.115.5.25:54321 reports H2O cluster size 1
08:51:53.179 main INFO WATER: ----- H2O started -----
08:51:53.183 main INFO WATER: Build git branch: master
08:51:53.183 main INFO WATER: Build git hash: ff8e56c1192c7f79f5a436951165b695ca54116d
08:51:53.183 main INFO WATER: Build git describe: ff8e56c-dirty
08:51:53.183 main INFO WATER: Build project version: 1.7.0.99999
08:51:53.183 main INFO WATER: Built by: 'csevers'
08:51:53.183 main INFO WATER: Built on: 'Wed Oct 16 16:57:47 PDT 2013'
08:51:53.184 main INFO WATER: Java availableProcessors: 24
08:51:53.187 main INFO WATER: Java heap totalMemory: 0.96 gb
08:51:53.187 main INFO WATER: Java heap maxMemory: 0.96 gb
08:51:53.187 main INFO WATER: ICE root: '/hadoop/1/scratch/taskTracker/csevers/jobcache/job_201310092016_36664/attempt_201310092016_36664_m_000000_0,/hadoop/2/scratch/taskTracker/csevers/jobcache/job_201310092016_36664/attempt_201310092016_36664_m_000000_0,/hadoop/3/scratch/taskTracker/csevers/jobcache/job_201310092016_36664/attempt_201310092016_36664_m_000000_0,/hadoop/4/scratch/taskTracker/csevers/jobcache/job_201310092016_36664/attempt_201310092016_36664_m_000000_0,/hadoop/5/scratch/taskTracker/csevers/jobcache/job_201310092016_36664/attempt_201310092016_36664_m_000000_0,/hadoop/6/scratch/taskTracker/csevers/jobcache/job_201310092016_36664/attempt_201310092016_36664_m_000000_0,/hadoop/7/scratch/taskTracker/csevers/jobcache/job_201310092016_36664/attempt_201310092016_36664_m_000000_0,/hadoop/8/scratch/taskTracker/csevers/jobcache/job_201310092016_36664/attempt_201310092016_36664_m_000000_0,/hadoop/9/scratch/taskTracker/csevers/jobcache/job_201310092016_36664/attempt_201310092016_36664_m_000000_0,/hadoop/10/scratch/taskTracker/csevers/jobcache/job_201310092016_36664/attempt_201310092016_36664_m_000000_0,/hadoop/11/scratch/taskTracker/csevers/jobcache/job_201310092016_36664/attempt_201310092016_36664_m_000000_0,/hadoop/12/scratch/taskTracker/csevers/jobcache/job_201310092016_36664/attempt_201310092016_36664_m_000000_0'
08:51:53.234 main INFO WATER: Internal communication uses port: 54322
Listening for HTTP and REST traffic on http://10.115.5.25:54321/
EmbeddedH2OConfig: notifyAboutEmbeddedWebServerIpPort called (10.115.5.25, 54321)
EmbeddedH2OConfig: fetchFlatfile called
EmbeddedH2OConfig: fetchFlatfile returned
------------------------------------------------------------
10.115.57.45:54321
10.115.5.25:54321
------------------------------------------------------------
08:51:53.404 main INFO WATER: H2O cloud name: 'H2O_61026'
08:51:53.405 main INFO WATER: (v1.7.0.99999) 'H2O_61026' on /10.115.5.25:54321, discovery address /237.114.157.191:60786
08:51:53.408 main INFO WATER: Cloud of size 1 formed [/10.115.5.25:54321]
EmbeddedH2OConfig: notifyAboutCloudSize called (10.115.5.25, 54321, 1)
--
You received this message because you are subscribed to the Google Groups "H2O Users - h2ostream" group.
To unsubscribe from this group and stop receiving emails from it, send an email to h2ostream+...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.
tanny@tanny-machine:~/binaries/h2o-3.10.0.8-hdp2.4$ hadoop jar h2odriver.jar -nodes 3 -mapperXmx 6g -output hdfsOutputDir
Determining driver host interface for mapper->driver callback...
[Possible callback IP address: 192.168.1.10]
[Possible callback IP address: 127.0.0.1]
Using mapper->driver callback IP address and port: 192.168.1.10:33020
(You can override these with -driverif and -driverport.)
Memory Settings:
mapreduce.map.java.opts: -Xms6g -Xmx6g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Dlog4j.defaultInitOverride=true
Extra memory percent: 10
mapreduce.map.memory.mb: 6758
16/12/05 10:30:02 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/12/05 10:30:03 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
16/12/05 10:30:03 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
16/12/05 10:30:03 INFO mapreduce.JobSubmitter: number of splits:3
16/12/05 10:30:03 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1112487908_0001
16/12/05 10:30:03 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
Job name 'H2O_59084' submitted
JobTracker job ID is 'job_local1112487908_0001'
For YARN users, logs command is 'yarn logs -applicationId application_local1112487908_0001'
Waiting for H2O cluster to come up...
16/12/05 10:30:03 INFO mapred.LocalJobRunner: OutputCommitter set in config null
16/12/05 10:30:03 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
16/12/05 10:30:03 INFO mapred.LocalJobRunner: Waiting for map tasks
16/12/05 10:30:03 INFO mapred.LocalJobRunner: Starting task: attempt_local1112487908_0001_m_000000_0
16/12/05 10:30:03 INFO mapred.Task: Using ResourceCalculatorProcessTree : [ ]
16/12/05 10:30:03 INFO mapred.MapTask: Processing split: water.hadoop.h2odriver$EmptySplit@7d27d9b0
POST 0: Entered run
16/12/05 10:30:03 INFO Configuration.deprecation: mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir
POST 11: After setEmbeddedH2OConfig
16/12/05 10:30:04 INFO reflections.Reflections: Reflections took 491 ms to scan 2 urls, producing 159 keys and 1047 values
16/12/05 10:30:04 INFO reflections.Reflections: Reflections took 420 ms to scan 2 urls, producing 119 keys and 584 values
16/12/05 10:30:06 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/12/05 10:30:06 INFO server.AbstractConnector: Started SocketC...@0.0.0.0:54321
12-05 10:30:06.161 192.168.1.10:54321 13892 #cutor #0 INFO: ----- H2O started -----
12-05 10:30:06.166 192.168.1.10:54321 13892 #cutor #0 INFO: Build git branch: rel-turing
12-05 10:30:06.166 192.168.1.10:54321 13892 #cutor #0 INFO: Build git hash: 34b83da423d26dfbcc0b35c72714b31e80101d49
12-05 10:30:06.166 192.168.1.10:54321 13892 #cutor #0 INFO: Build git describe: jenkins-rel-turing-8
12-05 10:30:06.166 192.168.1.10:54321 13892 #cutor #0 INFO: Build project version: 3.10.0.8 (latest version: 3.10.1.1)
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: Build age: 1 month and 24 days
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: Built by: 'jenkins'
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: Built on: '2016-10-10 13:45:37'
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: Processed H2O arguments: [-ice_root, /home/tanny/HADOOP_STAGING_DIR/mapred/local/localRunner//tanny/jobcache/job_local1112487908_0001/attempt_local1112487908_0001_m_000000_0, -hdfs_skip, -name, H2O_59084, -ga_hadoop_ver, Hadoop 2.6.0, -user_name, tanny]
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: Java availableProcessors: 8
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: Java heap totalMemory: 270.0 MB
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: Java heap maxMemory: 455.5 MB
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: Java version: Java 1.8.0_91 (from Oracle Corporation)
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: JVM launch parameters: [-Xmx1000m, -Djava.library.path=/usr/local/hadoop/lib, -Djava.net.preferIPv4Stack=true, -Dhadoop.log.dir=/usr/local/hadoop/logs, -Dhadoop.log.file=hadoop.log, -Dhadoop.home.dir=/usr/local/hadoop, -Dhadoop.id.str=tanny, -Dhadoop.root.logger=INFO,console, -Dhadoop.policy.file=hadoop-policy.xml, -Djava.net.preferIPv4Stack=true, -Xmx512m, -Dhadoop.security.logger=INFO,NullAppender]
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: OS version: Linux 3.13.0-24-generic (amd64)
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: Machine physical memory: 15.56 GB
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: X-h2o-cluster-id: 1480914004002
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: User name: 'tanny'
12-05 10:30:06.167 192.168.1.10:54321 13892 #cutor #0 INFO: IPv6 stack selected: false
12-05 10:30:06.168 192.168.1.10:54321 13892 #cutor #0 INFO: Possible IP Address: eth0 (eth0), 192.168.1.10
12-05 10:30:06.168 192.168.1.10:54321 13892 #cutor #0 INFO: Possible IP Address: lo (lo), 127.0.0.1
12-05 10:30:06.168 192.168.1.10:54321 13892 #cutor #0 INFO: Internal communication uses port: 54322
12-05 10:30:06.168 192.168.1.10:54321 13892 #cutor #0 INFO: Listening for HTTP and REST traffic on http://192.168.1.10:54321/
EmbeddedH2OConfig: notifyAboutEmbeddedWebServerIpPort called (192.168.1.10, 54321)
EmbeddedH2OConfig: fetchFlatfile called
H2O node 192.168.1.10:54321 requested flatfile
ERROR: Timed out waiting for H2O cluster to come up (120 seconds)
ERROR: (Try specifying the -timeout option to increase the waiting time limit)
Attempting to clean up hadoop job...
16/12/05 10:32:03 WARN mapred.LocalJobRunner: job_local1112487908_0001
java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2014)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2088)
    at java.util.concurrent.ThreadPoolExecutor.awaitTermination(ThreadPoolExecutor.java:1465)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:449)
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Killed.
----- YARN cluster metrics -----
Number of YARN worker nodes: 1
----- Nodes -----
Node: http://tanny-machine:8042 Rack: /default-rack, RUNNING, 0 containers used, 0.0 / 8.0 GB used, 0 / 8 vcores used
----- Queues -----
Queue name: default
Queue state: RUNNING
Current capacity: 0.00
Capacity: 1.00
Maximum capacity: 1.00
Application count: 0
Queue 'default' approximate utilization: 0.0 / 8.0 GB used, 0 / 8 vcores used
----------------------------------------------------------------------
ERROR: Job memory request (19.8 GB) exceeds available YARN cluster memory (8.0 GB)
WARNING: Job memory request (19.8 GB) exceeds queue available memory capacity (8.0 GB)
ERROR: Only 1 out of the requested 3 worker containers were started due to YARN cluster resource limitations
----------------------------------------------------------------------
For YARN users, logs command is 'yarn logs -applicationId application_local1112487908_0001'
16/12/05 10:32:08 ERROR hdfs.DFSClient: Failed to close inode 16628
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/tanny/hdfsOutputDir/_temporary/0/_temporary/attempt_local1112487908_0001_m_000000_0/part-m-00000 (inode 16628): File does not exist. Holder DFSClient_NONMAPREDUCE_940153155_1 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3516)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3604)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3574)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:700)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:526)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)
    at org.apache.hadoop.ipc.Client.call(Client.java:1468)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy9.complete(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:443)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.complete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2250)
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2234)
    at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:938)
    at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:976)
    at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:899)
    at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2687)
    at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2704)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
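The 19.8 GB figure in the memory errors above is consistent with requesting three 6 GB mappers plus the 10% extra memory overhead. A quick check (illustrative arithmetic using the values from this run, not H2O's source):

```python
# Total YARN memory requested by the driver: nodes * (Xmx + extra percent).
# Values taken from the failing run above; illustrative only.
nodes = 3
mapper_xmx_gb = 6
extra_percent = 10

total_gb = nodes * mapper_xmx_gb * (100 + extra_percent) / 100
print(total_gb)  # 19.8 GB requested vs. 8.0 GB available on this cluster
```

Reducing `-mapperXmx` or `-nodes` until this total fits under the queue's 8.0 GB capacity should let all of the requested worker containers start.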
With due regards,
Tanmay Saha