'Cannot allocate memory' error - insufficient memory for the Java Runtime Environment to continue.


Shree

Jun 21, 2015, 7:28:41 AM6/21/15
to chenn...@googlegroups.com
Hi,
I have been trying to process 1,000 images (134 MB) with HIPI, following the averaging tutorial given on the website. HIB creation was fast, but I cannot run the MapReduce job: it fails with "There is insufficient memory for the Java Runtime Environment to continue." I have already updated my mapred-site.xml as follows:
 <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx8192m</value>
  </property>
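One thing worth noting from the log below: the job runs under mapred.LocalJobRunner, and in local mode the map tasks execute inside the single client JVM, so mapred.child.java.opts is likely never consulted. A sketch of raising the client JVM's heap instead (HADOOP_CLIENT_OPTS is a standard Hadoop environment variable; the jar and class names here are hypothetical placeholders):

```shell
# In local mode the whole job runs inside the one JVM started by the
# `hadoop jar` command, so raise that JVM's heap rather than the
# (unused) child-task heap. Note errno=12 means the OS itself refused
# the allocation, so also check free memory/swap on the machine.
export HADOOP_CLIENT_OPTS="-Xmx2g"
hadoop jar avg.jar hipi.examples.AvgImage images1k.hib avg-output  # hypothetical names
```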

15/06/20 07:41:39 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/20 07:41:40 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/06/20 07:41:40 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/06/20 07:41:40 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/06/20 07:41:40 INFO input.FileInputFormat: Total input paths to process : 1
Spawned 2map tasks
15/06/20 07:41:41 INFO mapreduce.JobSubmitter: number of splits:2
15/06/20 07:41:41 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local2138318905_0001
15/06/20 07:41:41 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/06/20 07:41:41 INFO mapreduce.Job: Running job: job_local2138318905_0001
15/06/20 07:41:41 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/06/20 07:41:41 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/06/20 07:41:41 INFO mapred.LocalJobRunner: Waiting for map tasks
15/06/20 07:41:41 INFO mapred.LocalJobRunner: Starting task: attempt_local2138318905_0001_m_000000_0
15/06/20 07:41:41 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
15/06/20 07:41:41 INFO mapred.MapTask: Processing split: hdfs://localhost:54310/user/ubuntu/images1k.hib.dat:0+134272033
15/06/20 07:41:42 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
15/06/20 07:41:42 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
15/06/20 07:41:42 INFO mapred.MapTask: soft limit at 83886080
15/06/20 07:41:42 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
15/06/20 07:41:42 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
15/06/20 07:41:42 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
Record starts at byte 0 and ends at byte 134272032
15/06/20 07:41:42 INFO mapreduce.Job: Job job_local2138318905_0001 running in uber mode : false
15/06/20 07:41:42 INFO mapreduce.Job:  map 0% reduce 0%
15/06/20 07:41:47 INFO mapred.LocalJobRunner: map > map
15/06/20 07:41:48 INFO mapreduce.Job:  map 9% reduce 0%
15/06/20 07:41:50 INFO mapred.LocalJobRunner: map > map
15/06/20 07:41:51 INFO mapreduce.Job:  map 15% reduce 0%
15/06/20 07:41:53 INFO mapred.LocalJobRunner: map > map
15/06/20 07:41:54 INFO mapreduce.Job:  map 20% reduce 0%
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000ef867000, 102338560, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 102338560 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/ubuntu/hipi-release/hs_err_pid10440.log

Senthil Kumar

Jun 23, 2015, 10:04:46 PM6/23/15
to chenn...@googlegroups.com, shreena...@gmail.com
Hi Shree,

A quick guess!
How big is each output (every context.write) from your map?
It seems io.sort.mb is still 100 MB. Can you increase it and run the same job again?
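For reference, raising that buffer in mapred-site.xml would look like the following (using the Hadoop 2.x property name that appears in the log above; the value is just an example):

```xml
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>200</value> <!-- map-side sort buffer in MB; the default is 100 -->
</property>
```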

Thanks
Senthil

sudhakar kurakula

Jun 23, 2015, 11:47:47 PM6/23/15
to chenn...@googlegroups.com, shreena...@gmail.com
Hi,

Try with this property:
<property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>  <!-- or -Xmx1400m -->
  </property>

Regards,
KSR

--
You received this message because you are subscribed to the Google Groups "Hadoop Users Group (HUG) Chennai" group.
To unsubscribe from this group and stop receiving emails from it, send an email to chennaihug+...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

Shree

Jun 24, 2015, 8:23:37 AM6/24/15
to chenn...@googlegroups.com
@Senthil - In the mapper I compute the average R, G, B per image and emit that (R, G, B) triple; the reducer then averages those triples to get the average RGB over all images in the HIB.
I tried setting mapreduce.task.io.sort.mb to 200 and running the program again. Unfortunately, it now aborts even earlier than before. Here is the output:
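That two-stage averaging can be sketched outside Hadoop like this (a plain-Java sketch of the arithmetic only, not HIPI's actual API; class and method names are made up for illustration):

```java
// Stage 1 ("mapper"): mean R,G,B per image.
// Stage 2 ("reducer"): mean of the per-image means.
// Caveat: the mean of means equals the true overall pixel mean only if
// all images have the same pixel count, which matches emitting one
// unweighted (meanR, meanG, meanB) triple per image.
public class MeanRgb {
    // Per-image stage: average the channel values of one image.
    // pixels[i] = {r, g, b} for pixel i.
    static double[] imageMean(double[][] pixels) {
        double[] sum = new double[3];
        for (double[] p : pixels)
            for (int c = 0; c < 3; c++) sum[c] += p[c];
        for (int c = 0; c < 3; c++) sum[c] /= pixels.length;
        return sum;
    }

    // Reducer stage: average the per-image means (same arithmetic,
    // applied to the triples instead of the pixels).
    static double[] combine(double[][] means) {
        return imageMean(means);
    }

    public static void main(String[] args) {
        double[][] img1 = {{0, 0, 0}, {2, 4, 6}}; // per-image mean (1, 2, 3)
        double[][] img2 = {{3, 2, 1}};            // per-image mean (3, 2, 1)
        double[] g = combine(new double[][]{imageMean(img1), imageMean(img2)});
        System.out.println(g[0] + " " + g[1] + " " + g[2]); // prints 2.0 2.0 2.0
    }
}
```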

15/06/24 11:30:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/24 11:30:05 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted avg1k-hipi-op
15/06/24 11:30:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/06/24 11:30:08 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
15/06/24 11:30:08 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
15/06/24 11:30:08 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
15/06/24 11:30:08 INFO input.FileInputFormat: Total input paths to process : 1
Spawned 2map tasks
15/06/24 11:30:08 INFO mapreduce.JobSubmitter: number of splits:2
15/06/24 11:30:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local37413710_0001
15/06/24 11:30:10 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
15/06/24 11:30:10 INFO mapreduce.Job: Running job: job_local37413710_0001
15/06/24 11:30:10 INFO mapred.LocalJobRunner: OutputCommitter set in config null
15/06/24 11:30:10 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
15/06/24 11:30:10 INFO mapred.LocalJobRunner: Waiting for map tasks
15/06/24 11:30:10 INFO mapred.LocalJobRunner: Starting task: attempt_local37413710_0001_m_000000_0
15/06/24 11:30:10 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
15/06/24 11:30:10 INFO mapred.MapTask: Processing split: hdfs://localhost:54310/user/ubuntu/images1k.hib.dat:0+134272033
15/06/24 11:30:10 INFO mapred.MapTask: (EQUATOR) 0 kvi 52428796(209715184)
15/06/24 11:30:10 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 200
15/06/24 11:30:10 INFO mapred.MapTask: soft limit at 167772160
15/06/24 11:30:10 INFO mapred.MapTask: bufstart = 0; bufvoid = 209715200
15/06/24 11:30:10 INFO mapred.MapTask: kvstart = 52428796; length = 13107200
15/06/24 11:30:10 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer

Record starts at byte 0 and ends at byte 134272032
15/06/24 11:30:11 INFO mapreduce.Job: Job job_local37413710_0001 running in uber mode : false
15/06/24 11:30:11 INFO mapreduce.Job:  map 0% reduce 0%
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000ed78b000, 136794112, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 136794112 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/ubuntu/hs_err_pid5158.log

Shree

Jun 24, 2015, 8:25:04 AM6/24/15
to chenn...@googlegroups.com
@KSR, as I mentioned, I have already tried that with 512, 1024, 2048, 4096, and so on.

