You received this message because you are subscribed to the Google Groups "rhipe" group.
Hi Saptarshi,
Thanks for the info. I then found rhipe_reduce_bytes_read in
https://github.com/saptarshiguha/RHIPE/blob/master/src/main/C/reducer.cc
but with no default. So I am assuming the length of reduce.values is fully controlled by
rhipe_reduce_buff_size, whose default is 100. Am I right?
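For reference, a minimal sketch of how these tuning parameters could be passed to a RHIPE job through rhwatch's mapred list. The property names, defaults, and paths here are assumptions drawn from this thread, not verified against reducer.cc, and the job itself needs a working RHIPE + Hadoop installation:

```r
library(Rhipe)   # assumes a working RHIPE + Hadoop installation
rhinit()

# Hypothetical job: count values per key. The reduce expression is called
# repeatedly; per the discussion above, reduce.values should hold at most
# rhipe_reduce_buff_size elements on each call.
z <- rhwatch(
  map    = expression({
    rhcollect(map.keys[[1]], map.values[[1]])
  }),
  reduce = expression(
    pre    = { total <- 0 },
    reduce = { total <- total + length(reduce.values) },
    post   = { rhcollect(reduce.key, total) }
  ),
  input  = "/tmp/input",    # hypothetical HDFS paths
  output = "/tmp/output",
  mapred = list(
    # assumed property names and units, taken from this thread:
    rhipe_reduce_buff_size  = 100,         # max length of reduce.values per call
    rhipe_reduce_bytes_read = 128 * 2^20   # byte cap per read batch
  )
)
```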
Thanks
Xiaosu
Thanks. So rhipe_map_bytes_read / rhipe_reduce_bytes_read actually controls how much
input data is loaded into memory at one time, right?
Thanks
Xiaosu
Also, I noticed from experiment that rhipe_map_bytes_read should be set smaller than the block size (or, equivalently, the block size should be set
larger than rhipe_map_bytes_read). For example, if the block size is 128M, rhipe_map_bytes_read is 150M, and each key-value pair is 1M, then a
mapper handles more data than one block, so some records have to be copied over to the mapper from another block, which is bad.
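The arithmetic behind that example can be sketched as follows, using the sizes assumed above (128M block, 150M read cap, 1M records):

```r
block_size  <- 128 * 2^20   # HDFS block size: 128M
bytes_read  <- 150 * 2^20   # rhipe_map_bytes_read: 150M
record_size <-   1 * 2^20   # one key-value pair: 1M

records_per_read  <- bytes_read %/% record_size   # 150 records wanted per read
records_per_block <- block_size %/% record_size   # 128 records stored locally
remote_records    <- records_per_read - records_per_block
remote_records                                    # 22 records fetched from another block
```

So with these numbers roughly 22 of every 150 records would come from outside the mapper's local block, which is the copying cost described above.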
Did I understand this correctly?
Thanks
Xiaosu