Index out of bounds exception while generating partitioned blocks


Santhosh Swaminathan

Jul 6, 2015, 11:01:03 AM
to cubert...@googlegroups.com


In a reducer, I am getting the following error:

java.lang.ArrayIndexOutOfBoundsException: -1024
    at java.util.ArrayList.elementData(ArrayList.java:400)
    at java.util.ArrayList.get(ArrayList.java:413)
    at com.linkedin.cubert.memory.PagedByteArray.write(PagedByteArray.java:123)
    at com.linkedin.cubert.memory.PagedByteArrayOutputStream.write(PagedByteArrayOutputStream.java:54)
    at java.io.DataOutputStream.writeByte(DataOutputStream.java:153)
    at org.apache.pig.data.utils.SedesHelper.writeChararray(SedesHelper.java:65)
    at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:543)
    at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:435)
    at org.apache.pig.data.utils.SedesHelper.writeGenericTuple(SedesHelper.java:135)
    at org.apache.pig.data.BinInterSedes.writeTuple(BinInterSedes.java:613)
    at org.apache.pig.data.BinInterSedes.writeDatum(BinInterSedes.java:443)
    at org.apache.pig.data.BinSedesTuple.write(BinSedesTuple.java:41)
    at com.linkedin.cubert.io.DefaultTupleSerializer.serialize(DefaultTupleSerializer.java:41)
    at com.linkedin.cubert.io.DefaultTupleSerializer.serialize(DefaultTupleSerializer.java:28)
    at com.linkedin.cubert.utils.SerializedTupleStore.addToStore(SerializedTupleStore.java:118)
    at com.linkedin.cubert.block.CreateBlockOperator$StoredBlock.<init>(CreateBlockOperator.java:145)
    at com.linkedin.cubert.block.CreateBlockOperator.createBlock(CreateBlockOperator.java:532)
    at com.linkedin.cubert.block.CreateBlockOperator.next(CreateBlockOperator.java:488)
    at com.linkedin.cubert.plan.physical.PhaseExecutor.prepareOperatorChain(PhaseExecutor.java:261)
    at com.linkedin.cubert.plan.physical.PhaseExecutor.<init>(PhaseExecutor.java:111)
    at com.linkedin.cubert.plan.physical.CubertReducer.run(CubertReducer.java:68)
    at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:621)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:459)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:282)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1151)
    at org.apache.hadoop.mapred.Child.main(Child.java:271)
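
A note on how the index can go negative at all: if a store tracks its write position in a signed int, anything written past Integer.MAX_VALUE wraps the offset negative, and the page index derived from it goes negative too, which ArrayList.get() then rejects. Below is a minimal, purely illustrative sketch of that failure mode; the class, the 1024-byte page size, and the offset arithmetic are my assumptions, not Cubert's actual PagedByteArray code:

import java.util.ArrayList;
import java.util.List;

// Illustrative only -- not Cubert's actual PagedByteArray implementation.
// Shows how a signed int write offset that wraps past Integer.MAX_VALUE
// yields a negative page index, which ArrayList.get() then rejects with the
// same ArrayIndexOutOfBoundsException seen in the trace above.
public class PagedBufferOverflowSketch {
    private static final int PAGE_SIZE = 1024;        // assumed page size
    private final List<byte[]> pages = new ArrayList<byte[]>();
    private int writeOffset = 0;                       // signed int: wraps after ~2 GB

    public void write(int b) {
        int pageIndex = writeOffset / PAGE_SIZE;       // negative once writeOffset wraps
        int pageOffset = writeOffset % PAGE_SIZE;
        while (pages.size() <= pageIndex) {            // never grows when pageIndex < 0
            pages.add(new byte[PAGE_SIZE]);
        }
        pages.get(pageIndex)[pageOffset] = (byte) b;   // AIOOBE on the negative index
        writeOffset++;
    }

    public static void main(String[] args) {
        PagedBufferOverflowSketch buf = new PagedBufferOverflowSketch();
        buf.writeOffset = Integer.MIN_VALUE + 1024;    // simulate a wrapped offset
        buf.write(0x7f);                               // throws ArrayIndexOutOfBoundsException
    }
}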


JOB "Panel Fact BLOCKGEN"
        REDUCERS 500;
        MAP {
                input = LOAD "path/to/input" USING TEXT("schema": "...");
        }
 
        BLOCKGEN input  BY ROW 10000 PARTITIONED ON key1 SORTED ON key2;

        // ALWAYS store BLOCKGEN data using RUBIX file format!
        STORE input INTO "path/to/output" USING RUBIX("overwrite": "true");
END