Pelops: iterateColumnsFromRows


SH

Jan 10, 2012, 3:59:47 PM1/10/12
to Scale 7 - Libraries and systems for scalable computing
I'm trying to understand the expected behavior of the iterator
generated by iterateColumnsFromRows. Am I correct in assuming that
when I pass it a batch size, the batch size does not restrict the size
of the result set, but rather determines how many keys Cassandra loads
into memory at a time?

For instance, executing this block of code
    selector.iterateColumnsFromRows("test", key, 100, ConsistencyLevel.ONE);
returns all the row keys in the column family instead of just returning 100.

Thanks!

SH

Dominic Williams

Jan 10, 2012, 4:11:44 PM1/10/12
to sca...@googlegroups.com
Hi, I haven't used this personally, but looking at the code, the batch size is the number of rows (which contain the requested columns) read on each iteration.

The columns that will be read for each row are specified by the slice parameter.

Hope this helps

Dan Washusen

Jan 10, 2012, 9:29:40 PM1/10/12
to sca...@googlegroups.com
That's correct. As per the javadoc: "The maximum number of columns that can be retrieved per invocation to getColumnsFromRows(String, List, boolean, ConsistencyLevel) and dictates the number of rows to be held in memory at any one time".

I'd suggest a value much larger than 100 (unless you have reallllly wide rows)… 

-- 
Dan Washusen
Make big files fly
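To illustrate the behavior discussed above, here is a hypothetical, self-contained Java sketch (it does not use the Pelops API; `BatchIterator` and the in-memory `store` are stand-ins): the batch size bounds how many rows are fetched and held in memory per round-trip, while iteration still yields every row.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;

// Sketch of a paging iterator: fetches at most `batchSize` rows per
// "round-trip", but iteration still covers the whole store.
public class BatchIterator implements Iterator<String> {
    private final List<String> store;     // stand-in for the column family
    private final int batchSize;
    private int nextFetch = 0;            // index of the next row to load
    private List<String> page = new ArrayList<>();
    private int pos = 0;                  // position within the current page

    public BatchIterator(List<String> store, int batchSize) {
        this.store = store;
        this.batchSize = batchSize;
    }

    private void fetchIfNeeded() {
        if (pos < page.size() || nextFetch >= store.size()) return;
        // One "round-trip": load at most batchSize rows into memory.
        int end = Math.min(nextFetch + batchSize, store.size());
        page = new ArrayList<>(store.subList(nextFetch, end));
        nextFetch = end;
        pos = 0;
    }

    @Override
    public boolean hasNext() {
        fetchIfNeeded();
        return pos < page.size();
    }

    @Override
    public String next() {
        if (!hasNext()) throw new NoSuchElementException();
        return page.get(pos++);
    }

    public static void main(String[] args) {
        List<String> rows = new ArrayList<>();
        for (int i = 0; i < 250; i++) rows.add("key" + i);

        int count = 0;
        for (Iterator<String> it = new BatchIterator(rows, 100); it.hasNext(); it.next()) {
            count++;
        }
        // All 250 rows come back even with a batch size of 100: the batch
        // size only bounds how many rows sit in memory at once.
        System.out.println(count);
    }
}
```

So with a batch size of 100 against 250 rows, the iterator makes three fetches of 100, 100, and 50 rows, and the caller sees all 250.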