Zeno performance with large data sets


Jon Stockdill

Mar 16, 2015, 1:49:17 PM
to netfli...@googlegroups.com
Is there a maximum data set size that Zeno can manage effectively?  Is the bottleneck JVM memory, or does the state engine itself run into performance issues as the data set grows large?

--jon

Drew Koszewnik

Mar 16, 2015, 5:55:09 PM
to netfli...@googlegroups.com
Hi Jon,

There are some hard limits (based on the way the ByteArrayOrdinalMap works).  

Any given type can have no more than 134,217,727 ordinals (2^27 - 1), and should not require more than 68,719,476,735 bytes (2^36 - 1) to represent in compressed form.
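
For reference, here is the arithmetic behind those two numbers as a minimal Java sketch; the constant names are illustrative, not Zeno's actual fields:

    public class ZenoLimits {
        // 2^27 - 1: maximum number of ordinals per type
        static final int MAX_ORDINALS_PER_TYPE = (1 << 27) - 1;   // 134,217,727

        // 2^36 - 1: maximum compressed size, in bytes, of all records of a type
        static final long MAX_BYTES_PER_TYPE = (1L << 36) - 1;    // 68,719,476,735 (~64 GiB)

        public static void main(String[] args) {
            System.out.println("max ordinals per type: " + MAX_ORDINALS_PER_TYPE);
            System.out.println("max bytes per type:    " + MAX_BYTES_PER_TYPE);
        }
    }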

Aside from those hard limits, performance doesn't degrade disproportionately as your data set gets larger: a data producer cycle grows linearly with the size of the data set.  On the consumer side, applying a delta on a client is linear in the number of changes in the state transition, while a "double snapshot" grows linearly with the size of the data set.  Heap size is likely to be your limiting factor.
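
To make the terminology concrete, here is a rough sketch of the producer and consumer cycles being discussed, based on Zeno's documented FastBlob usage; exact method signatures may vary by version, and the state engine, type name, and streams are placeholders you would supply yourself:

    import java.io.*;
    import com.netflix.zeno.fastblob.FastBlobStateEngine;
    import com.netflix.zeno.fastblob.io.FastBlobReader;
    import com.netflix.zeno.fastblob.io.FastBlobWriter;

    public class BlobCycles {

        // Producer cycle: every record is re-added each cycle, so the cost
        // grows linearly with the size of the full data set.
        static void producerCycle(FastBlobStateEngine stateEngine,
                                  Iterable<Object> allRecords,
                                  OutputStream snapshotOut,
                                  OutputStream deltaOut) throws IOException {
            for (Object record : allRecords)
                stateEngine.add("MyType", record);   // "MyType" is a placeholder type name

            stateEngine.prepareForWrite();
            FastBlobWriter writer = new FastBlobWriter(stateEngine);
            writer.writeSnapshot(new DataOutputStream(snapshotOut));
            writer.writeDelta(new DataOutputStream(deltaOut));
        }

        // Consumer delta: cost is linear in the number of changes in the
        // state transition, not in the size of the data set.
        static void consumeDelta(FastBlobStateEngine stateEngine,
                                 InputStream deltaIn) throws IOException {
            new FastBlobReader(stateEngine).readDelta(new DataInputStream(deltaIn));
        }

        // "Double snapshot": the entire data set is re-read, so cost (and the
        // transient heap needed to hold two copies) is linear in data set size.
        static void consumeSnapshot(FastBlobStateEngine stateEngine,
                                    InputStream snapshotIn) throws IOException {
            new FastBlobReader(stateEngine).readSnapshot(new DataInputStream(snapshotIn));
        }
    }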

Out of curiosity, what size did you have in mind?

Thanks,
Drew.

Jon Stockdill

Mar 16, 2015, 9:12:07 PM
to netfli...@googlegroups.com
Thanks for the clear answer.  My instinct was that heap size would be the limiting factor, but it's helpful (and cool) to learn about Zeno's internals.

We are currently using Zeno for about 600 MB of data and it is performing really well.  We are expecting about 1-1.5 GB of VOD data.

It seems like we are well within those bounds.

Thanks again,

--jon