Big Memory Max cache limitations


Tim Mixell

May 9, 2018, 5:46:01 PM5/9/18
to terracotta-oss

I have struggled to find hard numbers on the limits TC/BMM might have on the overall number of caches it supports. The product splash page cites "100s of caches," but is there a definitive point at which a CacheManager (or several CacheManagers) starts to get... upset?

I would like to take advantage of the Query interface that BMM offers. I can go one of two ways: 

1) Aggregate our client data into a handful of caches (grouped by the entities we're caching), and store client-identifying information with each element as a means to filter during search operations. This would mean fewer caches, but invalidating specific elements by client becomes more cumbersome. Going this route, the biggest cache would hold approximately 1MM elements.

2) Isolate client data per cache (still split by the entities we're caching). This would mean nearly 1000 caches out of the gate, but none would exceed 50k elements.
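To make the trade-off concrete, here is a minimal plain-Java sketch of the two layouts (no Terracotta/BMM dependency; the `Entry` type, `clientId` field, and key format are illustrative assumptions, and the stream filter stands in for what BMM's Search API would do against an indexed search attribute):

```java
import java.util.*;
import java.util.stream.Collectors;

public class CacheLayoutSketch {
    // Hypothetical cached element: the clientId travels with the value so an
    // aggregated cache can be filtered by client, as in option 1.
    public record Entry(String clientId, String entityId, String payload) {}

    // Option 1 lookup: filter the aggregated cache by client. Here it is a
    // full scan; BMM would instead query an indexed search attribute.
    public static List<Entry> forClient(Collection<Entry> cache, String clientId) {
        return cache.stream()
                .filter(e -> e.clientId().equals(clientId))
                .collect(Collectors.toList());
    }

    // Option 1 invalidation: dropping one client means visiting every entry.
    public static void invalidateClient(Map<String, Entry> cache, String clientId) {
        cache.values().removeIf(e -> e.clientId().equals(clientId));
    }

    public static void main(String[] args) {
        // Option 1: one aggregated cache keyed by "clientId:entityId".
        Map<String, Entry> aggregated = new HashMap<>();
        aggregated.put("c1:e1", new Entry("c1", "e1", "a"));
        aggregated.put("c1:e2", new Entry("c1", "e2", "b"));
        aggregated.put("c2:e1", new Entry("c2", "e1", "c"));

        System.out.println(forClient(aggregated.values(), "c1").size()); // 2
        invalidateClient(aggregated, "c1");
        System.out.println(aggregated.size());                           // 1

        // Option 2: one map (cache) per client; whole-client invalidation
        // is a single remove instead of a scan.
        Map<String, Map<String, Entry>> perClient = new HashMap<>();
        perClient.put("c1", new HashMap<>(Map.of("e1", new Entry("c1", "e1", "a"))));
        perClient.put("c2", new HashMap<>(Map.of("e1", new Entry("c2", "e1", "c"))));
        perClient.remove("c1");
        System.out.println(perClient.size());                            // 1
    }
}
```

The sketch shows why option 1 makes per-client invalidation cumbersome (a scan or an indexed search-and-remove over ~1MM elements) while option 2 reduces it to removing one cache, at the cost of roughly 1000 caches to manage.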

I've been able to successfully demo both approaches without any real issue in my development environment (using a copy of production data).

Anecdotally, there are benefits and drawbacks to each approach, but I was hoping that someone with greater knowledge of TC/BMM could offer some technical insight into which route is preferable.

Thanks in advance, and sorry if this isn't the right forum :-\