The most useful feedback by far is when people describe a use case or behaviour that does not work well for them. This type of feedback is the most important and I encourage you and others to provide it. The caveat is that the feedback has to describe the use case and problem in enough detail to provide insight. No detail = no insight = no improvement.
> Ebeans assumes everything is lazy loaded ...
This statement is not clear to me. Can you describe it better? At a guess I think you mean Ebean does not honour FetchType.EAGER in annotations? That might be an interesting discussion as to why that brings all manner of death and destruction to scalability :) ... but more detail would be good to explain what you mean here. In terms of eager fetch in annotations, with EbeanORM the expectation is that users specify in their query the desired object graph paths and properties to fetch. I have come across simple apps where there is literally one use case for a given entity bean, and in that case annotation-based FetchType.EAGER would have been nice - Ebean can detect that a query has no detail (no fetch paths or properties) and could apply the annotations as a 'default fetch plan', so certainly I have pondered doing that.
... but I suspect I'm missing your point, as Ebean does very well with its fetch path / fetch properties approach (avoiding the problems that the FetchType annotations introduce) and I'd say eager fetching of complex graphs is something Ebean does better than any other ORM.
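To make the fetch path / fetch properties idea concrete, here is a minimal stdlib-only sketch (this is not the Ebean API itself, and the entity, paths and properties are hypothetical examples): a fetch plan is just a set of object graph paths, each with the properties to fetch on that path, and the query fetches exactly that graph.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class FetchPlanSketch {

  /** Render a fetch plan (path -> comma separated properties) in a readable form. */
  static String describe(String rootBean, Map<String, String> fetchPaths) {
    StringBuilder sb = new StringBuilder("find " + rootBean);
    for (Map.Entry<String, String> e : fetchPaths.entrySet()) {
      sb.append(" fetch ").append(e.getKey())
        .append(" (").append(e.getValue()).append(")");
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    // Hypothetical model: an Order with a customer and order details.
    Map<String, String> plan = new LinkedHashMap<>();
    plan.put("customer", "name");
    plan.put("details", "product, quantity");
    // One declaration of the desired graph; the ORM fetches exactly this, no more.
    System.out.println(describe("Order", plan));
  }
}
```

The point of putting this per-use-case plan on the query rather than in FetchType annotations is that different use cases for the same bean get different plans, instead of one global eager/lazy setting.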
> ... Level to Player and then recursively loading the entire model into memory in thousands of SQL calls. The permutations of SQL calls that can be executed in a large model are incredible and caching them at an L2 cache is not going to be effective in stopping them
Certainly I have experienced bad L2 cache and N+1 query scenarios with other ORMs and NoSQL datastores. Generally speaking though, with EbeanORM 4.x we do very well here: you should get very good cache hits on lazy loading, the model is loaded on demand (so should not be incredibly large), and batch loading is easy to use (and on by default now) to mitigate N+1 queries and make loading complex graphs efficient. Note that my second priority after documentation is providing a performance monitoring dashboard so we can all have good visibility on this for our applications.
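As a rough illustration of what batch loading does to mitigate N+1 (a stdlib-only sketch, not Ebean internals; the table and column names are made up): instead of one SQL call per parent bean, pending parent ids are grouped into batches and one query is issued per batch.

```java
import java.util.ArrayList;
import java.util.List;

public class BatchLoadSketch {

  /** Group pending parent ids into batches and build one IN query per batch,
   *  rather than one query per parent (the N+1 pattern). */
  static List<String> batchQueries(List<Integer> parentIds, int batchSize) {
    List<String> queries = new ArrayList<>();
    for (int i = 0; i < parentIds.size(); i += batchSize) {
      List<Integer> batch =
          parentIds.subList(i, Math.min(i + batchSize, parentIds.size()));
      StringBuilder in = new StringBuilder();
      for (Integer id : batch) {
        if (in.length() > 0) in.append(", ");
        in.append(id);
      }
      queries.add("select * from order_detail where order_id in (" + in + ")");
    }
    return queries;
  }

  public static void main(String[] args) {
    // 5 parents with a batch size of 2 -> 3 queries instead of 5.
    batchQueries(List.of(1, 2, 3, 4, 5), 2).forEach(System.out::println);
  }
}
```

With a larger batch size the query count for a complex graph stays small and roughly proportional to the number of paths, not the number of beans.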
The weaknesses in the current L2 cache are the fast invalidation of L2 query caches and the lack of links between query caches and content caches (JSON, HTML etc). I have a plan for both of those.
In the near future lazy loading could hit an "L3" cache, which is initially going to be ElasticSearch but later could be a data grid solution. That is, the L2 cache can be viewed as a set of behaviours, and those behaviours can be translated into local in-memory cache hits, remote memory cache hits, remote ElasticSearch/data grid query hits, or DB query hits. The predicates used in lazy loading are pretty simple, so translating them into, say, ElasticSearch queries or data grid queries can be straightforward.
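To show why a lazy-load predicate translates so easily, here is a small stdlib-only sketch (my own illustration, not the planned implementation; the property name is hypothetical): a batch lazy load is essentially "property in (ids...)", which maps directly onto an ElasticSearch `terms` query body.

```java
import java.util.List;
import java.util.stream.Collectors;

public class LazyLoadToEs {

  /** Translate a simple lazy-load predicate ("property in (ids...)") into
   *  an ElasticSearch terms query body. A real translation would go through
   *  an ES client rather than building raw JSON strings. */
  static String termsQuery(String property, List<Long> ids) {
    String values = ids.stream()
        .map(String::valueOf)
        .collect(Collectors.joining(","));
    return "{\"query\":{\"terms\":{\"" + property + "\":[" + values + "]}}}";
  }

  public static void main(String[] args) {
    // A batch lazy load of contacts for customers 1, 2 and 3
    // becomes a single terms query against the index.
    System.out.println(termsQuery("customer.id", List.of(1L, 2L, 3L)));
  }
}
```

General query-cache misses are harder precisely because their predicates can be arbitrarily complex, whereas lazy loading is always this simple id-based shape.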
So yes, I'm bullish that EbeanORM can do very well here in terms of building object graphs from L2 and L3 caches.
I'm not suggesting that. Certainly an online gaming application is going to be vastly different to a stackoverflow type website.
What I will say is that 'Hibernate's problems are not Ebean's problems'. LazyInitializationException is one example, but there are many cases where people mistakenly attribute Hibernate issues to ORM in general (or to EbeanORM in particular). In this sense I often find people going down the wrong path based on an assumption that was in turn based on some Hibernate or JPA experience.
Improvements to documentation / the video series should hopefully reduce that.
> ORM had cooperative querying architecture
That might be close to what I'll be adding to EbeanORM as its L3 cache. Specifically, the first implementation will support ElasticSearch. That is, an ORM query or lazy load that misses the local in-memory L2 cache then gets translated into an ElasticSearch query. I built the majority of this feature some time ago, but it was designed to use raw Lucene - I then met ElasticSearch and pulled all that effort out with a view to integrating ElasticSearch instead. Note that this support depends on the specific types of predicates used (and on those predicates translating appropriately). For the lazy loading cases the query predicates translate easily, so there is a big/easy win there - it is more work / harder for the query cache miss cases.
The initial goal is not to provide transactional read consistency guarantees for this but instead 'most recent / best effort' results. It is likely that the ORM query API will also be extended at the same time to allow adding ElasticSearch/text search specific predicates, and for those queries the intention is to never hit the DB - only ElasticSearch.
In terms of Hazelcast or other data grids, it seems reasonable that many ORM queries could translate well into data grid queries (where the predicates map well), but that requires time/investigation. It could also be that you'd want to go the other way, with the data grid driving everything.
This L3 cache might still not fit your online gaming database requirements though.