So I went ahead and implemented a caching strategy in my application.
Turns out there is no significant speed difference between serializing an array of ActiveRecord objects and fetching the serialized attributes from a cache store such as Redis. Of course, this only holds when you do things right and eager load all associated ActiveRecord models before serializing.
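To illustrate what I mean by eager loading, here is a hypothetical sketch (it uses the Facebook-style schema from the example below; `current_user` and `PostSerializer` are assumptions, not code from my app):

```ruby
# Hypothetical example: eager load every association the serializers will
# touch up front, so serialization does not trigger N+1 queries.
posts = Post.includes(:user, comments: :user).where(user_id: current_user.id)
serialized = posts.map { |post| PostSerializer.new(post).serializable_hash }
```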
What is actually more interesting is the huge speed-up I measured when using the cache to figure out which records to load from the database.
I understand that the following example is only applicable to my use case, and that is probably the reason why it is so hard to integrate a good caching solution into AMS.
Let's take the Facebook example, where every user has their own timeline with a list of posts, and every post has one or more comments. We are trying to speed up the Posts#index action.
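Concretely, I am assuming a minimal schema along these lines:

```ruby
# Assumed minimal models for the example.
class User < ActiveRecord::Base
  has_many :posts
  has_many :comments
end

class Post < ActiveRecord::Base
  belongs_to :user
  has_many :comments
end

class Comment < ActiveRecord::Base
  belongs_to :user
  belongs_to :post
end
```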
Instead of wrapping each call to AM::Serializer#serializable_hash in a Rails.cache.fetch {} block, this approach works better (sketched in code after the list):
1) One database query to load the posts for the current user in the correct order, selecting only the attributes needed to compute the cache keys (id, updated_at). This is fast.
2) Do a cache.read_multi with the cache keys for all posts. This is one roundtrip to Redis or Memcached.
3) Find out which posts have a cache miss. Query Post.where(id: missing_ids) and join with all related tables (comments, users, etc.). Forget ordering; ordering is expensive.
4) Serialize these ActiveRecord objects and store them in the cache.
5) Use the result of the original query to return the serialized objects in the correct order.
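Here is a minimal sketch of those five steps, under some assumptions: a Redis- or Memcached-backed Rails.cache, an AMS PostSerializer, and a cache key format and helper name (`timeline_for`) that are my own inventions:

```ruby
# Sketch of the five steps above; not a drop-in implementation.
def timeline_for(current_user)
  # 1) One cheap query: only the columns needed to build cache keys,
  #    in the order we want to return the posts.
  keys = Post.where(user_id: current_user.id)
             .order(created_at: :desc)
             .pluck(:id, :updated_at)
             .map { |id, updated_at| "posts/#{id}-#{updated_at.to_i}" }

  # 2) One roundtrip to the cache store for all posts at once.
  cached = Rails.cache.read_multi(*keys)

  # 3) Work out which posts missed the cache and load them in a single
  #    query with associations eager loaded; no ORDER BY needed here.
  missing_ids = (keys - cached.keys).map { |key| key[%r{posts/(\d+)-}, 1].to_i }
  Post.includes(:user, comments: :user).where(id: missing_ids).each do |post|
    # 4) Serialize each cache miss and write it back to the cache.
    key = "posts/#{post.id}-#{post.updated_at.to_i}"
    cached[key] = PostSerializer.new(post).serializable_hash
    Rails.cache.write(key, cached[key])
  end

  # 5) The key list from step 1 preserves the original order.
  keys.map { |key| cached[key] }
end
```

The design point is that on a warm cache the per-post work collapses into one cheap database query plus one read_multi roundtrip, and the expensive eager-loaded query only runs for the cache misses.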
Please let me know what you all think of this approach.
- Matthijs