Hi,
I'm using MongoDB as a store for a set of help HTML documents
that are served by a CherryPy-based REST server using pymongo.
Since the database is relatively small (~50 documents) and changes seldom
(only when someone edits a document through my web server frontend, which runs
on the same machine as the REST server), I wonder whether I could improve
performance by caching the documents. I see three options:
a) I could cache locally (in the Python process) and listen for an
invalidation trigger sent by my web server frontend (e.g. through a UNIX
socket, a signal, or some other means).
b) I could cache via memcached and have my web server frontend invalidate
entries by removing or updating the affected documents whenever something changes.
c) I could do nothing and rely on the working set being automatically
cached in memory by MongoDB (is it?).
What do you think would be a good strategy?
Thanks,
Mickey.