I'm still coming up to speed on Vert.x myself, but for something
standard like caching, encapsulate it behind your own provider
interface so you can run a "bake-off" between implementations. Guava's
cache plumbing sounds like it hits most of your requirements though [0].
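As a rough sketch of what I mean by a provider (names here are hypothetical, not from any Vert.x API): define one small interface, write the simplest backend first as a baseline, then add a Guava `CacheBuilder`-backed implementation behind the same interface and compare.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical provider interface: application code depends only on this,
// so backends (plain map, Guava Cache, etc.) can be swapped for a bake-off.
interface CacheProvider<K, V> {
    // Return the cached value, computing it with the loader on a miss.
    V get(K key, Function<K, V> loader);
    void invalidate(K key);
}

// Simplest possible backend, useful as the baseline measurement.
class MapCacheProvider<K, V> implements CacheProvider<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<>();

    @Override
    public V get(K key, Function<K, V> loader) {
        return map.computeIfAbsent(key, loader);
    }

    @Override
    public void invalidate(K key) {
        map.remove(key);
    }
}
```

A Guava-backed implementation would wrap the interface around a cache built via `CacheBuilder` (which adds eviction, expiry, and stats), and the rest of the app wouldn't change.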
However, I think you should strike the sendfile requirement off your
feature list. It is orthogonal to cache (de)serialization:
deserialization is a price you pay once when loading from disk, and it
has nothing to do with writing the cached item to the response
thereafter.
As for the event bus question, do it in the initial event loop until
you have an architectural reason to do otherwise (or a deeper
understanding of why not - perhaps I soon will as well). From a quick
reading of the code, the event bus is basically a ByteBuffer slapped
onto an NIO socket. There isn't much happening there, so I don't see
how it could be "slow", short of mismatched config settings across the
processes hosting the different verticles. Of course you add a hop and
thus some latency, but that is the tradeoff for decoupling IME.
[0]
http://code.google.com/p/guava-libraries/wiki/CachesExplained