It really depends on your availability/fault tolerance/scaling requirements and on how complex your streams and processing/aggregations are. Storm has a memory-efficient algorithm for tracking completion of complex tuple trees. But that algorithm isn't well suited to aggregations, which is why Trident was built on top of it, and Trident tends to lean heavily on external systems like Cassandra and ZooKeeper to keep streams consistent, which often means more work for the user. So, IMO, Storm isn't great for anything even remotely math heavy - you should really look at Spark or Flink for that.
If your streams and the processing within them are not too complex, you can get away with Vert.x for its simplicity and flexibility. You can track message completion with request-reply, or you can implement Storm's acking algorithm fairly easily: each message gets a random 64-bit ID, every ID is XORed into a per-tree tracker once on emit and once on ack, and since x XOR x = 0, the tracker returns to 0 exactly when every related message has been acked.
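A minimal sketch of that XOR trick in plain Java (this is my own illustration, not Storm's actual acker API; the class and method names are made up):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Hypothetical XOR-based completion tracker in the style of Storm's acker.
// Every tuple ID is XORed into the tracker twice: once when the tuple is
// emitted and once when it is acked. Because x ^ x == 0, the running value
// returns to 0 exactly when all emitted tuples have been acked, in any order.
public class AckTracker {
    private final Map<String, Long> pending = new HashMap<>();
    private final Random rng = new Random();

    // Called when a new tuple is emitted into the tree rooted at rootId.
    // Returns the random 64-bit ID assigned to the tuple.
    public long emit(String rootId) {
        long tupleId = rng.nextLong();
        pending.merge(rootId, tupleId, (a, b) -> a ^ b);
        return tupleId;
    }

    // Called when a tuple is acked; returns true once the whole tree
    // rooted at rootId is complete (tracker back to zero).
    public boolean ack(String rootId, long tupleId) {
        long v = pending.merge(rootId, tupleId, (a, b) -> a ^ b);
        if (v == 0L) {
            pending.remove(rootId);
            return true;
        }
        return false;
    }
}
```

The nice property is that the tracker needs only 64 bits per tree no matter how many messages fan out, at the cost of a vanishingly small chance of a false "complete" if random IDs happen to cancel out.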
But considering you used the word "analytics," I would strongly discourage going outside the well-established open source Apache projects like Spark, Storm, Samza, or Flink. Stream aggregations are a delicate process if you expect to run on unreliable infrastructure (which, presumably, you do) and still get accurate results. Flink in particular seems to be the popular topic of conversation these days, but I can't really vouch for it as my experience is with Spark and Storm.