I have a project where I need to integrate data in real time and keep the results in memory. The aim is to provide a place where fresh, integrated data can be retrieved and presented to end users quickly.
I looked at a data virtualization approach, but from what I've read it's more about providing a unified abstraction over a variety of datastores rather than a place to hold computed results (correct me if I'm wrong).
So I'm considering either Hazelcast or Spark as a place to park post-processed data for serving. Before choosing, I'm trying to understand the particular problems each one is designed to solve and which would be better suited to my project.
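To be concrete about what I mean by "parking" data: the access pattern I have in mind is basically a shared key-value cache that the integration pipeline writes into and the user-facing layer reads from. A minimal sketch using a plain ConcurrentMap (Hazelcast's IMap exposes the same get/put shape, just distributed across a cluster; all names here are made up for illustration):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ResultCacheSketch {
    // In Hazelcast this map would come from hazelcastInstance.getMap("results")
    static final Map<String, String> results = new ConcurrentHashMap<>();

    // Pipeline side: publish a freshly integrated record under its key
    static void onIntegrated(String key, String record) {
        results.put(key, record);
    }

    // Serving side: low-latency read for the end user
    static String serve(String key) {
        return results.get(key);
    }

    public static void main(String[] args) {
        onIntegrated("order-42", "{\"status\":\"shipped\"}");
        System.out.println(serve("order-42"));
    }
}
```

My question is essentially whether this read/write-a-live-cache pattern is what Hazelcast targets, while Spark is more about the batch/stream computation that would produce those records in the first place.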
Any insight, opinion, or experience is appreciated.
Sol