> persistence. Teams, working on different plugins, can generate events
> and publish on RabbitMQ queue and their job is done.
But... this is the general idea regardless of the persistence backend,
is it not? Producers talking directly to the backend has never been on
the table from my point of view.
> b. If the event is not consumed, it will not be lost and will
> return to the queue until it is consumed properly. There are a few
> specific cases I need to control, such as those mentioned
> in [2]http://tebros.com/2011/07/data-consistency-with-asynchronous-queues-and-mongodb/
> Please share if you have any other specific persistence tool in
> mind that could serve the purpose for Eiffel's events.
On Tuesday, February 20, 2018 at 05:43 CET,
"Azeem Ahmad (PhD candidate, LiU)" <aze...@gmail.com> wrote:
> As Ola Leifler mentioned earlier, many groups from different
> universities and industry, in collaboration with Linkoping University,
> are working during this sprint on different plugins for Eiffel
> event generation (i.e. JIRA, BitBucket or Gerrit). In the end, the
> events generated through these different plugins will be integrated
> with the Eiffel-vici tool for further evaluation. My job in this sprint
> is to provide a solution for Eiffel persistence and integration with
> the Eiffel-vici tool. I have explored Redis, MongoDB,
> RabbitMQ and Neo4j as candidate persistence solutions, and my findings
> are as below:
> 1. (Not Selected) Neo4j will create overhead due to its specific data
> storage schema, and this overhead increases when it comes to
> integrating Eiffel-vici with Neo4j. Neo4j, as an independent solution
> for viewing events in a graph, can be acceptable.
Could you elaborate on this overhead? I haven't looked into Neo4j in
detail but find its graph-based data model well suited for storing
Eiffel events given the kind of queries one would want to make.
Start date is "as soon as possible". Presumably this month. I'll send him your way on day one, promise :)
Perhaps I misinterpreted your original statement. What you're saying is that your database will only acknowledge (and therefore remove from the queue) messages once stored (and presumably you envision a durable queue), not that messages will be resent unless consumed by some other listener? If so, that makes a lot more sense, with the caveat that this is the default behavior you would expect from any consumer of event messages... which I guess is what threw me when I first read your initial post. Sorry for the mixup.
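To make sure we mean the same thing, here is a minimal sketch of that acknowledge-after-persist pattern. The `DurableQueue` class and `persist` function are illustrative stand-ins, not RabbitMQ API; with a real broker the same shape would be a manual-ack consumer on a durable queue.

```python
import json
from collections import deque

class DurableQueue:
    """Toy stand-in for a durable broker queue: a message stays queued
    until the consumer explicitly acknowledges it."""
    def __init__(self):
        self._pending = deque()

    def publish(self, message):
        self._pending.append(message)

    def consume_one(self):
        # Peek without removing; the message is only gone once acked.
        return self._pending[0] if self._pending else None

    def ack(self, message):
        if self._pending and self._pending[0] == message:
            self._pending.popleft()

def persist(store, event_json):
    """Hypothetical persistence step; may raise on failure."""
    event = json.loads(event_json)
    store[event["meta"]["id"]] = event

store = {}
queue = DurableQueue()
queue.publish('{"meta": {"id": "abc", "type": "EiffelArtifactCreatedEvent"}}')

msg = queue.consume_one()
try:
    persist(store, msg)
    queue.ack(msg)   # acknowledged only once safely stored
except Exception:
    pass             # not acked: the message stays on the queue for redelivery
```

If `persist` raises, the message is never acknowledged and remains available, which is, as far as I understand it, exactly the behavior you described.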
Regarding the interface of the persistence solution, Vici makes certain assumptions regarding that interface. Will you be following those assumptions?
Essentially, the basic functionality that's needed from an event database is to fetch events matching a filter, and any events linked from or linked to using specified link types. For instance:
<uri>/_search?q=meta.type:EiffelArtifactCreatedEvent
This would return all EiffelArtifactCreatedEvents. Then particular convenience end-points, like id would simply be special cases of this:
<uri>/id/abcdef...
would be short-hand for:
<uri>/_search?q=meta.id:abcdef...
but would still return an array of matches.
With this setup, you can easily tack on upstream and downstream searches to the result via query parameters. First you fetch any matches, then you add anything along the specified upstream and downstream links to the result. To exemplify:
<uri>/_search?q=meta.type:EiffelArtifactCreatedEvent&dlTypes[]=IUT&ulTypes[]=ARTIFACT&ulTypes[]=ELEMENT
This would first fetch all EiffelArtifactCreatedEvents. Let's say this produces an array of two events:
[
EiffelArtifactCreatedEvent1,
EiffelArtifactCreatedEvent2
]
Then everything found along downstream IUT links would be added. Let's say one test case execution is found:
[
EiffelArtifactCreatedEvent1,
EiffelArtifactCreatedEvent2,
EiffelTestCaseTriggeredEvent1
]
... and so on and so forth for the specified ulTypes. Does that make any sense?
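The expansion above could be sketched roughly like this. The in-memory event set and the link direction convention (a downstream event carries a typed link pointing back at the match) are my own illustrative assumptions, not Vici's actual implementation:

```python
# Toy event set: each event carries its type, id, and typed links to other events.
events = {
    "a1": {"meta": {"id": "a1", "type": "EiffelArtifactCreatedEvent"}, "links": []},
    "a2": {"meta": {"id": "a2", "type": "EiffelArtifactCreatedEvent"}, "links": []},
    "t1": {"meta": {"id": "t1", "type": "EiffelTestCaseTriggeredEvent"},
           "links": [{"type": "IUT", "target": "a1"}]},
}

def search(events, event_type, dl_types=(), ul_types=()):
    """Fetch events matching the type filter, then add anything found
    along the specified downstream/upstream link types."""
    matches = [e for e in events.values() if e["meta"]["type"] == event_type]
    result = list(matches)
    match_ids = {e["meta"]["id"] for e in matches}
    # Downstream: events that link *to* a match via one of dl_types.
    for e in events.values():
        if any(l["type"] in dl_types and l["target"] in match_ids
               for l in e["links"]):
            if e["meta"]["id"] not in {r["meta"]["id"] for r in result}:
                result.append(e)
    # Upstream: events a match links *to* via one of ul_types.
    for e in matches:
        for l in e["links"]:
            if l["type"] in ul_types:
                target = events[l["target"]]
                if target["meta"]["id"] not in {r["meta"]["id"] for r in result}:
                    result.append(target)
    return result

found = search(events, "EiffelArtifactCreatedEvent", dl_types=("IUT",))
# found holds both artifact events plus the test case triggered event
```

A real implementation would of course follow links transitively and against the database rather than an in-memory dict, but the result shape matches the arrays in the example above.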
Perhaps we should have interfaces projects where we get together to define common interfaces of particular types of actors. We can have any number of implementations of persistence, potentially, but it would be a good idea to have a shared understanding of how to communicate with them.
Best regards,
Daniel
> My plan is to use the Eiffel-vici tool for event/graph visualization.
> In order to store Eiffel events in Neo4j, we need to convert each event
> into the specific graph model supported by Neo4j.
Yes, that needs to be done. I was hoping one could simply map all fields
from the Eiffel payload to fields/attributes/whatever-they-are-called in
the Neo4j nodes.
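One complication with that direct mapping is that Neo4j node properties must be primitives (or arrays of primitives), not nested objects, so a nested Eiffel payload would need flattening first. A sketch, where the dotted key names are my own convention and the event content is made up:

```python
def flatten(payload, prefix=""):
    """Flatten a nested JSON-like dict into dotted keys, since Neo4j
    node properties cannot themselves be maps."""
    flat = {}
    for key, value in payload.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))
        else:
            flat[name] = value
    return flat

event = {"meta": {"id": "abc", "type": "EiffelArtifactCreatedEvent",
                  "time": 1519000000},
         "data": {"identity": "pkg:maven/com.example/app@1.0"}}
props = flatten(event)
# props now has primitive values under keys like "meta.id" and "data.identity"
```

With the official Neo4j driver, something like `session.run("CREATE (e:Event) SET e = $props", props=props)` could then write the node, though I haven't verified how well dotted property names play with Cypher queries in practice.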
> After saving these events, I need to convert them back to JSON to
> visualize them in the Eiffel-vici tool.
Can't you just store the whole JSON blob to avoid conversion back and
forth?
> (in this particular case, Neo4j, as persistence only, is not a good
> option as long as I do not use its capabilities for querying and
> traversing graphs for visualization).
Sure, the graph query ability is the whole point of using Neo4j. For
plain JSON blob storage there are better options.
> Currently, there are also some problems in Neo4j with respect to
> processing/storing bulk JSON data.
Could you elaborate?
> After my last discussion with Ola Leifler, I am also considering Neo4j
> for persistence and visualization, assuming a person must have the
> technical capability to run queries in Neo4j if I do not find a better
> front-end. What do you suggest?
I view Neo4j as a storage backend only. If its built-in frontend can be
used for end-user queries or visualizations that's great but it's not my
expectation.
I am aware of some assumptions made by Vici, but can you please refer me to some reading to enhance my understanding of these assumptions?
Yes, this makes a lot of sense now, and I can see that Neo4j provides this type of event fetching in an efficient way. But I was wondering: do we assume that this event fetching (particularly in the case of Neo4j) requires some technical, language-specific capability to fetch and view events, or, as you mentioned, do we need to define interfaces for particular types of actor? What do you think? Which actor should we address first, or do you think we can create some sort of actor classification? Meanwhile, we can discuss actors and interfaces. Do you think we can generalize actors and interfaces?