For a while I've been telling people to look at Datomic Free because it provides such a nice way to structure and query your data, even if you don't use the full Datomic solution.
It looks like this library will give us the same thing on the ClojureScript side, and that's pretty cool. I can't wait to try it out.
I'm really enjoying Om for client side work but totally love the idea of being able to do database-like queries over the application state as you describe.
--
Note that posts from new members are moderated - please be patient with your first post.
---
You received this message because you are subscribed to the Google Groups "ClojureScript" group.
To unsubscribe from this group and stop receiving emails from it, send an email to clojurescrip...@googlegroups.com.
To post to this group, send email to clojur...@googlegroups.com.
Visit this group at http://groups.google.com/group/clojurescript.
A little wrapper might do the trick. What if you mediate all component queries through the database and cache the list of returned entities? Any time one of those entities changes, the component should be re-rendered (you could even implement a more advanced incremental update of the query result instead of triggering a new query). Any time a new entity is added to the database, check whether it would be included in the result of any component query, and mark those components as dirty.
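A rough sketch of that wrapper, assuming DataScript's `listen!` API; the registry atom, `register-query!`, and `mark-dirty!` are hypothetical names, and re-running each query to detect newly matching entities is the naive approach:

```clojure
(ns example.wrapper
  (:require [datascript.core :as d]
            [clojure.set :as cset]))

(defonce queries (atom {}))  ; component-id -> {:query q, :entities #{eids}}

(defn register-query!
  "Run `q` for a component and cache the entity ids it returned."
  [conn component-id q]
  (let [eids (set (map first (d/q q @conn)))]
    (swap! queries assoc component-id {:query q :entities eids})
    eids))

(defn watch!
  "Call `mark-dirty!` with a component id when a transaction touches one
   of its cached entities, or when the query's result set changes."
  [conn mark-dirty!]
  (d/listen! conn
    (fn [{:keys [tx-data db-after]}]
      (let [touched (set (map :e tx-data))]
        (doseq [[id {:keys [query entities]}] @queries]
          (when (or (seq (cset/intersection entities touched))
                    (not= entities (set (map first (d/q query db-after)))))
            (mark-dirty! id)))))))
```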
This example uses Reagent (just because I wanted to play with Reagent). The bind function binds a DataScript query to a Reagent atom. When a tx-report is received, the query is run against the tx-data, and the atom is updated with the full query results (against the new version of the db) only if the query against tx-data is non-empty. Also, undo reverses the db actions (add/retract) and applies a new transaction rather than simply reverting to the previous db value.
Probably heavy-handed, but I wanted to see what it looked like to deal directly with datoms.
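For anyone following along, the core of the bind idea described above might look something like this (a sketch, not the author's actual code; it relies on the fact that a DataScript query can be run against a plain collection of datoms):

```clojure
(ns example.bind
  (:require [datascript.core :as d]
            [reagent.core :as r]))

(defn bind
  "Returns a Reagent atom holding the results of `q` against `conn`.
   On each transaction, re-run the full query only when the transacted
   datoms themselves match `q` (cheap filter before the expensive query)."
  [conn q]
  (let [state (r/atom (d/q q @conn))]
    (d/listen! conn
      (fn [{:keys [tx-data db-after]}]
        (when (seq (d/q q tx-data))
          (reset! state (d/q q db-after)))))
    state))
```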
Your app sounds very similar to what I am working on right now and was thinking of using Datascript for. I have ~10k items (construction materials) I need to present in a list with dynamic filtering (category, size/gauge, etc.). Running datalog queries on the client instead of the server would be a big win.
I'm worried now about performance, since Nikita says he had much smaller data sets in mind. Have you progressed enough yet to determine if performance is a problem with these large data sets?
Nikita, we're rewriting https://www.cosponsor.gov/ in a full Clojure stack. Hoping to be able to open source the whole project once it's done. We've been able to get the folks in charge (in Congress) excited about immutability and Clojure as a new paradigm, and they seem on board so far.
I'm dynamically sorting and filtering 5000+ bills. Currently supporting search on keyup (Om is crazy fast!), category and status filtering, as well as dynamic sorting.
Mike, I'm hoping to play with integration today or tomorrow, so I'll get back to you. I can't imagine performance will be worse than using my current (-> filter-by-topic filter-by-search filter-by-status sort) pipeline. If anything I'd expect it to be much better, and considerably more fluent.
I'm thinking it's possible to do better than simply querying tx-data if we hook into DataScript internals. Doing it through tx-report means every subscriber has to run the query. But I think we should be able to do something like "listen-query", which analyzes the query to see which parts of the index are relevant. After each transaction, a single analysis could be done of tx-data to see which parts of the indexes got modified, and notify accordingly.
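A crude approximation of that analysis, staying above the index level: pull the set of attributes a query mentions, and notify a subscriber only when a transaction touches one of them. (`listen-query!` and `query-attrs` are made-up names; real index-level analysis would be finer-grained.)

```clojure
(ns example.listen-query
  (:require [datascript.core :as d]))

(defn query-attrs
  "Set of attributes mentioned in the [e a v] patterns of a query vector."
  [q]
  (let [where (rest (drop-while #(not= :where %) q))]
    (set (keep (fn [clause]
                 (when (and (vector? clause) (keyword? (second clause)))
                   (second clause)))
               where))))

(defn listen-query!
  "Run `callback` with fresh results only when a transaction touches an
   attribute the query mentions."
  [conn q callback]
  (let [attrs (query-attrs q)]
    (d/listen! conn
      (fn [{:keys [tx-data db-after]}]
        (when (some #(contains? attrs (:a %)) tx-data)
          (callback (d/q q db-after)))))))
```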
Thoughts?
I'll be happy to work with you to improve performance as needed. Anything from providing metrics to giving you access to the code or my data sets to test against.
No facilities to persist, transfer over the wire, or sync the DB with the server
This would retain the entire history (like Datomic), which may not be what you want. But you could easily build on this to suit many use cases. You could create separate files for different entities, and/or choose which data to persist the entire history vs. just the current state. Append-only is easy, but once you start getting fancy there are a ton of edge cases to worry about. In that case, you could look at using a logging framework to manage the gory details.
You could even embed a database to back everything up; there are about a billion choices in npm. Even if you choose to use an embedded db on the Node side, tx-listen is probably the easiest place to hook in. Another route would be to add some metadata to your entities in DataScript to make it easier to query what you want to persist.
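The append-only approach on Node might be as small as this (a sketch: the `:persist` key, `log-path`, and the per-line EDN format are all my own assumptions):

```clojure
(ns example.persist
  (:require [datascript.core :as d]
            [cljs.reader :as reader]
            [clojure.string :as str]))

(def fs (js/require "fs"))

(defn log-tx!
  "Append each transaction's datoms to `path` as EDN, one tx per line.
   This retains the entire history, as noted above."
  [conn path]
  (d/listen! conn :persist
    (fn [{:keys [tx-data]}]
      (.appendFileSync fs path
        (str (pr-str (mapv (juxt :e :a :v :added) tx-data)) "\n")))))

(defn replay!
  "Rebuild state by replaying the log into a fresh connection."
  [conn path]
  (doseq [line (str/split-lines (.readFileSync fs path "utf8"))]
    (d/transact! conn
      (for [[e a v added] (reader/read-string line)]
        [(if added :db/add :db/retract) e a v]))))
```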
Browsers don't need access to the entire database, just the relevant data for a particular client. What you can do is to connect to the backend using websockets. When you first connect, you ask for all relevant data for this particular user. After that, the backend can push updates for this particular user, to keep the client up to date.
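One possible shape for that client side, assuming the server speaks EDN over the socket (the endpoint, the `:subscribe` op, and the message format are invented for illustration):

```clojure
(ns example.sync
  (:require [datascript.core :as d]
            [cljs.reader :as reader]))

(defn connect! [conn url]
  (let [ws (js/WebSocket. url)]
    ;; On connect, ask the server for this user's relevant slice of data.
    (set! (.-onopen ws)
          (fn [_] (.send ws (pr-str {:op :subscribe}))))
    ;; Afterwards the server pushes tx-data, which we transact locally.
    (set! (.-onmessage ws)
          (fn [e] (d/transact! conn (reader/read-string (.-data e)))))
    ws))
```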
If you can't use Om Next, you could do something similar in your own code. David Nolen has put a lot of thought into this and seems to have hit a sweet spot in this area. Replicating all the features/aspects he has designed in might be a challenge.
Alan
In theory, yes, queries should work on any dataset provided it implements some basic protocol. In practice, I have to build an example of that to see what pieces are missing at the moment.