Hi James:
I would definitely welcome other responses in this thread, but I wanted to point out that we have a partial write-up of possible database backends for Brick here: https://docs.brickschema.org/software/database.html . It is definitely incomplete, but it can be a helpful starting point. GraphDB is another free option that appears to have some great features. For development purposes, I've found it helpful to put the Brick model in a file and load it in-memory with RDFlib. RDFlib also supports other disk-backed storage options (https://rdflib.readthedocs.io/en/stable/persistence.html).
Best,
Gabe
--
You received this message because you are subscribed to the Google Groups "Brick User Forum (Unified Building Metadata Schema)" group.
To unsubscribe from this group and stop receiving emails from it, send an email to brickschema...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/brickschema/CAKz7d%2By415EiC8c7Ns2Y5Nq37kMvasqNUTsMECrHPJqbkKx5FA%40mail.gmail.com.
Hi Peter:
I made a video recently that shows how to use Postgres for storing timeseries data in Brick: https://www.youtube.com/watch?v=kZYNXoiM8gk . There is a GitHub repository linked below the video that contains some sample code.
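The data model in the video boils down to a simple timeseries table keyed by each point's identifier from the Brick model. The sketch below uses Python's built-in sqlite3 so it is self-contained; with Postgres you would use a driver such as psycopg2 and a TIMESTAMPTZ column instead. The table/column names and "sensor-1" identifier are illustrative assumptions, not taken from the video.

```python
import sqlite3
from datetime import datetime, timezone

# sqlite3 stands in for Postgres here; the schema idea is the same:
# one row per (timestamp, point id, value) reading.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE data (
        time  TEXT NOT NULL,  -- reading timestamp (TIMESTAMPTZ in Postgres)
        uuid  TEXT NOT NULL,  -- identifier of the point in the Brick model
        value REAL NOT NULL   -- the reading itself
    )
""")
conn.execute(
    "INSERT INTO data VALUES (?, ?, ?)",
    (datetime.now(timezone.utc).isoformat(), "sensor-1", 21.5),
)

# Fetch all readings for one point found via a SPARQL query on the model.
rows = conn.execute(
    "SELECT uuid, value FROM data WHERE uuid = ?", ("sensor-1",)
).fetchall()
print(rows)  # [('sensor-1', 21.5)]
```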
You do not need to use pymortar to use Brick. The video above shows a relatively simple data management model. pymortar >= 2.0.0 is for the new Mortar API (actively developed at https://github.com/gtfierro/mortar) but it is not quite ready for "prime time". The older Mortar API is being deprecated.
Best,
Gabe
Hi Peter:
The short answer is "yes" but it is a little awkward, and something that we are working on.
In terms of storing RDF triples in a Postgres database, the RDFlib package, which the py-brickschema package is built on, supports storage in a SQL database: https://github.com/RDFLib/rdflib-sqlalchemy.
The new mortar backend, which I've been slowly working on, continually exports triples to a reasoner, and then exposes those triples through a SPARQL endpoint. Not very polished yet, but the basic functionality is there: https://github.com/gtfierro/mortar . The specific mechanism of how this works is described in Sections 7.2 and 7.3 of my PhD thesis: https://home.gtf.fyi/papers/fierro-dissertation.pdf
Another approach might be to develop a UDF which performs inference ("compilation") on the graph and exposes the results as a (likely materialized) view, which can be queried through another UDF. I'm not sure it is worth doing all the processing in Postgres; a better approach is probably something closer to what I'm doing in Mortar, where the actual SPARQL processing is performed in a system designed for that purpose.
Best,
Gabe