Database for Building Metadata?


James Kempf

Jun 18, 2021, 1:39:02 PM
to brick...@googlegroups.com
Hi all,

I just joined the group, and was interested to find out what people are using for their building metadata database? For time series data, if I understand Gabe's video from June 9 correctly, Postgres will work. Do people use Apache Jena for building metadata or something else?

Thanx!

              jak

Gabe Fierro

Jun 18, 2021, 1:44:23 PM
to brick...@googlegroups.com

Hi James:

I would definitely welcome other responses in this thread, but I wanted to point out that we have a partial write-up of possible database backends for Brick here: https://docs.brickschema.org/software/database.html . It is definitely incomplete, but can be a helpful starting point. GraphDB is another free option that looks to have some great features. For development purposes, I've found that putting the Brick model in a file and loading it in-memory with RDFlib is helpful. RDFlib also has support for other disk-backed storage (https://rdflib.readthedocs.io/en/stable/persistence.html).

Best,

Gabe


peter yang

Jun 21, 2021, 2:40:45 AM
to Brick User Forum (Unified Building Metadata Schema)
Hi All,

I am playing with Brick. I have two questions and hope someone can help or point me in the right direction:

  1. Is there any write-up about how to use PostgreSQL with Brick? For example, which PostgreSQL plugins/drivers are required? Please list the required software if PostgreSQL is used.
  2. When I tried pymortar 2.0.2, I got the error AttributeError: 'dict' object has no attribute 'strip' when running the code below:

```python
client = pymortar.Client({
    'mortar_address': 'api.mortardata.org',  # do not change
    'username': "YOUR USERNAME HERE",        # <------------- CHANGE THIS
    'password': "YOUR PASSWORD HERE",        # <------------- CHANGE THIS
})
```

But pymortar 1.0.8 works very well.

Thanks
Peter

Gabe Fierro

Jun 21, 2021, 1:02:39 PM
to brick...@googlegroups.com

Hi Peter:

I made a video recently that shows how to use Postgres for storing timeseries data in Brick: https://www.youtube.com/watch?v=kZYNXoiM8gk . There is a GitHub repository linked below the video that contains some sample code.
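
For readers who cannot watch the video right away, here is a rough sketch of the kind of table layout it describes. The table and column names are my own illustrative assumptions, not the exact schema from the video's repository, and I use sqlite3 from the standard library so the example is self-contained; with Postgres the SQL is essentially the same, driven through a library like psycopg2.

```python
import sqlite3

# Minimal sketch of a timeseries table for Brick data. Column/table names
# are illustrative assumptions; in Postgres, ts would be TIMESTAMPTZ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE data (
        ts    TEXT,  -- timestamp of the reading
        id    TEXT,  -- stream identifier, matching an entity in the Brick model
        value REAL   -- the reading itself
    )
""")
conn.executemany(
    "INSERT INTO data VALUES (?, ?, ?)",
    [
        ("2021-06-18T00:00:00Z", "bldg:ts1", 72.1),
        ("2021-06-18T00:05:00Z", "bldg:ts1", 72.4),
    ],
)

# The Brick model (queried separately via SPARQL) tells you which stream
# ids correspond to, say, zone temperature sensors; you then pull their data:
rows = conn.execute(
    "SELECT ts, value FROM data WHERE id = ? ORDER BY ts", ("bldg:ts1",)
).fetchall()
print(rows)
```

The key idea is that the Brick graph and the timeseries store are separate: the graph answers "which streams do I care about?" and the SQL database answers "what did those streams record?".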

You do not need to use pymortar to use Brick. The video above shows a relatively simple data management model. pymortar >= 2.0.0 is for the new Mortar API (actively developed at https://github.com/gtfierro/mortar) but it is not quite ready for "prime time". The older Mortar API is being deprecated.

Best,

Gabe

peter yang

Jun 22, 2021, 12:34:46 AM
to Brick User Forum (Unified Building Metadata Schema)
Hi Gabe,

Thanks for the detailed video. In the video, you compile bldg2.ttl to bldg2-compiled.ttl and then query the compiled model.

My question is whether it is possible to store a building model's "Subject Predicate Object" triples (each triple as a row, with subject/predicate/object as columns) in a PostgreSQL database table, and then query that table with some PostgreSQL library?

I want to avoid the compilation step and hope the model/tooling can be more dynamic.

Thanks very much!

Peter

Gabe Fierro

Jun 24, 2021, 3:45:31 PM
to brick...@googlegroups.com

Hi Peter:

The short answer is "yes" but it is a little awkward, and something that we are working on.

In terms of storing RDF triples in a Postgres database, the RDFlib package, which the py-brickschema package is built on, supports storage in a SQL database: https://github.com/RDFLib/rdflib-sqlalchemy.
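
As a toy illustration of the "triples as rows" idea (rdflib-sqlalchemy manages a similar, much more elaborate, layout for you), here is what the storage and a simple query might look like; the table name and data are made up, and sqlite3 stands in for Postgres:

```python
import sqlite3

# One row per (subject, predicate, object) triple. This is only to show
# the shape of the idea; a real store adds indexes, datatype columns, etc.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
conn.executemany(
    "INSERT INTO triples VALUES (?, ?, ?)",
    [
        ("bldg:ahu1", "rdf:type", "brick:AHU"),
        ("bldg:ahu1", "brick:feeds", "bldg:vav1"),
        ("bldg:vav1", "rdf:type", "brick:VAV"),
    ],
)

# A SPARQL pattern like { ?ahu brick:feeds ?vav . ?vav a brick:VAV }
# becomes a self-join on the triples table:
rows = conn.execute(
    """SELECT t1.s, t2.s FROM triples t1
       JOIN triples t2 ON t1.o = t2.s
       WHERE t1.p = 'brick:feeds'
         AND t2.p = 'rdf:type' AND t2.o = 'brick:VAV'"""
).fetchall()
print(rows)
```

This also shows why the approach gets awkward: every additional pattern in a SPARQL query adds another self-join, which is exactly the kind of work a dedicated SPARQL engine is built to handle.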

The new mortar backend, which I've been slowly working on, continually exports triples to a reasoner, and then exposes those triples through a SPARQL endpoint. Not very polished yet, but the basic functionality is there: https://github.com/gtfierro/mortar . The specific mechanism of how this works is described in Sections 7.2 and 7.3 of my PhD thesis: https://home.gtf.fyi/papers/fierro-dissertation.pdf

Another approach might be to develop a UDF which performs inference ("compilation") on the graph and exposes the results as a (likely materialized) view, which can be queried through another UDF. I'm not sure it is worth it to do all the processing in Postgres; probably a better approach is something closer to what I'm doing in Mortar, where the actual SPARQL processing is performed in a system designed for that purpose.

Best,

Gabe
