My name is Fredah. I am studying at Oxford and plan to use your SPARQL engine for my project implementation. I am impressed by the tremendous work you have put into making this engine a success. However, I noticed that the underlying infrastructure and the compression technique used are encapsulated, and I need to fully understand how the data is processed from start to finish, especially with regard to compression. Have any papers been written that cover the compression and decompression used in your engine, or could you refer me to someone who may be able to explain them to me?
Also, is compression enabled by default, or is it turned on and off depending on the system's data load? I was also wondering how you store the data internally. That is, in what format is the data stored? Is it an internally created representation or one of the standard RDF representations?
I would really appreciate your assistance in answering these questions, and I look forward to hearing from you soon.