Hi Pieter,
androjena's CPU and memory consumption are mainly affected by two factors:
- whether you're loading the model into memory from a file or using a triple store such as TDB
- whether you need inference or not
I've created a small Android app that runs a quick benchmark showing the impact of these factors; you can find the whole Eclipse project here.
The app does the following:
- it loads an OWL model from a file; the model contains ~12k triples, so it's well above your needs
- it does some queries against the loaded model WITHOUT inference
- it creates an ontology model with RDFS inference support from the initial model
- it does some queries with inference
- then, it trashes the in-memory models and creates a new one from a TDB store that has previously been copied onto the device; the store contains exactly the same triples as the model file
- it does the same queries (with and without inference) on the TDB-backed model
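In jena terms, the in-memory part of the benchmark boils down to something like this (just a sketch, not the actual benchmark code; the file path is a placeholder, and I'm using a plain rdf:type query as the stand-in for "some queries"):

```java
import java.io.FileInputStream;
import java.io.InputStream;

import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.RDFNode;
import com.hp.hpl.jena.vocabulary.RDF;

public class InMemoryBenchmarkSketch {
    public static void main(String[] args) throws Exception {
        // 1. load the OWL model from a file into a plain in-memory model
        Model base = ModelFactory.createDefaultModel();
        InputStream in = new FileInputStream("model.owl"); // placeholder path
        base.read(in, null); // RDF/XML is the default serialization
        in.close();

        // 2. query WITHOUT inference: only asserted triples are visible
        int asserted = base.listStatements(null, RDF.type, (RDFNode) null)
                           .toList().size();

        // 3. wrap the base model in an ontology model with RDFS inference
        OntModel inf = ModelFactory.createOntologyModel(
                OntModelSpec.RDFS_MEM_RDFS_INF, base);

        // 4. the same query now also returns entailed rdf:type triples
        int entailed = inf.listStatements(null, RDF.type, (RDFNode) null)
                          .toList().size();

        System.out.println(asserted + " asserted, " + entailed + " with RDFS");
    }
}
```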
The benchmark results on a Samsung Galaxy S (768MHz CPU, max 70M memory per app) are the following:
IN-MEMORY MODEL
TASK                         TIME    MEMORY
start                        -       10M
load model from OWL file     20s     23M
query without inference      ~0s     23M
query with RDFS inference    2s      26M
TDB-BACKED MODEL
TASK                         TIME    MEMORY
start                        -       10M
load model from TDB store    0.8s    17M
query without inference      0.6s    17M
query with RDFS inference    9s      23M
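For completeness, the TDB-backed variant replaces the file load with something along these lines (again a sketch; the store directory is a placeholder, and the exact factory method may differ between TDB/TDBoid versions):

```java
import com.hp.hpl.jena.ontology.OntModel;
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.tdb.TDBFactory;

public class TdbBenchmarkSketch {
    public static void main(String[] args) {
        // open a model backed by a TDB store already copied onto the device;
        // nothing is parsed up front, triples are fetched from disk on demand,
        // which is why "loading" takes well under a second
        Model base = TDBFactory.createModel("/sdcard/tdbstore"); // placeholder

        // inference works exactly as with the in-memory model, but the
        // reasoner now pulls its triples from disk, hence the slower times
        OntModel inf = ModelFactory.createOntologyModel(
                OntModelSpec.RDFS_MEM_RDFS_INF, base);

        System.out.println(inf.size() + " triples visible with RDFS inference");
    }
}
```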
As you can see, using TDB cuts model loading times dramatically. The real bottleneck is inference: Jena's built-in rule-based reasoner is designed to be very extensible and customisable, but definitely not fast. You can work around this in two ways:
- if your model is static (i.e. your application doesn't need to add triples to the model at runtime), you can pre-classify it using a tool such as Protégé coupled with a reasoner such as Pellet. Once the model is classified, you can load it into a TDB store and use that store in your app without inference.
- if you absolutely need inference at runtime and the built-in reasoners (RDFS, OWL-MINI, OWL-MICRO etc.) are too slow, you can implement a custom ruleset for the generic rule-based reasoner and use that instead of the built-in ones. We did this in a project where we only needed a restricted subset of OWL inference rules: we used the generic rule reasoner in full-backward mode with a custom ruleset, and obtained pretty good results compared to the built-in reasoners.
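To give you an idea of the second option, here's roughly what it looks like (a sketch; the two rules below are a tiny RDFS-like subset I picked for the example, not the ruleset we actually used):

```java
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
import com.hp.hpl.jena.reasoner.rulesys.Rule;

public class CustomReasonerSketch {
    public static void main(String[] args) {
        // a minimal ruleset: subClassOf transitivity and type propagation,
        // written in backward syntax (head <- body); the rdf:/rdfs: prefixes
        // are predeclared by the rule parser
        String rules =
            "[transSubClass: (?a rdfs:subClassOf ?c) <- "
          + "    (?a rdfs:subClassOf ?b), (?b rdfs:subClassOf ?c)] "
          + "[typeBySubClass: (?x rdf:type ?b) <- "
          + "    (?x rdf:type ?a), (?a rdfs:subClassOf ?b)] ";

        GenericRuleReasoner reasoner =
            new GenericRuleReasoner(Rule.parseRules(rules));
        reasoner.setMode(GenericRuleReasoner.BACKWARD); // full-backward mode
        reasoner.tableAll(); // table goals so recursive rules terminate

        Model base = ModelFactory.createDefaultModel(); // your data goes here
        InfModel inf = ModelFactory.createInfModel(reasoner, base);
        // query 'inf' as usual; only the rules above will fire
    }
}
```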
Feel free to ask if you need more info about the reasoner stuff or if you have any problem using androjena in your project,
hope this helped!
bye
lorenzo