Performance on mobile devices


Pieter Hartog

Oct 13, 2012, 9:33:14 PM
to andr...@googlegroups.com
I'm considering using a triple store to store data for an android app and I'm relatively new to Android development.
Being able to use Jena and ARQ sounds really interesting.
What is your experience with performance on android devices? I'm thinking about opening/loading an app and doing search queries. 
The local data store on the mobile device will be relatively small (< 10,000 triples) for the app I'm developing.
With the Jena and ARQ libraries loaded, will it consume a lot of memory?

lorenzo carrara

Oct 22, 2012, 2:02:43 PM
to andr...@googlegroups.com
Hi Pieter,
Androjena's CPU and memory consumption are mainly affected by two factors:

- whether you're loading the model into memory from a file OR you're using a triple store such as TDB
- whether you need inference or not

I've created a small android app that performs a quick benchmark that shows the impact of these factors, you can find the whole Eclipse project here.

The app does the following:
- it loads an OWL model from a file; the model contains ~12k triples, so it's well above your needs
- it does some queries against the loaded model WITHOUT inference
- it creates an ontology model with RDFS inference support from the initial model
- it does some queries with inference
- then, it trashes the in-memory models and creates a new one from a TDB store that was previously copied onto the device; the store contains exactly the same triples as the model file
- it does the same queries (with and without inference) on the TDB-backed model
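For reference, the core of the steps above looks roughly like this in (andro)jena code. This is a minimal sketch, not the actual benchmark app: the file path, TDB directory, and the placeholder SPARQL query are made up, and it assumes the Jena 2.x-era API (`com.hp.hpl.jena.*` packages) that androjena is ported from:

```java
import com.hp.hpl.jena.ontology.OntModelSpec;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.tdb.TDBFactory;
import com.hp.hpl.jena.util.FileManager;

public class BenchmarkSketch {
    public static void main(String[] args) {
        // 1. load the OWL model from a file into an in-memory model
        Model base = FileManager.get().loadModel("data/model.owl");

        // 2. query the plain model, i.e. WITHOUT inference
        runQuery(base);

        // 3. wrap it in an ontology model with RDFS inference support
        Model rdfs = ModelFactory.createOntologyModel(
                OntModelSpec.OWL_MEM_RDFS_INF, base);

        // 4. run the same queries WITH inference
        runQuery(rdfs);

        // 5. drop the in-memory models and open the same data from a
        //    TDB store directory previously copied onto the device
        Model tdb = TDBFactory.createModel("/sdcard/tdb-store");
        runQuery(tdb);
        runQuery(ModelFactory.createOntologyModel(
                OntModelSpec.OWL_MEM_RDFS_INF, tdb));
    }

    static void runQuery(Model m) {
        // placeholder query; the real benchmark runs several of these
        String q = "SELECT ?s WHERE { ?s a <http://example.org/SomeClass> }";
        QueryExecution qe = QueryExecutionFactory.create(QueryFactory.create(q), m);
        try {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) rs.next(); // just iterate to force evaluation
        } finally {
            qe.close();
        }
    }
}
```

The point of the structure is that the same `runQuery` is timed against each backing model, so the only variable is where the triples live and whether an inference layer sits on top.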

The benchmark results on a Samsung Galaxy S (768 MHz CPU, max 70M memory per app) are the following:

IN-MEMORY MODEL

TASK                        REQUIRED TIME   MEMORY
start                       -               10M
load model from owl file    20s             23M
query without inference     ~0s             23M
query with rdfs inference   2s              26M

TDB-BACKED MODEL

TASK                        REQUIRED TIME   MEMORY
start                       -               10M
load model from TDB store   0.8s            17M
query without inference     0.6s            17M
query with rdfs inference   9s              23M

As you can see, using TDB cuts model loading times dramatically. The real bottleneck is inference; that's because Jena's built-in rule-based reasoner is designed to be very extensible and customisable, but definitely not fast. You can cope with this in two ways:
- if your model is static (i.e. your application doesn't need to add triples to the model at runtime), then you can pre-classify it using a tool such as Protégé coupled with a reasoner such as Pellet. Once the model is classified, you can load it into a TDB store and use that store in your app without inference.
- if you absolutely need inference at runtime and the built-in reasoners (RDFS, OWL-MINI, OWL-MICRO etc.) are too slow, you can implement a custom ruleset for the generic rule-based reasoner and use that instead of the built-in ones. We did this in a project where we only needed a restricted subset of OWL inference rules: we used the generic rule reasoner in full-backward mode with a custom ruleset, and obtained pretty good results w.r.t. the built-in reasoners.
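As an illustration of the second option, here is a minimal sketch of wiring a custom ruleset into the generic rule reasoner in backward mode. The two rules below (RDFS subclass transitivity and type propagation) are just an example of "a restricted subset" of inference, not the ruleset from the project mentioned above:

```java
import java.util.List;
import com.hp.hpl.jena.rdf.model.InfModel;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
import com.hp.hpl.jena.reasoner.rulesys.Rule;

public class CustomRulesetSketch {
    public static void main(String[] args) {
        // a tiny ruleset covering only the inference we actually need:
        // subclass transitivity and rdf:type propagation along subclasses
        String rules =
            "[subClassTrans: (?a rdfs:subClassOf ?b) (?b rdfs:subClassOf ?c) " +
            "    -> (?a rdfs:subClassOf ?c)]\n" +
            "[typeProp: (?x rdf:type ?a) (?a rdfs:subClassOf ?b) " +
            "    -> (?x rdf:type ?b)]";

        List<Rule> parsed = Rule.parseRules(rules);
        GenericRuleReasoner reasoner = new GenericRuleReasoner(parsed);
        // full-backward mode, as described above
        reasoner.setMode(GenericRuleReasoner.BACKWARD);

        // 'base' would be your file-loaded or TDB-backed model
        Model base = ModelFactory.createDefaultModel();
        InfModel inf = ModelFactory.createInfModel(reasoner, base);
        // query 'inf' with ARQ exactly as you would query a plain model
    }
}
```

Because the reasoner only evaluates the rules you give it, query-time cost scales with your ruleset rather than with the full RDFS/OWL rule machinery.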

Feel free to ask if you need more info about the reasoner stuff or if you have any problem using androjena in your project,
hope this helped!
bye
lorenzo

asma.mi...@gmail.com

Nov 13, 2014, 1:51:55 AM
to andr...@googlegroups.com
Hi Lorenzo,
Would you tell me which reasoner to use with the androjena API, and how to import the reasoner? Thanks.

