On Mon, 14 Nov 2022 at 20:29, Jim Balhoff <
bal...@gmail.com> wrote:
>
> On Nov 14, 2022, at 3:21 PM, Ignazio Palmisano <
ipalmisan...@gmail.com> wrote:
>
> On Mon, 14 Nov 2022 at 02:22, Jim Balhoff <
bal...@gmail.com> wrote:
>
>
> Hi,
>
> I’m trying to run HermiT on the Mondo disease ontology. I’m using ROBOT to run the reasoner. After a couple of hours the program fails with this message:
>
>
> Hi, which HermiT build are you using? (i.e., does it use OWLAPI 4 or
> 5? I'm not sure whether ROBOT is still limited to OWLAPI 4 or has
> updated to 5)
>
>
> Here’s the artifact used by ROBOT:
https://github.com/ontodev/robot/blob/f9a4efa7254d162df42341e551795af7a7c6ad9c/pom.xml#L237-L239
>
> net.sourceforge.owlapi:org.semanticweb.hermit:1.3.8.413
>
> ROBOT is still on OWLAPI 4; we just stay aligned with Protege.
Branch version4 of the repository linked above is what was used to
build 1.3.8.413 - it is currently at 1.4.5.456, so it's compatible
with OWLAPI 4.5.6, which is what ROBOT is also using. Building an
enhancement on top of that branch seems like the simplest avenue.
As for the fix needed, I'm a bit lost. The index that's overflowing is
an int, and a Java array can't hold more than Integer.MAX_VALUE
elements, so just substituting a long won't do. I'm unclear on whether
changing the page size would help - as far as I can tell, it'll still
try to keep track of the total number of nodes.
Refactoring the whole class to use longs, with a two-dimensional
array backing a collection that can then present a larger address
space, seems a possibility, but it's quite a bit of work and
I'm worried about increased memory use for smaller ontologies. Also,
verifying that this actually works for heaps larger than 200G and more
than two billion nodes requires more computational resources than I
can muster :-D It's an interesting problem, but I don't think I can
provide a solution in a reasonable timeframe.
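For illustration, a paged long-indexed collection along those lines could be sketched as below. This is a minimal sketch, not HermiT code: the class name, page size, and lazy per-page allocation are my assumptions. Lazy allocation is one way to address the memory worry for small ontologies, since only touched pages pay the cost.

```java
// Hypothetical sketch: a long-indexed int array backed by pages, so the
// logical index can exceed Integer.MAX_VALUE even though each backing
// Java array stays within the int-sized length limit.
public class PagedIntArray {
    private static final int PAGE_BITS = 20;          // 2^20 entries per page
    private static final int PAGE_SIZE = 1 << PAGE_BITS;
    private static final int PAGE_MASK = PAGE_SIZE - 1;

    private final int[][] pages;                      // pages allocated lazily
    private final long size;

    public PagedIntArray(long capacity) {
        int pageCount = (int) ((capacity + PAGE_SIZE - 1) >>> PAGE_BITS);
        pages = new int[pageCount][];
        size = capacity;
    }

    public int get(long index) {
        int[] page = pages[(int) (index >>> PAGE_BITS)];
        return page == null ? 0 : page[(int) (index & PAGE_MASK)];
    }

    public void set(long index, int value) {
        int pageIndex = (int) (index >>> PAGE_BITS);
        if (pages[pageIndex] == null)
            pages[pageIndex] = new int[PAGE_SIZE];    // only pay for touched pages
        pages[pageIndex][(int) (index & PAGE_MASK)] = value;
    }

    public long size() { return size; }
}
```

Whether something like this would fit HermiT's tableau internals is exactly the open question - the hard part isn't the container, it's threading long indices through every place that currently assumes an int.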
The only simple thing I'd try first: there have been a number of
changes from 1.3.8 to the current version4, including some that
should have improved speed and memory consumption, so it might be
worth trying a build of the branch as is, in case the behaviour is
significantly different with your ontology.
Cheers,
I.