Hi,
By default, Sparksee tries to use as much memory as possible and does
not touch the disk until it has to. That improves performance for
certain use cases, but it can also lead to undesired behaviours like
the ones you are seeing.
The solution to the first issue you are encountering is to enable the
recovery system. With recovery enabled, the data is periodically
written to disk and a recovery log is created to guarantee
persistence. The destructor then no longer needs to write much
information, because Sparksee already has it on disk.
You can use the config file option:
sparksee.io.recovery=TRUE
Or the SparkseeConfig method:
SetRecoveryEnabled( true )
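As a reference, a minimal sparksee.cfg sketch with recovery enabled could look like this (the log file name here is only an illustrative choice, not a required value):

```
# Enable the recovery system: data is checkpointed to disk periodically
sparksee.io.recovery=TRUE
# Optional: where to store the recovery log (example path)
sparksee.io.recovery.logfile=mydb.recovery.log
```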
About your second issue, the amount of cache memory that Sparksee
uses by default is probably too much for your case. Sparksee is
designed specifically to work with databases bigger than the
available memory, so there is no problem in setting a lower memory
limit; Sparksee will simply use the disk more.
You can use the config file option:
sparksee.io.cache.maxsize=SIZE_IN_MB
Or the SparkseeConfig method:
SetCacheMaxSize( SIZE_IN_MB )
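Putting both settings together, a sparksee.cfg for your scenario could look like the sketch below. The 256 MB limit is just an example value; tune it to what your machine can comfortably spare:

```
# Persist data periodically instead of all at once in the destructor
sparksee.io.recovery=TRUE
# Cap the cache so Sparksee does not try to grab all available memory
sparksee.io.cache.maxsize=256
```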
Best regards.
On Thursday, 7 January 2016 13:05:45 UTC+1, PL wrote: