Hi,
We can't give you a general number because insert performance
depends on too many variables. One of the most important things is
the number, type and indexing of the attributes on the nodes/edges
you are inserting. Other very important factors are the cache size
and the disks used. In addition, the insert rate may slow down as
the database grows.
We suggest that you try it with your own data first, following the
recommendations about cache size, recovery and rollback we provided
earlier in this thread. If you find the rate too slow for your
requirements, there is the "extent size" setting, which you could
try for a significant performance boost. Take into account, however,
that this particular setting has its drawbacks: once the database is
loaded with a non-default extent size, you will not be able to use
the recovery functionality normally.
If you want to try it, use the "setExtentSize" method of the
SparkseeConfig API to set it to 64.
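For example, a minimal sketch in Java of what that could look like
(the database path "my.gdb" is just a placeholder, and the exact
signatures should be checked against your Sparksee version):

```java
import com.sparsity.sparksee.gdb.Database;
import com.sparsity.sparksee.gdb.Sparksee;
import com.sparsity.sparksee.gdb.SparkseeConfig;

public class ExtentSizeExample {
    public static void main(String[] args) throws Exception {
        SparkseeConfig cfg = new SparkseeConfig();
        // Set the extent size before creating/opening the database.
        cfg.setExtentSize(64);
        Sparksee sparksee = new Sparksee(cfg);
        Database db = sparksee.open("my.gdb", false);
        // ... perform the bulk load here ...
        db.close();
        sparksee.close();
    }
}
```

Remember that, as noted below, the same extent size must then be used
every time this database is opened.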
Or, if you are using the scriptparser, you can add this line to the
"sparksee.cfg" file:
----------------------------------------------
sparksee.storage.extentsize=64
----------------------------------------------
The method description from here is wrong, but the argument
description is correct.
Changing the extent size as in the example above means the database
must always be opened with this same setting from then on.
Please consider it thoroughly, and only use it if you definitely
reach an unacceptable loading rate, which should be a very rare
case. Also, consider contacting us first for assistance with your
loading or configuration process.
Best regards
On Tuesday, 10 February 2015 at 8:28:22 UTC+1, oj wrote: