Improving triple load performance


Claudia S.

May 13, 2019, 6:53:41 AM
to Halyard Users
We are currently evaluating Halyard as a scalable datastore for our company. As part of that, we are running a series of benchmarking tests.

When it comes to loading triples (one by one, not bulk loading), we are seeing fairly low performance compared to other datastores we have looked at. We have tried both going through the SPARQL endpoint and going directly through the Sail API, with similar results. On average we were seeing:
  • 12,805 triples per second on HBase 1.1.2, HDFS 2.7.3
  • 15,000 triples per second on HBase 1.2.1, HDFS 2.7.7
We used the out-of-the-box Ansible Hortonworks playbooks on OpenStack for the first installation (1 name node and 3 data nodes), and the Docker image https://github.com/big-data-europe/docker-hbase.git for the second (a single-node installation). We have made sure that auto-commit and auto-flush are off; we commit periodically, every 500,000 triples.
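
For reference, the load loop is essentially the following RDF4J sketch (simplified; the repository construction and the statement source are placeholders for our actual setup):

    import org.eclipse.rdf4j.model.Statement;
    import org.eclipse.rdf4j.repository.Repository;
    import org.eclipse.rdf4j.repository.RepositoryConnection;

    public class BatchedLoader {
        // Single-client batched load: auto-commit is off (explicit begin/commit)
        // and we commit every 500,000 triples. 'repository' would be backed by
        // Halyard; 'statements' stands in for our triple source.
        static void load(Repository repository, Iterable<Statement> statements) {
            try (RepositoryConnection conn = repository.getConnection()) {
                conn.begin();
                long count = 0;
                for (Statement st : statements) {
                    conn.add(st);
                    if (++count % 500_000 == 0) {
                        conn.commit();   // periodic commit
                        conn.begin();
                    }
                }
                conn.commit();           // flush the final partial batch
            }
        }
    }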

Are there any settings (Halyard, HBase, HDFS, etc.) we could change to significantly improve triple load performance? We have tried increasing memory for the HBase processes with no significant improvement. I realise that we are not giving you very much detail on our particular setup, but we are just looking for a starting point.

Adam Sotona

May 13, 2019, 11:36:53 AM
to claudia....@exfo.com, Halyard Users
Hi Claudia,
Your benchmark numbers seem to be what I would expect from a single client pushing to a single endpoint. 
For single-client benchmarking, the bottleneck is the per-request latency and the fact that a single endpoint thread handles almost all of the load.
When multiple clients start hammering one endpoint, the bottleneck becomes the endpoint's CPU and that single node's communication limits with HBase.
However, if multiple clients hammer multiple endpoints (ideally through a load balancer), performance can grow up to the write limits of the whole HBase cluster. In that case it may make sense to tune HBase's maximum connection and memory configuration.
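As a rough sketch of the multi-client pattern (one writer thread per endpoint behind the balancer; the endpoint list, chunking, and batch size are illustrative placeholders, not Halyard specifics):

    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.eclipse.rdf4j.model.Statement;
    import org.eclipse.rdf4j.repository.RepositoryConnection;
    import org.eclipse.rdf4j.repository.sparql.SPARQLRepository;

    public class ParallelLoader {
        // One writer per endpoint, each with its own connection and its own
        // pre-chunked slice of the data ('endpoints' and 'chunks' are placeholders).
        static void parallelLoad(List<String> endpoints, List<List<Statement>> chunks) {
            ExecutorService pool = Executors.newFixedThreadPool(endpoints.size());
            for (int i = 0; i < endpoints.size(); i++) {
                List<Statement> chunk = chunks.get(i);
                SPARQLRepository repo = new SPARQLRepository(endpoints.get(i));
                repo.init();
                pool.submit(() -> {
                    try (RepositoryConnection conn = repo.getConnection()) {
                        conn.begin();
                        long n = 0;
                        for (Statement st : chunk) {
                            conn.add(st);
                            if (++n % 100_000 == 0) { conn.commit(); conn.begin(); }
                        }
                        conn.commit();
                    }
                });
            }
            pool.shutdown();
        }
    }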
Even then, HBase has write performance limits that depend on how the load is distributed across regions. In any case you should pre-split the target table into the expected number of regions, so that the region servers share the load.
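Halyard normally creates and pre-splits its own table, so in practice you would use its own tooling for this, but the underlying HBase mechanism looks roughly like the following (table name, column family, and region count are illustrative, not Halyard's actual layout):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.util.RegionSplitter;

    public class PresplitExample {
        public static void main(String[] args) throws Exception {
            Configuration config = HBaseConfiguration.create();
            try (Connection hbase = ConnectionFactory.createConnection(config);
                 Admin admin = hbase.getAdmin()) {
                // 16 regions via hex-string split keys; pick the count to match
                // the cluster size and expected data volume (illustrative value).
                byte[][] splits = new RegionSplitter.HexStringSplit().split(16);
                HTableDescriptor table = new HTableDescriptor(TableName.valueOf("triples"));
                table.addFamily(new HColumnDescriptor("e")); // family name is illustrative
                admin.createTable(table, splits);
            }
        }
    }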
If you expect to work with really big data, I would suggest pre-splitting into a large number of regions and, ideally, chunking the data and bulk-loading it.
In that scenario performance is limited only by hardware. My measurements in this case are around 40,000 triples per cluster node per second, or 100 billion triples on a 10-node cluster in 72 hours.

Thanks,
Adam

On Mon, May 13, 2019, at 12:53, Claudia S. <claudia....@exfo.com> wrote: