I'm using ArangoDB in standalone mode (single node) on a machine with 16 GB RAM and 8 cores. No other process runs on the machine. I have roughly 4,000 collections, each holding 15,000 documents. A typical document is either a vertex (containing lat-long data) or an edge that references the two vertices it connects by their document hashes.
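For context, the documents look roughly like the sketch below (the field names `Lat`, `Lng`, and the edge layout are assumptions, not my exact schema):

```go
// Sketch of the two document shapes, with hypothetical field names.
type Vertex struct {
	Key string  `json:"_key"`
	Lat float64 `json:"lat"` // latitude in degrees
	Lng float64 `json:"lng"` // longitude in degrees
}

type Edge struct {
	Key  string `json:"_key"`
	From string `json:"_from"` // document handle of the source vertex
	To   string `json:"_to"`   // document handle of the target vertex
}
```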
I'm load testing to see what read throughput ArangoDB can deliver. For this, I've taken 4 EC2 instances and I'm running read-only geospatial queries against the Arango node (e.g. "nearest vertex to a given lat-long"). The queries are issued from Go clients with proper connection pooling.
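Each client looks roughly like the sketch below, using the official `github.com/arangodb/go-driver`; the endpoint, credentials, collection name, and coordinate field names are placeholders, not my real values:

```go
package main

import (
	"context"
	"fmt"
	"log"

	driver "github.com/arangodb/go-driver"
	"github.com/arangodb/go-driver/http"
)

func main() {
	ctx := context.Background()

	// HTTP connection with a pool of keep-alive connections.
	conn, err := http.NewConnection(http.ConnectionConfig{
		Endpoints: []string{"http://arango-host:8529"}, // placeholder endpoint
		ConnLimit: 32,                                  // pooled connections per client
	})
	if err != nil {
		log.Fatal(err)
	}

	client, err := driver.NewClient(driver.ClientConfig{
		Connection:     conn,
		Authentication: driver.BasicAuthentication("root", "password"), // placeholder creds
	})
	if err != nil {
		log.Fatal(err)
	}

	db, err := client.Database(ctx, "_system")
	if err != nil {
		log.Fatal(err)
	}

	// Nearest-vertex query; "vertices", "lat", and "lng" are assumed names.
	// GEO_DISTANCE takes [longitude, latitude] pairs.
	query := `
		FOR v IN vertices
		  SORT GEO_DISTANCE([v.lng, v.lat], [@lng, @lat]) ASC
		  LIMIT 1
		  RETURN v`
	bindVars := map[string]interface{}{"lat": 52.52, "lng": 13.40}

	cursor, err := db.Query(ctx, query, bindVars)
	if err != nil {
		log.Fatal(err)
	}
	defer cursor.Close()

	var doc map[string]interface{}
	for {
		if _, err := cursor.ReadDocument(ctx, &doc); driver.IsNoMoreDocuments(err) {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Println(doc)
	}
}
```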
Problem: even with all 4 load-testing machines running, the request rate shown in the Arango _system monitoring UI caps at about 4,000 requests/sec. If I reduce the number of load-testing machines, the rate still stays at 3,000-4,000, which indicates that the clients are generating more traffic than the server can absorb; the cap is on the Arango side, not in the load generators.
The Arango instance's CPU usage stays pinned at 100% the whole time. Memory usage caps at 13-14 GB (out of 16 GB). The average request service time reported by Arango is 20 ms, which is acceptable given the unoptimised query. There are no network or disk bottlenecks (confirmed via AWS network/disk monitoring).
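As a back-of-the-envelope sanity check (my own arithmetic, not a measurement): by Little's law, 4,000 req/s × 0.020 s ≈ 80 requests in flight, i.e. roughly 10 concurrent requests per core, which is consistent with the CPU being the saturated resource.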
Questions: