Hi Stephen,
Thanks for the response. The queries that I fire are constructed at run time, and I process them in code using the gremlin-driver library (specifically, Client.submit() to send the query to the server and ResultSet.all() to process the response). Since I do not know the nature of a query before I send it to the server, my fault-tolerance options are limited.
The valueMap() function will not work for me, as my code calls Edge and Vertex methods such as id(), inVertex(), and outVertex() on the response. valueMap() returns a Map, which does not support any of these methods.
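To make the pattern concrete, here is a rough sketch of how my code consumes the results (the host, port, and the g.E() query string are just placeholders; the real queries are only known at run time, and this assumes the serializer returns full edge/vertex elements):

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;
import org.apache.tinkerpop.gremlin.driver.ResultSet;
import org.apache.tinkerpop.gremlin.structure.Edge;
import org.apache.tinkerpop.gremlin.structure.Vertex;

public class SubmitSketch {
    public static void main(String[] args) throws Exception {
        Cluster cluster = Cluster.build("localhost").port(8182).create();
        Client client = cluster.connect();
        try {
            // The query string is only known at run time; g.E() is a stand-in.
            ResultSet resultSet = client.submit("g.E()");

            // all() materializes the whole response before returning it.
            for (Result result : resultSet.all().get()) {
                Edge edge = result.getEdge();
                // The structural accessors my code relies on; a valueMap()
                // result would only give me a Map of property values.
                Object id = edge.id();
                Vertex out = edge.outVertex();
                Vertex in = edge.inVertex();
                System.out.println(id + ": " + out.id() + " -> " + in.id());
            }
        } finally {
            cluster.close();
        }
    }
}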
I ran a load test on the two parameters you mentioned, maxContentLength and resultIterationBatchSize. What I found is that no matter what resultIterationBatchSize is set to, I get the above error whenever maxContentLength is low. I am testing with g.V() and g.E() queries, as these return the largest responses. For roughly 1000 edges and 400 vertices:
- maxContentLength = 65536 with resultIterationBatchSize = 64/32/16 gives the max frame length exceeded error.
- maxContentLength = 655369 with resultIterationBatchSize = 64/32/16 works.
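For reference, these are the two settings I am varying on the server, shown here as a fragment of gremlin-server.yaml with the values from the failing run (everything else omitted):

# Fragment of gremlin-server.yaml for the test runs above;
# only the settings being varied are shown.
host: localhost
port: 8182
# Maximum size in bytes of a single request/response frame.
maxContentLength: 65536
# Number of result items the server serializes into each response batch.
resultIterationBatchSize: 64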
As you mentioned, increasing maxContentLength to 2 MB is not a big deal; my concern is how to be sure that it is enough. Specifically, how do I determine a cut-off for the number of edges or vertices a query can return so that I do not get this error?
Just to be sure that the configuration I am using is correct: I am specifying maxContentLength and resultIterationBatchSize in two places, the server-side YAML file and the client-side YAML file, with the same values. Is this correct?
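If the client-side YAML is not the right place for these, I could also set them programmatically when building the Cluster. Something like the sketch below is what I have in mind (2097152 = 2 MB, mirroring the server-side value; please correct me if these builder methods are not the right ones):

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;

public class ClientConfigSketch {
    public static void main(String[] args) {
        // Sketch: mirroring the server-side settings on the client.
        Cluster cluster = Cluster.build("localhost")
                .port(8182)
                .maxContentLength(2097152)      // 2 MB, same as the server
                .resultIterationBatchSize(64)
                .create();
        Client client = cluster.connect();
        // ... submit queries as usual ...
        cluster.close();
    }
}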
On the client side, I am using the ResultSet.all() method to process the response. Is there a way to apply batching explicitly in the client-side code to avoid this error?
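What I have in mind is something like iterating the ResultSet instead of materializing it with all(), as in the sketch below. I am not sure whether this actually helps with the frame-size limit, since each batch presumably still arrives as a single frame bounded by maxContentLength:

import org.apache.tinkerpop.gremlin.driver.Client;
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.Result;
import org.apache.tinkerpop.gremlin.driver.ResultSet;

public class StreamingSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.build("localhost").port(8182).create();
        Client client = cluster.connect();
        try {
            ResultSet resultSet = client.submit("g.E()");
            // Iterate results as batches arrive instead of waiting for the
            // entire result set; each step blocks until the next result is ready.
            for (Result result : resultSet) {
                System.out.println(result.getEdge().id());
            }
        } finally {
            cluster.close();
        }
    }
}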
Anya