Unfortunately, the problem seems to be on our side, in the JSON API in general
(not Python-specific) - to be more precise: in the yaws code.
If I see it correctly, the server (or client) splits the request up into
several partial data chunks, and the handling of this seems to be wrong. We
don't have such a test case yet - I'll add one and have a look at how to
handle this correctly.
If it's not too much work, I can hopefully fix this tomorrow.
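A reproduction could look roughly like the following (a Python 2 sketch,
matching the httplib-based Scalaris.py; the /jsonrpc.yaws endpoint on port
8000 and the "write" method name are assumptions based on a default setup):

    import httplib
    import json

    def write_big_value(size):
        # A value large enough that yaws should receive the POST body in
        # several partial data chunks rather than in one piece.
        body = json.dumps({"jsonrpc": "2.0", "method": "write",
                           "params": ["big_key", "x" * size], "id": 0})
        conn = httplib.HTTPConnection("localhost", 8000)
        conn.request("POST", "/jsonrpc.yaws", body,
                     {"Content-Type": "application/json"})
        response = conn.getresponse()
        print response.status, response.reason
        conn.close()

    write_big_value(1024 * 1024)  # 1 MiB value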
If you only need to write the data to a single key, i.e. you don't need
consistency over multiple keys for each data item, prefer using the
TransactionSingleOp class. It does not transfer a tlog back to the client
and should thus be faster than writing multiple keys in a single request
list (reqlist). If it is not faster, maybe we should introduce something
like a reqlist for single operations, too...
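A minimal sketch of the difference, assuming the classes in Scalaris.py
mirror the Java API, i.e. TransactionSingleOp with read()/write() and
Transaction with write()/commit() (the exact method names may differ):

    import Scalaris

    # Single-key operation: one round trip, no tlog is sent back.
    sc = Scalaris.TransactionSingleOp()
    sc.write("mykey", "myvalue")
    print sc.read("mykey")

    # Multi-key transaction: full transaction machinery, tlog included.
    t = Scalaris.Transaction()
    t.write("key1", "value1")
    t.write("key2", "value2")
    t.commit()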
Nico
Any request larger than this configured size will return with an HTTP 413
error (Request Entity Too Large).
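From the client side, hitting the limit would look roughly like this
(Python 2 sketch; the 50 MiB body size and the /jsonrpc.yaws endpoint are
only placeholders):

    import httplib

    conn = httplib.HTTPConnection("localhost", 8000)
    # Deliberately oversized body - yaws rejects it by size before parsing.
    conn.request("POST", "/jsonrpc.yaws", "x" * (50 * 1024 * 1024),
                 {"Content-Type": "application/json"})
    response = conn.getresponse()
    if response.status == 413:
        print "request body exceeds the configured yaws size limit"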
The web debug interface does not use the JSON API and was not affected.
The Java API was not affected either, since it communicates with Erlang
directly (without yaws).
Nico
Hmm - I can't really see anything suspicious in your logs. Is this error
reproducible? What do you mean by "big data load"?
Also: could you produce a minimal test case? Alternatively, add debug
output to api_json.erl to see what happens behind the scenes, i.e. on the
server.
If you want more output from the client code (maybe this helps in
understanding the error), add self._conn.set_debuglevel(1) prior to the
call to self._conn.request in Scalaris.py.
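For illustration, this is what the same switch looks like outside of
Scalaris.py (Python 2; the request body here is only a placeholder):

    import httplib

    conn = httplib.HTTPConnection("localhost", 8000)
    conn.set_debuglevel(1)  # httplib echoes the raw request/response to stdout
    conn.request("POST", "/jsonrpc.yaws",
                 '{"jsonrpc": "2.0", "method": "nop", "params": ["ok"], "id": 0}',
                 {"Content-Type": "application/json"})
    print conn.getresponse().read()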
Nico
Timeouts in scalaris.cfg should not affect this.
Nico