Hi,
it depends a bit on how big the documents are.
For smaller documents it will make sense to insert/import data with multiple parallel client threads.
If the documents are "big" and writing them to the storage engine becomes the bottleneck, then parallelizing the insert/import will not help so much.
You may try out how much parallelization will help you by importing data in parallel using the bundled arangoimport binary.
arangoimport provides an option `--threads`, which defaults to 2. Try values from 1 up to whatever upper bound seems sensible for your hardware and compare the runtimes of the import process.
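A small sketch of such a sweep might look like this. The endpoint, database, collection name and file name are placeholders for your setup, and the loop assumes a running ArangoDB server, so treat it as an illustration rather than a ready-made script:

```shell
# Re-import the same file with different --threads values and compare timings.
# --overwrite true clears the collection before each run so results are comparable.
for t in 1 2 4 8 16; do
  echo "threads=$t"
  time arangoimport \
    --server.endpoint tcp://127.0.0.1:8529 \
    --server.database _system \
    --collection mycollection \
    --file data.jsonl \
    --type jsonl \
    --overwrite true \
    --threads "$t"
done
```

If the runtime stops improving beyond a certain thread count, you have likely hit the storage-engine bottleneck mentioned above.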
Apart from this, it will very likely make sense to insert documents in parallel if the single-document APIs are used. This is because the actual insertion time is only a small fraction of each request; a great deal of time is spent on processing requests, assembling responses and waiting for the network. Here parallelization should help a lot.
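As a rough illustration of that pattern, you could fire single-document inserts concurrently with `xargs -P` (the endpoint and collection name here are assumptions, and this again presumes a running server):

```shell
# Send 1000 single-document inserts, 8 requests in flight at a time.
# Each request creates one document in the (assumed) collection "mycollection".
seq 1 1000 | xargs -P 8 -I{} curl -s -X POST \
  --data "{\"value\": {}}" \
  "http://127.0.0.1:8529/_api/document/mycollection" > /dev/null
```

With `-P 1` versus `-P 8` you should see the per-request overhead being amortized across the parallel connections.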
It may be different if you are already sending multiple documents to the server in a single batch, e.g. using the import API at POST /_api/import, or by sending an array of documents to POST /_api/document. Here the server may already be quite busy, but parallelization may still help to some extent.
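For reference, both batched variants can be exercised with curl along these lines (endpoint and collection name are again placeholders, and a running server is assumed):

```shell
# Batch import: one JSON document per line in data.jsonl.
curl -s -X POST --data-binary @data.jsonl \
  "http://127.0.0.1:8529/_api/import?collection=mycollection&type=documents"

# Alternatively, send a JSON array of documents to the document API.
curl -s -X POST \
  --data '[{"value": 1}, {"value": 2}, {"value": 3}]' \
  "http://127.0.0.1:8529/_api/document/mycollection"
```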
I suggest trying with arangoimport first to assess the potential benefits (if any).
If you are using arangoimport, please use the import format that has a single JSON document per line (jsonl).
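For clarity, a jsonl input file simply contains one self-contained JSON object per line, for example (field names are made up here):

```shell
# Create a minimal two-document jsonl file.
cat > data.jsonl <<'EOF'
{"_key": "1", "name": "alice"}
{"_key": "2", "name": "bob"}
EOF

# Each line is a complete document; no surrounding array or commas.
cat data.jsonl
```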
Best regards
Jan