Hi,

I'm looking for clarification on a few points. I'm using the Doubango ARM64 C++ implementation.
1. When and how is the parallel processing callback run: on a separate thread, or synchronously when UltAlprSdkEngine::process is called?
2. What happens in parallel mode if UltAlprSdkEngine::process is called faster than the results can be processed?
3. How do I hide all logs? Setting debug_level to fatal doesn't do enough. On a Jetson, the output below is logged constantly; our current workaround of piping stderr to /dev/null is a poor fix.
```log
**[PLUGIN_TENSORRT WARN]: function: "log()"
file: "/home/nx/Projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx"
line: "36"
message: [TensorRT Inference] From logger: The enqueue() method has been deprecated when used with engines built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. Please use enqueueV2() instead.
```
4. Is there a way to wait for all pending parallel processing callbacks to run before continuing?
5. Do you have documentation for each field in the JSON result (what it means, and whether it is optional)?
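In case it helps frame question 4, here is a minimal, SDK-agnostic sketch of the behavior we're after: a counter plus condition variable that blocks until every pending callback has fired. `runSimulation`, `beginOne`, and `endOne` are hypothetical stand-ins, not Doubango APIs; in real code the begin/end calls would bracket `UltAlprSdkEngine::process` and the body of the parallel delivery callback.

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// Hedged sketch only: tracks how many callbacks are still pending and
// lets the caller block until that count reaches zero.
class PendingTracker {
public:
    // Call before each (hypothetical) process() submission.
    void beginOne() { pending_.fetch_add(1); }

    // Call as the last statement of each (hypothetical) callback.
    void endOne() {
        if (pending_.fetch_sub(1) == 1) {   // this was the last pending callback
            std::lock_guard<std::mutex> lk(m_);
            cv_.notify_all();
        }
    }

    // Block until every callback that was begun has ended (question 4).
    void waitAll() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return pending_.load() == 0; });
    }

private:
    std::atomic<int> pending_{0};
    std::mutex m_;
    std::condition_variable cv_;
};

// Simulates nFrames asynchronous callbacks and returns how many ran
// before waitAll() let us continue.
int runSimulation(int nFrames) {
    PendingTracker tracker;
    std::atomic<int> processed{0};
    std::vector<std::thread> workers;
    for (int i = 0; i < nFrames; ++i) {
        tracker.beginOne();                 // before the stand-in for process()
        workers.emplace_back([&] {
            processed.fetch_add(1);         // stand-in for the delivery callback's work
            tracker.endOne();               // callback finished
        });
    }
    tracker.waitAll();                      // continue only once all callbacks ran
    for (auto& t : workers) t.join();
    return processed.load();
}
```

If the SDK already exposes something equivalent (a flush/join on the parallel pipeline), we'd much prefer to use that instead of wrapping it ourselves.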
Thank you for your product!