Parallel processing in C++ ARM64


oneiro

Oct 10, 2023, 10:19:50 PM
to doubango-ai
Hi, I am looking for some clarification on these points. I am using Doubango ARM64 C++ implementation.

1. When and how is the parallel processing callback run: is it invoked on a separate thread, or synchronously when UltAlprSdkEngine::process is called?

2. What happens in parallel mode if UltAlprSdkEngine::process is called faster than the results can be processed?

3. How do I hide all logs? Setting debug_level to fatal doesn't do enough. On a Jetson the output below is logged constantly. Our current workaround of piping stderr to /dev/null is a poor fix:
    ```log
    **[PLUGIN_TENSORRT WARN]: function: "log()" 
    file: "/home/nx/Projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx" 
    line: "36" 
    message: [TensorRT Inference] From logger: The enqueue() method has been deprecated when used with engines built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. Please use enqueueV2() instead.
    ```

4. Is there a way to wait for all pending parallel processing callbacks to run before continuing?

5. Do you have documentation on each field in the JSON result (what they mean, and whether they are optional)?

Thank you for your product!


Mamadou DIOP

Oct 11, 2023, 6:11:41 PM
to oneiro, doubango-ai

Hi,

On 10/11/2023 4:06 AM, oneiro wrote:
Hi, I am looking for some clarification on these points. I am using Doubango ARM64 C++ implementation.

1. When and how is the parallel processing callback run: is it invoked on a separate thread, or synchronously when UltAlprSdkEngine::process is called?
The result is delivered using a different thread.

2. What happens in parallel mode if UltAlprSdkEngine::process is called faster than the results can be processed?
There is a queue, and it will fill up to a certain point. This "certain point" is controlled using the configuration entry documented at https://www.doubango.org/SDKs/anpr/docs/Configuration_options.html#max-latency


3. How do I hide all logs? Setting debug_level to fatal doesn't do enough. On a Jetson the output below is logged constantly. Our current workaround of piping stderr to /dev/null is a poor fix:
    ```log
    **[PLUGIN_TENSORRT WARN]: function: "log()" 
    file: "/home/nx/Projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx" 
    line: "36" 
    message: [TensorRT Inference] From logger: The enqueue() method has been deprecated when used with engines built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. Please use enqueueV2() instead.
    ```

I have already explained that this is because you're using a version (5.1.1) we don't officially support. The "debug_level" configuration entry has no effect on the plugins (OpenVINO, TensorRT, AmlogicNPU...).

4. Is there a way to wait for all pending parallel processing callbacks to run before continuing?
Stop calling process(). The JSON result has an entry ("latency") that tells you how many frames are pending (inside the queue).


5. Do you have documentation on each field in the JSON result (what they mean, and whether they are optional)?

Thank you for your product!
