ultimateALPR: PLUGIN_TENSORRT WARN function: "log()"


Danielius Kocan

unread,
Oct 3, 2023, 8:18:06 AM10/3/23
to doubango-ai

Hello,

I recently activated a license on a Jetson NX device and tried running ultimateALPR with its default settings. Everything appeared to work correctly. However, when I set any of the configuration options (klass_lpci_enabled, klass_vcr_enabled, klass_vmmr_enabled, klass_vbsr_enabled) to True, multiple warnings began to appear. These warnings are triggered for each image that ultimateALPR processes.

Here are the repeated warnings:
[PLUGIN_TENSORRT WARN]: function: "log()"
file: "/home/nx/Projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx"
line: "36"
message: [TensorRT Inference] From logger: The enqueue() method has been deprecated when used with engines built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. Please use enqueueV2() instead.

[PLUGIN_TENSORRT WARN]: function: "log()"
file: "/home/nx/Projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx"
line: "36"
message: [TensorRT Inference] From logger: Also, the batchSize argument passed into this function has no effect on changing the input shapes. Please use setBindingDimensions() function to change input shapes instead.
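
For context, these deprecation messages come from TensorRT itself: for engines built with the kEXPLICIT_BATCH flag, the legacy enqueue()/execute_async() entry points are deprecated in favor of enqueueV2()/execute_async_v2(). As a hedged sketch (not the SDK's actual plugin code, which is C++ and not shipped), the equivalent migration in the TensorRT Python API would look like this; run_inference is a hypothetical helper name:

```python
# Hedged sketch of the API migration the warning refers to.
# `context` stands for a TensorRT IExecutionContext-like object.
def run_inference(context, bindings, stream_handle):
    """Prefer the V2 call when available; fall back to the legacy one."""
    if hasattr(context, "execute_async_v2"):
        # Explicit-batch engines: input shapes come from the bindings,
        # so no batch-size argument is needed (or honored).
        return context.execute_async_v2(bindings=bindings,
                                        stream_handle=stream_handle)
    # Legacy implicit-batch path (deprecated in recent TensorRT releases).
    return context.execute_async(batch_size=1, bindings=bindings,
                                 stream_handle=stream_handle)
```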

Based on the warnings, it appears there's an inconsistency between TensorRT and the mentioned plugin_tensorrt_inference_engine.cxx file. Interestingly, I can't find this file (or even its path) on my device.

For reference:

  • TensorRT (as seen in jtop): 5.1.1
  • TensorRT Python version: 8.5.2.2

Any assistance or input into resolving these warnings would be greatly appreciated.

Thank you!

Mamadou DIOP

unread,
Oct 3, 2023, 8:33:00 AM10/3/23
to Danielius Kocan, doubango-ai

Hi,

These warnings have no effect on accuracy or speed. I remember that in JetPack 4.x, NVIDIA made an explicit batch size required when the models are optimized, but there was no enqueueV2 function at the time. I guess they have deprecated something else again.

Run apt-cache show nvidia-jetpack | grep "Version:" on your device and share the output.
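
As later messages in the thread show, that command prints a line like "Version: 5.1.1-b56". A small hypothetical helper to pull the JetPack version out of such a line (names are my own, not part of the SDK) could look like:

```python
import re

def parse_jetpack_version(apt_output):
    """Extract (version, build) from an apt 'Version: X.Y.Z-bNN' line."""
    match = re.search(r"Version:\s*([\d.]+)-(\S+)", apt_output)
    if not match:
        raise ValueError("no Version: line found")
    return match.group(1), match.group(2)
```

For example, "Version: 5.1.1-b56" would yield ("5.1.1", "b56"), making it easy to compare against the supported releases (4.4.1 and 5.1.0).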

--
You received this message because you are subscribed to the Google Groups "doubango-ai" group.
To unsubscribe from this group and stop receiving emails from it, send an email to doubango-ai...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/doubango-ai/85909431-d8c3-428e-9c09-d158cea8ab33n%40googlegroups.com.

Mamadou DIOP

unread,
Oct 3, 2023, 8:36:25 AM10/3/23
to Danielius Kocan, doubango-ai

Additional note: Quote from https://github.com/DoubangoTelecom/ultimateALPR-SDK/blob/master/Jetson.md#requirements : "We require JetPack 4.4.1 or JetPack 5.1.0."

You said you're using 5.1.1, which isn't officially supported. Do you have the same warnings with 5.1.0?

Danielius Kocan

unread,
Oct 3, 2023, 8:53:54 AM10/3/23
to doubango-ai
(I think I sent my answer just to you, so I am sending it again here.)
I got "Version: 5.1.1-b56" as the output of the grep call, so it seems I am indeed using JetPack 5.1.1. Thank you for the answer anyway; I will try to hide the warning somehow in this case.
I might also try JetPack 5.1.0 later on. However, I think I will need to stay with JetPack 5.1.1 in production.

Danielius Kocan

unread,
Oct 3, 2023, 9:12:30 AM10/3/23
to doubango-ai
I also forgot to mention that when running the code several times in a row with any of the configuration options (klass_lpci_enabled, klass_vcr_enabled, klass_vmmr_enabled, klass_vbsr_enabled) set to True, a segmentation fault occurs from the same file, plugin_tensorrt_inference_engine.cxx, which I do not have access to. This does not happen at all when all 4 configuration options mentioned above are set to False.

Mamadou DIOP

unread,
Oct 3, 2023, 9:14:47 AM10/3/23
to Danielius Kocan, doubango-ai

You can change the debug level on the SDK, but it won't be propagated to the TensorRT and OpenVINO plugins: https://www.doubango.org/SDKs/anpr/docs/Configuration_options.html#debug-level

The WARN message is printed using this macro:

#define PLUGIN_TENSORRT_PRINT_WARN(FMT, ...) \
    fprintf(stderr, "**[PLUGIN_TENSORRT WARN]: function: \"%s()\" \nfile: \"%s\" \nline: \"%u\" \nmessage: " FMT "\n", \
        __FUNCTION__, __FILE__, __LINE__, ##__VA_ARGS__)

"__FILE__" defines the current source file which means "/home/nx/Projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx" is a local source file I built the plugin on. It's normal you cannot find it on your PC, it's on my Jetson NX.

The macro writes the warning to stderr; you can redirect stderr to a null stream to suppress the message. I don't know if it can be done just for the shared lib.
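
Since the message is printed by native code via fprintf, redirecting Python's sys.stderr object would not help; the redirection has to happen at the file-descriptor level. A minimal sketch of that idea (my own helper, not part of the SDK) in Python:

```python
import os
from contextlib import contextmanager

@contextmanager
def suppress_native_stderr():
    """Temporarily point file descriptor 2 (stderr) at /dev/null.

    This silences messages printed by native code (e.g. fprintf in a
    shared library), which bypasses Python's sys.stderr object.
    """
    saved_fd = os.dup(2)                       # keep a copy of real stderr
    devnull_fd = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull_fd, 2)                 # fd 2 now goes to /dev/null
        yield
    finally:
        os.dup2(saved_fd, 2)                   # restore the real stderr
        os.close(saved_fd)
        os.close(devnull_fd)
```

One could then wrap the SDK calls, e.g. `with suppress_native_stderr(): ...`, at the cost of also hiding any genuine native error output during that block.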

Open a ticket on the issue tracker: https://github.com/DoubangoTelecom/ultimateALPR-SDK/issues

Mamadou DIOP

unread,
Oct 3, 2023, 9:19:58 AM10/3/23
to Danielius Kocan, doubango-ai



You may get a segmentation fault when the plugin is detached, i.e. when your app/process has already exited; that's OK. If you get a segmentation fault while your app is running, then it's not OK. Share information on how to reproduce it if you're using JetPack 4.4.1 or 5.1.0.

We have thousands of Jetsons running the SDK daily and haven't seen any crash report in the last 3 years.
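
Not mentioned in the thread, but a common way to narrow down where a native crash happens when calling a shared library from Python is the standard faulthandler module, which dumps the Python-level traceback of every thread when the process receives a fatal signal such as SIGSEGV:

```python
import faulthandler
import sys

# Enable before loading/calling the SDK: on a segmentation fault the
# interpreter will dump each thread's Python traceback to stderr,
# showing which Python call was active when the native code crashed.
faulthandler.enable(file=sys.stderr, all_threads=True)
```

That traceback would show which SDK call was in flight at crash time, which is exactly the kind of reproduction detail useful in a bug report.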

Danielius Kocan

unread,
Oct 17, 2023, 7:25:22 AM10/17/23
to doubango-ai
Hi,

While attempting to reproduce the error using JetPack 5.1.0 (Version: 5.1-b147), I consistently encountered the warnings for every invocation of the checkResult() function associated with image processing. The warning details are as follows:

  • Warning 1:
    • Function: log()
    • File Path: /home/nx/Projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx
    • Line: 36
    • Message: [TensorRT Inference] The enqueue() method is now deprecated for engines constructed from a network using the NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. The recommended method is enqueueV2().
  • Warning 2:
    • Function: log()
    • File Path: /home/nx/Projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx
    • Line: 36
    • Message: [TensorRT Inference] The batchSize argument provided to this function doesn't modify input shapes. It's advised to employ the setBindingDimensions() function to alter input shapes.

Here are the steps I took for reproduction:

1. Installed Jetson Linux 35.2.1 on Jetson NX. (https://developer.nvidia.com/embedded/jetson-linux-r3521)

2. After startup, executed these commands:

sudo apt-get update; sudo apt-get install python3-pip
sudo pip3 install -U jetson-stats
sudo apt update; sudo apt dist-upgrade
sudo apt-get install dkms git build-essential
sudo apt install nvidia-jetpack
pip install Cython
cd ultimateALPR-SDK/binaries/jetson/aarch64
sudo chmod +x ./prepare.sh && sudo ./prepare.sh
python ../../../python/setup.py build_ext --inplace -v

PYTHONPATH=$PYTHONPATH:.:../../../python \
LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH \
python ../../../samples/python/recognizer/recognizer.py --image ../../../assets/images/lic_us_1280x720.jpg --assets ../../../assets --klass_lpci_enabled True


I also noted that when I activated the klass_vcr_enabled option, the warning count doubled, reflecting warnings for both lpci and vcr.

Here is a part of the jtop output with the versions of CUDA/cuDNN/TensorRT and other components; the main problem is most likely with TensorRT:

[Attachment: Screenshot from 2023-10-17 11-16-09.png]

Considering my steps and the environment, could there be an inconsistency with the Jetson Linux version or with the TensorRT version installed automatically via the "apt install nvidia-jetpack" command? Are there any other problems with the setup?

Mamadou DIOP

unread,
Oct 18, 2023, 6:35:04 PM10/18/23
to Danielius Kocan, doubango-ai

Danielius Kocan

unread,
Oct 19, 2023, 3:05:30 AM10/19/23
to doubango-ai
This has solved the problem. Now it only prints once at start-up, so I can remove the extra code I wrote to redirect the stderr stream.
Thank you a lot!