Hello,
I recently activated a license on a Jetson NX device and tried running ultimateALPR with its default settings. Everything appeared to function correctly. However, when I set any of the configuration options (klass_lpci_enabled, klass_vcr_enabled, klass_vmmr_enabled, klass_vbsr_enabled) to True, multiple warnings began to appear, triggered for each image that ultimateALPR processes.
Here are the repeated warnings:
[PLUGIN_TENSORRT WARN]: function: "log()"
file: "/home/nx/Projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx"
line: "36"
message: [TensorRT Inference] From logger: The enqueue() method has been deprecated when used with engines built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. Please use enqueueV2() instead.
[PLUGIN_TENSORRT WARN]: function: "log()"
file: "/home/nx/Projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx"
line: "36"
message: [TensorRT Inference] From logger: Also, the batchSize argument passed into this function has no effect on changing the input shapes. Please use setBindingDimensions() function to change input shapes instead.
Based on the warnings, there appears to be an inconsistency between TensorRT and the referenced plugin_tensorrt_inference_engine.cxx file. Interestingly, I can't find this file, or even its path, anywhere on my device.
Any assistance or input into resolving these warnings would be greatly appreciated.
Thank you!
Hi,
These warnings have no effect on accuracy or speed. As I recall, in JetPack 4.x NVIDIA made an explicit batch size mandatory when the models are optimized, but there was no enqueueV2 function at the time. I guess they have deprecated something else again.
Run apt-cache show nvidia-jetpack | grep "Version:" on your device and share the output.
--
You received this message because you are subscribed to the Google Groups "doubango-ai" group.
To unsubscribe from this group and stop receiving emails from it, send an email to doubango-ai...@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/doubango-ai/85909431-d8c3-428e-9c09-d158cea8ab33n%40googlegroups.com.
Additional note: Quote from https://github.com/DoubangoTelecom/ultimateALPR-SDK/blob/master/Jetson.md#requirements : "We require JetPack 4.4.1 or JetPack 5.1.0."
You said you're using 5.1.1, which isn't officially supported. So, do you see the same warnings with 5.1.0?
You can change the debug level on the SDK, but it won't be propagated to the TensorRT and OpenVINO plugins: https://www.doubango.org/SDKs/anpr/docs/Configuration_options.html#debug-level
The WARN message is printed using the following macro:

#   define PLUGIN_TENSORRT_PRINT_WARN(FMT, ...) fprintf(stderr, "**[PLUGIN_TENSORRT WARN]: function: \"%s()\" \nfile: \"%s\" \nline: \"%u\" \nmessage: " FMT "\n", __FUNCTION__, __FILE__, __LINE__, ##__VA_ARGS__)
"__FILE__" expands to the current source file, which means "/home/nx/Projects/ultimateTRT/pluginTensorRT/source/plugin_tensorrt_inference_engine.cxx" is a local source file on the machine where I built the plugin. It's normal that you cannot find it on your PC; it's on my Jetson NX.
The macro writes the warning to stderr, so you can redirect stderr to the null stream to inhibit the message. I don't know whether that could be done for the shared library alone.
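A minimal sketch of that suggestion: the emit function below is a hypothetical stand-in for the recognizer process (the warning text is quoted from the log above), and shows both dropping stderr entirely and filtering only the plugin warnings:

```shell
# Stand-in for the recognizer process: a real result on stdout,
# the plugin warning on stderr (text quoted from the log above).
emit() {
    echo "plate: ABC123"
    echo '[PLUGIN_TENSORRT WARN]: function: "log()"' >&2
}

# Option 1: drop everything the process writes to stderr.
emit 2>/dev/null

# Option 2: merge stderr into stdout, then filter out only the plugin warnings.
emit 2>&1 | grep -v 'PLUGIN_TENSORRT WARN'
```

Note that option 1 also hides genuine errors, so the grep -v filter is the safer choice if the logs matter.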
Open a ticket on the issue tracker:
https://github.com/DoubangoTelecom/ultimateALPR-SDK/issues
I also forgot to mention that, when running the code several times in a row with any of the configuration options (klass_lpci_enabled, klass_vcr_enabled, klass_vmmr_enabled, klass_vbsr_enabled) set to True, a segmentation fault occurs in the same file, plugin_tensorrt_inference_engine.cxx, which I do not have access to. This does not happen at all when all four of these options are set to False.
You may see a segmentation fault when the plugin is detached, i.e. when your app/process has exited; that's OK. If you get a segmentation fault while your app is still running, then it's not OK. Share information on how to reproduce it if you're using JetPack 4.4.1 or 5.1.0.
We have thousands of Jetsons running the SDK daily and haven't seen any crash report in the last 3 years.
While attempting to reproduce the issue on JetPack 5.1.0 (Version: 5.1-b147), I consistently encountered the same warnings as quoted above on every invocation of the checkResult() function associated with image processing.
Here are the steps I took for reproduction:
1. Installed Jetson Linux 35.2.1 on Jetson NX. (https://developer.nvidia.com/embedded/jetson-linux-r3521)
2. After startup, executed these commands:
sudo apt-get update; sudo apt-get install python3-pip
sudo pip3 install -U jetson-stats
sudo apt update; sudo apt dist-upgrade
sudo apt-get install dkms git build-essential
sudo apt install nvidia-jetpack
pip install Cython
cd ultimateALPR-SDK/binaries/jetson/aarch64
sudo chmod +x ./prepare.sh && sudo ./prepare.sh
python ../../../python/setup.py build_ext --inplace -v
PYTHONPATH=$PYTHONPATH:.:../../../python \
LD_LIBRARY_PATH=.:$LD_LIBRARY_PATH \
python ../../../samples/python/recognizer/recognizer.py --image ../../../assets/images/lic_us_1280x720.jpg --assets ../../../assets --klass_lpci_enabled True
I also noted that when I additionally enabled the klass_vcr_enabled option, the warning count doubled, reflecting warnings for both lpci and vcr.
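The doubling can be checked mechanically by counting the warning lines per run. A minimal sketch, using a hypothetical run_recognizer stand-in that emits one warning per enabled classifier (the real command is the recognizer.py invocation above):

```shell
# Stand-in for one recognizer run with two classifiers enabled:
# each enabled classifier contributes one TensorRT warning on stderr.
run_recognizer() {
    echo '[PLUGIN_TENSORRT WARN]: lpci' >&2
    echo '[PLUGIN_TENSORRT WARN]: vcr' >&2
    echo 'result: ok'
}

# Send stderr down the pipe, discard normal stdout output,
# and count only the warning lines.
count=$(run_recognizer 2>&1 >/dev/null | grep -c 'PLUGIN_TENSORRT WARN')
echo "warnings per run: $count"
```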
Here is part of the jtop output with the versions of CUDA/cuDNN/TensorRT and other components; the main problem is most likely with TensorRT:

Considering my steps and the environment, could there be an inconsistency with the Jetson Linux version, or with the TensorRT version installed automatically by the "apt install nvidia-jetpack" command? Are there any other problems with the setup?
Hi,
Check out https://github.com/DoubangoTelecom/ultimateALPR-SDK/commit/b9ef277a1c4bf23409e1c05d43eadc3052e661f2