You may want to run your app at the DeepStream level to:
A. generate an engine file for your new environment
B. debug an app to see how it works
This can be achieved in the following way.
1. Go to your resource folder:
/mnt/nvme/toolkit_home/applications/MyCounter/resource$ ls -l
total 3572
-rw-r--r-- 1 nvidia nvidia 3233 Apr 21 20:13 dstest1_pgie_config_debug.txt
-rw-r--r-- 1 nvidia nvidia 3240 Apr 10 13:49 dstest1_pgie_config.txt
-rw-r--r-- 1 nvidia nvidia 3638560 Jan 13 08:19 libnvds_nvdcf.so
drwxr-xr-x 3 nvidia nvidia 4096 Dec 12 08:14 models
-rw-r--r-- 1 nvidia nvidia 1684 Jan 1 19:03 tracker_config.yml
2. Create a DeepStream config file for debugging:
$ diff dstest1_pgie_config.txt dstest1_pgie_config_debug.txt
63,65c63,65
< model-file=models/Primary_Detector/resnet10.caffemodel.gpg
< proto-file=models/Primary_Detector/resnet10.prototxt.gpg
< model-engine-file=models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine.gpg
---
> model-file=models/Primary_Detector/resnet10.caffemodel
> proto-file=models/Primary_Detector/resnet10.prototxt
> #model-engine-file=models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine.gpg
* Comment out the model-engine-file entry, and drop the .gpg suffix from the model-file and proto-file entries (the decrypted .caffemodel and .prototxt files must be present so the engine can be rebuilt from them).
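The step-2 edits above are mechanical, so they can also be scripted. A minimal sketch, run here against a three-line sample of the original config (same entries as in the diff above):

```shell
# Sample of the relevant lines from the original config.
cat > dstest1_pgie_config.txt <<'EOF'
model-file=models/Primary_Detector/resnet10.caffemodel.gpg
proto-file=models/Primary_Detector/resnet10.prototxt.gpg
model-engine-file=models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine.gpg
EOF

# Comment out the engine entry; strip .gpg from model-file and proto-file.
sed -e 's|^model-engine-file=|#model-engine-file=|' \
    -e 's|^\(model-file=.*\)\.gpg$|\1|' \
    -e 's|^\(proto-file=.*\)\.gpg$|\1|' \
    dstest1_pgie_config.txt > dstest1_pgie_config_debug.txt

cat dstest1_pgie_config_debug.txt
```

In the real resource folder you would run only the sed command against the actual dstest1_pgie_config.txt, which contains many more entries; the three expressions leave all other lines untouched.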
3. Launch a simple GStreamer pipeline, passing the debug config file:
/mnt/nvme/toolkit_home/applications/MyCounter/resource$ /mnt/nvme/toolkit_home/bin/launch_dsconfig.sh dstest1_pgie_config_debug.txt
Setting pipeline to PAUSED ...
Using winsys: x11
Creating LL OSD context new
0:00:02.974478850 6087 0x559c5f9870 INFO nvinfer gstnvinfer.cpp:549:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:02.974910571 6087 0x559c5f9870 WARN nvinfer gstnvinfer.cpp:545:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]:generateTRTModel(): INT8 not supported by platform. Trying FP16 mode.
0:00:02.975011561 6087 0x559c5f9870 INFO nvinfer gstnvinfer.cpp:549:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]:generateTRTModel(): Parsing prototxt a binaryproto Caffe model from files
0:03:44.011663936 6087 0x559c5f9870 INFO nvinfer gstnvinfer.cpp:549:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /mnt/nvme/toolkit_home/applications/MyCounter/resource/models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine
Pipeline is PREROLLING ...
Got context from element 'eglglessink0': gst.egl.EGLDisplay=context, display=(GstEGLDisplay)NULL;
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Creating LL OSD context new
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0: Output window was closed
Additional debug info:
/dvs/git/dirty/git-master_linux/3rdparty/gst/gst-nveglglessink/ext/eglgles/gsteglglessink.c(882): gst_eglglessink_event_thread (): /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0
Execution ended after 0:00:05.396495354
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...
4. Check how it worked. (The ERROR above only reports that the display window was closed; the engine had already been serialized at that point.) You should be able to find your newly created engine file:
/mnt/nvme/toolkit_home/applications/MyCounter/resource$ ls -l models/Primary_Detector/ | grep engine
-rw-r--r-- 1 nvidia nvidia 7950717 Apr 21 20:17 resnet10.caffemodel_b1_fp16.engine
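With the engine file in place, later runs can skip the lengthy build step (about 3.5 minutes in the log above) by pointing the config at the generated engine. A sketch of the relevant lines in the debug config, assuming you re-enable the entry with the new unencrypted filename:

```
model-file=models/Primary_Detector/resnet10.caffemodel
proto-file=models/Primary_Detector/resnet10.prototxt
model-engine-file=models/Primary_Detector/resnet10.caffemodel_b1_fp16.engine
```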
If you have an older version of the toolkit, you can write the launch script yourself as follows.
$ cat launch_dsconfig.sh
#!/bin/bash
# Usage: launch_dsconfig.sh <nvinfer-config-file>
[ -f "$1" ] || { echo "usage: $0 <nvinfer-config-file>" >&2; exit 1; }
gst-launch-1.0 filesrc location=/opt/nvidia/deepstream/deepstream-4.0/samples/streams/sample_1080p_h264.mp4 ! \
    decodebin ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path="$1" ! \
    nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink sync=false