The DeepStream SDK supports a mechanism for adding third-party or custom algorithms to the reference application by modifying the example plugin (gst-dsexample). The sources for the plugin are in the sources/gst-plugins/gst-dsexample directory in the SDK. The plugin was written for GStreamer 1.14.1 but is compatible with newer versions of GStreamer. It derives from the GstBaseTransform class (-libs/html/GstBaseTransform.html).
This release includes a simple static library, dsexample_lib, that demonstrates the interface between custom libraries and this GStreamer plugin. The library generates simple labels of the form Obj_label. The library implements these functions:
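The function names below match those referenced later in this document (DsExampleCtx_Init, DsExampleCtx_Deinit, DsExample_QueueInput, DsExample_DequeueOutput). The bodies are a minimal self-contained sketch of the queue-in/dequeue-out contract such a library exposes, not the actual dsexample_lib implementation:

```cpp
#include <deque>
#include <string>

// Hypothetical sketch of the dsexample_lib interface. The function names
// follow those mentioned in this document; the bodies only illustrate the
// contract and are not the real implementation.
struct DsExampleCtx {
    std::deque<std::string> outputs;  // labels waiting to be dequeued
    unsigned int next_id = 0;         // running index used to build labels
};

// Allocate and initialize a library context.
DsExampleCtx *DsExampleCtx_Init() { return new DsExampleCtx(); }

// Release the context.
void DsExampleCtx_Deinit(DsExampleCtx *ctx) { delete ctx; }

// Queue a frame (or object crop) for processing; this sketch simply
// fabricates a label of the form "Obj_<n>" for each queued input.
void DsExample_QueueInput(DsExampleCtx *ctx, const void * /*frame_data*/) {
    ctx->outputs.push_back("Obj_" + std::to_string(ctx->next_id++));
}

// Dequeue the next processed label so the plugin can attach it to the
// DeepStream metadata for the corresponding object.
std::string DsExample_DequeueOutput(DsExampleCtx *ctx) {
    std::string label = ctx->outputs.front();
    ctx->outputs.pop_front();
    return label;
}
```

In the real plugin, the pointer handed to the queue function would be image data for a frame or object crop; here it is ignored so the sketch stays dependency-free.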
The GStreamer plugin itself is a standard in-place transform plugin: because it does not generate new buffers but only adds or updates existing metadata, it can operate on buffers in place. Some of the code is standard GStreamer plugin boilerplate (e.g., plugin_init, class_init, instance_init). Other functions of interest are as follows:
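Conceptually, an in-place transform receives a buffer, mutates its attached metadata, and returns the same buffer without copying pixel data. The following is a dependency-free sketch of that idea; the Buffer and ObjectMeta types are stand-ins, not GStreamer's GstBuffer or DeepStream's metadata types:

```cpp
#include <string>
#include <vector>

// Stand-ins for GStreamer/DeepStream types -- these structs only model
// the "modify metadata in place" idea, not the real APIs.
struct ObjectMeta { std::string label; };
struct Buffer {
    std::vector<ObjectMeta> metadata;  // metadata list attached to the buffer
    // ...pixel data would live here; an in-place transform never copies it
};

enum FlowReturn { FLOW_OK, FLOW_ERROR };

// Analogue of the plugin's transform_ip vmethod: no new buffer is
// allocated; the function only appends/updates metadata on the buffer
// it was handed.
FlowReturn transform_ip(Buffer &buf) {
    ObjectMeta meta;
    meta.label = "Obj_" + std::to_string(buf.metadata.size());
    buf.metadata.push_back(meta);  // update the existing buffer in place
    return FLOW_OK;
}
```

A full-copy transform would instead allocate an output buffer and leave the input untouched; GstBaseTransform selects between the two modes based on which vmethods the subclass implements.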
The pre-compiled deepstream-app binary already has the functionality to parse the configuration and add the sample element to the pipeline. To enable and configure the plugin, add the following section to an existing configuration file (for example, source4_720p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt):
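A typical [ds-example] group looks like the following. The property names match those exposed by the shipped gst-dsexample plugin, but the values are illustrative and may need adjusting for your setup:

```
[ds-example]
enable=1
processing-width=640
processing-height=480
full-frame=0
unique-id=15
gpu-id=0
```

Setting full-frame=0 makes the plugin process object crops from the primary detector rather than whole frames; unique-id distinguishes metadata generated by this component from that of other plugins in the pipeline.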
To run the plugin on objects detected by the primary model, construct a pipeline with the following command:
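Assuming the sample stream and primary-inference configuration that ship with the SDK (paths vary per install, so adjust them for your system), such a pipeline might look like:

```shell
gst-launch-1.0 filesrc location=streams/sample_720p.h264 ! h264parse ! \
  nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  dsexample full-frame=0 ! nvvideoconvert ! nvdsosd ! nveglglessink
```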
To run the plugin to blur objects detected by the primary model, construct a pipeline with the following command:
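This is the same pipeline shape as above with the plugin's blur mode enabled; the blur-objects property name follows the gst-dsexample sources, and the file paths are again illustrative:

```shell
gst-launch-1.0 filesrc location=streams/sample_720p.h264 ! h264parse ! \
  nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  dsexample full-frame=0 blur-objects=1 ! nvvideoconvert ! nvdsosd ! nveglglessink
```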
Like the other DeepStream SDK GStreamer plugins, a custom plugin can also use the NVTX APIs. More information on these APIs can be found in _tools_extension_library_nvtx.htm. Follow the steps below to add NVTX APIs to a custom plugin:
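As a sketch, NVTX ranges typically wrap the per-buffer processing so the work shows up as a named span in profilers such as Nsight Systems. Here the nvtxRangePushA/nvtxRangePop calls are compiled in only when NVTX is available, with no-op fallbacks so the sketch stays self-contained; the macro names and the process_buffer function are illustrative:

```cpp
// Wrap buffer processing in an NVTX range so it is visible on a profiler
// timeline. When built without NVTX, the macros compile to no-ops.
#ifdef USE_NVTX
#include <nvToolsExt.h>            // link with -lnvToolsExt
#define DS_RANGE_PUSH(name) nvtxRangePushA(name)
#define DS_RANGE_POP()      nvtxRangePop()
#else
#define DS_RANGE_PUSH(name) ((void)0)
#define DS_RANGE_POP()      ((void)0)
#endif

// Illustrative per-buffer hook: the range covers exactly the custom work,
// so the profiler span length equals the plugin's processing time.
int process_buffer(int n_objects) {
    DS_RANGE_PUSH("dsexample-process");
    int processed = 0;
    for (int i = 0; i < n_objects; ++i)
        ++processed;               // custom per-object work would go here
    DS_RANGE_POP();
    return processed;
}
```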
The CUDA and CPU memory in an NvBufSurface can be accessed through the cv::cuda::GpuMat and cv::Mat interfaces of OpenCV, respectively. This allows NvBufSurface to work with any computer vision algorithm implemented in OpenCV. The following code snippet shows how to access and use the CUDA memory of an NvBufSurface in OpenCV.
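A sketch of such a snippet follows. The NvBufSurface field names (surfaceList, dataPtr, pitch) follow nvbufsurface.h, but the code assumes RGBA (CV_8UC4) CUDA memory and OpenCV built with CUDA support, and is illustrative rather than a drop-in implementation:

```cpp
#include <opencv2/core/cuda.hpp>
#include "nvbufsurface.h"

// Wrap the CUDA memory of the first surface in a cv::cuda::GpuMat without
// copying. Assumes the surface holds RGBA data in CUDA-accessible memory
// (e.g. NVBUF_MEM_CUDA_UNIFIED); field names follow nvbufsurface.h.
void wrap_surface(NvBufSurface *surface) {
    NvBufSurfaceParams &p = surface->surfaceList[0];
    cv::cuda::GpuMat gpu_frame(p.height, p.width, CV_8UC4,
                               p.dataPtr, p.pitch);
    // gpu_frame can now be passed to any OpenCV CUDA algorithm,
    // e.g. cv::cuda::cvtColor or a cv::cuda::Filter.
}
```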
On the Jetson platform, if the memory of an NvBufSurface is of type NVBUF_MEM_SURFACE_ARRAY, you must convert it to CUDA memory through the CUDA-EGL interop before accessing it in OpenCV. Refer to sources/gst-plugins/gst-dsexample/gstdsexample.cpp for accessing NvBufSurface memory as an OpenCV matrix (cv::Mat). The following steps are required:
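Broadly, those steps are: map the surface to an EGLImage, register the image with CUDA, retrieve the mapped frame, and unmap when done. A hedged sketch follows; the API names come from nvbufsurface.h and the CUDA driver's EGL interop, but error handling and CUDA context setup are elided:

```cpp
#include <cuda.h>
#include <cudaEGL.h>
#include "nvbufsurface.h"

// Jetson only: NVBUF_MEM_SURFACE_ARRAY memory must be mapped through the
// CUDA-EGL interop before CUDA code (including OpenCV CUDA) can touch it.
void map_surface_array(NvBufSurface *surface) {
    // 1. Map the buffer to an EGLImage.
    NvBufSurfaceMapEglImage(surface, 0);

    // 2. Register the EGLImage with CUDA and retrieve the mapped frame.
    CUgraphicsResource resource = nullptr;
    CUeglFrame egl_frame;
    cuGraphicsEGLRegisterImage(&resource,
                               surface->surfaceList[0].mappedAddr.eglImage,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
    cuGraphicsResourceGetMappedEglFrame(&egl_frame, resource, 0, 0);

    // 3. egl_frame.frame.pPitch[0] now points at CUDA-accessible memory
    //    that can be wrapped in a cv::cuda::GpuMat.

    // 4. Clean up once processing is finished.
    cuGraphicsUnregisterResource(resource);
    NvBufSurfaceUnMapEglImage(surface, 0);
}
```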
I can make and run gst-dsexample.cpp successfully. I'm just wondering whether there is a way to develop a gst-dsexample-like plugin in Python, and where the resources for that are. (I am aware of the Python bindings and ran deepstream_test_3.py successfully, but I'm not sure whether the same bindings can be used to develop a plugin directly.)
A Python GStreamer plugin can be developed with the GStreamer Python bindings, but it can only be used in a Python environment and there are some limitations with this method.
Some developers have shared their experiences:
The following guide shows the steps to write a GStreamer plugin in Python for any computer vision or image processing task and use it in a standard GStreamer pipeline from the command line. Additionally, the post explains how to get/set custom properties from...
DeepStream SDK 4.0 and the VisionWorks APIs (for dense optical flow estimation) detect the motion of vehicles in your parking lot. This lets you determine their direction and report on whether they are entering or exiting the premises. All the pixel-level information about the vehicles is then presented to you as data-rich, actionable insights.
Nvinfer: nvinfer provides the primary and secondary inference engines used for car detection and classification. The primary engine detects the presence of a car in the parking lot; the secondary engine identifies the car's color and model type.
gst-dsexample: gst-dsexample is a customized sample plugin for detecting the movement and direction of the recognized cars. A motion estimation algorithm, built with the VisionWorks SDK and OpenCV, analyzes motion in the scene within the ROI regions of the detected cars, making it straightforward to calculate the direction of each vehicle.
The DeepStream SDK 4.0 comes prepackaged with sample applications. It provides real-time edge analytics to capture deep-dive insights. The SDK provides easy integration with cloud service providers, containerized deployment, and support for the Jetson platform.
The sheer scale of the smart city boggles the mind. Tens of billions of sensors will be deployed worldwide, used to make every street, highway, park, airport, parking lot, and building more efficient. This translates to better designs of our roadways to reduce congestion, stronger understanding of consumer behavior in retail environments, and the ability to quickly find lost children to keep our cities safe. Video represents one of the richest sensors used, generating massive streams of data which need analysis. NVIDIA DeepStream 2.0 enables developers to rapidly and simply create video analytics applications.
Humans currently process only a fraction of the captured video. Traditional methods are far less reliable than human interpretation. Intelligent video analytics solves this challenge by using deep learning to understand video with impressive accuracy in real time.
NVIDIA has released the DeepStream Software Development Kit (SDK) 2.0 for Tesla to address the most challenging smart city problems. DeepStream is a key part of the NVIDIA Metropolis platform. The technology enables developers to design and deploy scalable AI applications for intelligent video analytics (IVA). Close to 100 NVIDIA Metropolis partners are already providing products and applications that use deep learning on GPUs.
The DeepStream SDK is based on the open source GStreamer multimedia framework. The plugin architecture provides functionality such as video encode/decode, scaling, inferencing, and more. Plugins can be linked to create an application pipeline. Application developers can take advantage of the reference accelerated plugins for NVIDIA platforms provided as part of this SDK.
A DeepStream application is a set of modular plugins connected in a graph. Each plugin represents a functional block like inference using TensorRT or multi-stream decode. Where applicable, plugins are accelerated using the underlying hardware to deliver maximum performance. Each plugin can be instantiated multiple times in the application as required.
DeepStream ships with a reference application which demonstrates intelligent video analytics for multiple camera streams in real time. The provided reference application accepts input from various types of sources such as cameras, RTSP streams, and disk. It can accept RAW or encoded video data from multiple sources simultaneously. The video aggregator plugin (nvstreammux) forms a batch of buffers from these input sources. The TensorRT-based plugin (nvinfer) then detects primary objects in this batch of frames. The KLT-based tracker element (nvtracker) generates a unique ID for each object and tracks it.
The Tiler plugin (nvmultistreamtiler) composites this batch into a single frame as a 2D array. The DeepStream OSD plugin (nvosd) draws shaded boxes, rectangles, and text on the composited frame using the generated metadata as you can see in Figure 3.
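The graph described above can be sketched as a gst-launch command. The element names are as listed in the text; the source URI, inference config path, and element properties are illustrative and vary by DeepStream version and platform:

```shell
gst-launch-1.0 uridecodebin uri=file:///path/to/sample.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  nvtracker ! \
  nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! \
  nvosd ! nveglglessink
```

With more input sources, each stream connects to its own nvstreammux sink pad (m.sink_0, m.sink_1, ...) and batch-size is raised accordingly.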
Metadata contains information generated by all the plugins in the graph. Each plugin adds incremental information to the metadata. The NvDsMeta structure is used by all the components in the application to represent object related metadata. Refer to gstnvivameta_api.h and the associated documentation for DeepStream metadata specifications.
The DeepStream SDK provides a template plugin that can be used for implementing custom libraries, algorithms, and neural networks in order to seamlessly achieve plug-and-play functionality with an application graph. The sources for this template are located in the sources directory in gst-dsexample_sources.
As part of the template plugin, a static library dsexample_lib is provided for interfacing custom IP. The library generates string labels to show the integration of a library's output with the DeepStream SDK metadata format. The library implements four functions:
The application requires additional changes to parse the library-related configuration and to add the new custom element to the pipeline. Refer to the following files in nvgstiva-app_sources for the required changes:
The pre-compiled deepstream-app binary already includes functionality to parse the configuration and add the sample elements to the pipeline. Add the following section to an existing configuration file to enable and configure the plugin:
Implementing custom logic or algorithms within the plugin requires replacing the function calls DsExampleCtx_Init, DsExampleCtx_Deinit, DsExample_QueueInput, and DsExample_DequeueOutput with the corresponding functions of the custom library.
Consider an IP camera capturing a live feed to monitor people as they enter and exit a location. The camera stream, encoded as H.264 or H.265, first needs to be decoded. Once the stream has been decoded, the inference engine identifies people and wraps a bounding box around each of them. These bounding boxes feed into an object tracker, which generates a unique ID for each person. The resulting analytics can then be output to whatever display the developer needs, as well as stored for later use.
DeepStream represents a concurrent software system with the various plugins executing together while processing and passing frames through the pipeline. The performance of a DeepStream application depends on the execution characteristics of the individual plugins in conjunction with how efficiently they share the underlying hardware as they execute concurrently.