I am trying to run an NVIDIA DeepStream 5.0 SDK sample application on Ubuntu 18.04, following the documentation (https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html).
The SDK is installed at "/opt/nvidia/deepstream/deepstream-5.0/".
The execution command is "deepstream-app -c <config file>".
Example:
"deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt"
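Before running, it can help to confirm that the sample config and model files exist at the expected locations (a quick sanity check, assuming the stock DeepStream 5.0 install layout):

```shell
# Sanity check: verify the sample config and model directory exist
# (assumes the default DeepStream 5.0 install paths).
CFG=/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt
MODEL_DIR=/opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector

[ -f "$CFG" ] && echo "config: present" || echo "config: missing"
[ -d "$MODEL_DIR" ] && echo "models: present" || echo "models: missing"
```

Both paths resolved correctly on my machine.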
However, the application fails at runtime. The full console output is given below.
amarnath@amarnath-Precision-T3610:/opt$ deepstream-app -c /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt
ERROR: ../nvdsinfer/nvdsinfer_model_builder.cpp:1408 Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine open error
0:00:00.324481231 31390 0x564e46ff5ea0 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1566> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed
0:00:00.324517060 31390 0x564e46ff5ea0 WARN nvinfer gstnvinfer.cpp:599:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1673> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed, try rebuild
0:00:00.324530469 31390 0x564e46ff5ea0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1591> [UID = 1]: Trying to create engine from model files
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1163 INT8 not supported by platform. Trying FP16 mode.
WARNING: ../nvdsinfer/nvdsinfer_model_builder.cpp:1177 FP16 not supported by platform. Using FP32 mode.
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: TensorRT was linked against cuDNN 7.6.4 but loaded cuDNN 7.6.3
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
INFO: ../nvdsinfer/nvdsinfer_func_utils.cpp:37 [TRT]: Detected 1 inputs and 2 output network tensors.
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: TensorRT was linked against cuDNN 7.6.4 but loaded cuDNN 7.6.3
0:00:04.170021642 31390 0x564e46ff5ea0 INFO nvinfer gstnvinfer.cpp:602:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1624> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-5.0/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_fp32.engine successfully
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: TensorRT was linked against cuDNN 7.6.4 but loaded cuDNN 7.6.3
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:34 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:00:04.175528889 31390 0x564e46ff5ea0 INFO nvinfer gstnvinfer_impl.cpp:311:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/deepstream-app/config_infer_primary.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.
** INFO: <bus_callback:181>: Pipeline ready
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: engine.cpp (418) - Cuda Error in enqueueInternal: 209 (no kernel image is available for execution on the device)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: FAILED_EXECUTION: std::exception
ERROR: nvdsinfer_backend.cpp:290 Failed to enqueue inference batch
ERROR: nvdsinfer_context_impl.cpp:1408 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:04.432182851 31390 0x564e400e0b20 WARN nvinfer gstnvinfer.cpp:1188:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: gstnvinfer.cpp(1188): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: engine.cpp (418) - Cuda Error in enqueueInternal: 209 (no kernel image is available for execution on the device)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: FAILED_EXECUTION: std::exception
ERROR: nvdsinfer_backend.cpp:290 Failed to enqueue inference batch
ERROR: nvdsinfer_context_impl.cpp:1408 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:04.476620553 31390 0x564e400e0b20 WARN nvinfer gstnvinfer.cpp:1188:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: engine.cpp (418) - Cuda Error in enqueueInternal: 209 (no kernel image is available for execution on the device)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: FAILED_EXECUTION: std::exception
ERROR: nvdsinfer_backend.cpp:290 Failed to enqueue inference batch
ERROR: nvdsinfer_context_impl.cpp:1408 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:04.541993813 31390 0x564e400e0b20 WARN nvinfer gstnvinfer.cpp:1188:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: engine.cpp (418) - Cuda Error in enqueueInternal: 209 (no kernel image is available for execution on the device)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: FAILED_EXECUTION: std::exception
ERROR: nvdsinfer_backend.cpp:290 Failed to enqueue inference batch
ERROR: nvdsinfer_context_impl.cpp:1408 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:04.608814180 31390 0x564e400e0b20 WARN nvinfer gstnvinfer.cpp:1188:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: gstnvinfer.cpp(1188): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: gstnvinfer.cpp(1188): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR from primary_gie: Failed to queue input batch for inferencing
Debug info: gstnvinfer.cpp(1188): gst_nvinfer_input_queue_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: engine.cpp (418) - Cuda Error in enqueueInternal: 209 (no kernel image is available for execution on the device)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: FAILED_EXECUTION: std::exception
ERROR: nvdsinfer_backend.cpp:290 Failed to enqueue inference batch
ERROR: nvdsinfer_context_impl.cpp:1408 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:04.774548068 31390 0x564e400e0b20 WARN nvinfer gstnvinfer.cpp:1188:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: engine.cpp (418) - Cuda Error in enqueueInternal: 209 (no kernel image is available for execution on the device)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: FAILED_EXECUTION: std::exception
ERROR: nvdsinfer_backend.cpp:290 Failed to enqueue inference batch
ERROR: nvdsinfer_context_impl.cpp:1408 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:04.778180781 31390 0x564e400e0b20 WARN nvinfer gstnvinfer.cpp:1188:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: engine.cpp (418) - Cuda Error in enqueueInternal: 209 (no kernel image is available for execution on the device)
ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:31 [TRT]: FAILED_EXECUTION: std::exception
ERROR: nvdsinfer_backend.cpp:290 Failed to enqueue inference batch
ERROR: nvdsinfer_context_impl.cpp:1408 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
0:00:04.848116827 31390 0x564e400e0b20 WARN nvinfer gstnvinfer.cpp:1188:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
App run failed
Screenshots of the output window and of my NVIDIA driver and CUDA versions are attached.
Any help is appreciated.
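For completeness, the driver and CUDA toolkit versions can also be captured as text (a small diagnostic sketch; tool availability depends on the install):

```shell
# Print GPU name and driver version, then the CUDA toolkit version.
# Falls back to a message if a tool is not on PATH.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,driver_version --format=csv
else
  echo "nvidia-smi: not found"
fi

if command -v nvcc >/dev/null 2>&1; then
  nvcc --version
else
  echo "nvcc: not found"
fi
```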