I have noticed that the inference speed using the GPU is slower than that of the CPU, and I am uncertain about the underlying issue. #2802
Jeck-Liu-Create asked this question in Q&A (unanswered).
Introduction:
I followed the method outlined in the documentation for manual compilation on Windows. The steps are as follows:
1. PPLCV
2. CUDA + TensorRT
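For reference, the ppl.cv step from MMDeploy's build-from-source guide for Windows looks roughly like the following; the generator, toolset, and flags may differ by MMDeploy and ppl.cv version, so treat this as a sketch rather than the exact commands used.

```powershell
# Sketch of building ppl.cv with CUDA support on Windows, following
# MMDeploy's build-from-source guide; versions and flags may differ.
git clone https://github.com/openppl-public/ppl.cv.git
cd ppl.cv
$env:PPLCV_DIR = "$pwd"
mkdir pplcv-build; cd pplcv-build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
  -DCMAKE_INSTALL_PREFIX=install -DPPLCV_USE_CUDA=ON
cmake --build . --config Release -- /m
cmake --install . --config Release
```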
In this process, I changed the backend flag given in the official documentation from `-DMMDEPLOY_TARGET_BACKENDS="trt"` to `-DMMDEPLOY_TARGET_BACKENDS="ort;trt"`, because building with `-DMMDEPLOY_TARGET_BACKENDS="trt"` alone resulted in an error.
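For context, the SDK configure step from MMDeploy's Windows build guide looks roughly like this; the install paths are placeholders for wherever TensorRT, cuDNN, and ONNX Runtime live on the machine, and the backend list reflects the modification described above.

```powershell
# Sketch of the MMDeploy SDK configure/build step on Windows; directory
# paths below are placeholders, not the poster's actual locations.
cd mmdeploy
mkdir build; cd build
cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 `
  -DMMDEPLOY_BUILD_SDK=ON `
  -DMMDEPLOY_BUILD_EXAMPLES=ON `
  -DMMDEPLOY_TARGET_DEVICES="cuda" `
  -DMMDEPLOY_TARGET_BACKENDS="ort;trt" `
  -Dpplcv_DIR="$env:PPLCV_DIR/pplcv-build/install/lib/cmake/ppl" `
  -DTENSORRT_DIR="$env:TENSORRT_DIR" `
  -DCUDNN_DIR="$env:CUDNN_DIR" `
  -DONNXRUNTIME_DIR="$env:ONNXRUNTIME_DIR"
cmake --build . --config Release -- /m
cmake --install . --config Release
```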
Code

Below is the modified program I created to perform inference using DBNet independently.
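The listing itself is not reproduced above; as a rough stand-in, here is a minimal sketch of such a program, modeled on MMDeploy's text-detection demo. It assumes the MMDeploy 1.x C API (`mmdeploy/text_detector.h`) and OpenCV; the timing and argument handling are illustrative, not the poster's exact code.

```cpp
// Minimal sketch: run an MMDeploy text detector (e.g. a converted DBNet
// model) on one image and time the call. Assumes MMDeploy 1.x C API.
#include <chrono>
#include <cstdio>
#include <opencv2/imgcodecs.hpp>
#include "mmdeploy/text_detector.h"

int main(int argc, char* argv[]) {
  if (argc != 4) {
    std::fprintf(stderr, "usage: text_det <device_name> <model_dir> <image_path>\n");
    return 1;
  }
  const char* device_name = argv[1];  // "cpu" or "cuda"
  const char* model_path = argv[2];   // directory of the converted model
  cv::Mat img = cv::imread(argv[3]);
  if (img.empty()) {
    std::fprintf(stderr, "failed to load image %s\n", argv[3]);
    return 1;
  }

  // Create the detector pipeline from the converted model directory.
  mmdeploy_text_detector_t detector{};
  int status = mmdeploy_text_detector_create_by_path(model_path, device_name, 0, &detector);
  if (status != MMDEPLOY_SUCCESS) {
    std::fprintf(stderr, "failed to create detector, code %d\n", status);
    return 1;
  }

  // Wrap the OpenCV image as an MMDeploy mat (BGR, uint8).
  mmdeploy_mat_t mat{};
  mat.data = img.data;
  mat.height = img.rows;
  mat.width = img.cols;
  mat.channel = 3;
  mat.format = MMDEPLOY_PIXEL_FORMAT_BGR;
  mat.type = MMDEPLOY_DATA_TYPE_UINT8;

  // Run inference once and measure wall-clock time around the call.
  mmdeploy_text_detection_t* bboxes{};
  int* bbox_count{};
  auto t0 = std::chrono::steady_clock::now();
  status = mmdeploy_text_detector_apply(detector, &mat, 1, &bboxes, &bbox_count);
  auto t1 = std::chrono::steady_clock::now();
  if (status != MMDEPLOY_SUCCESS) {
    std::fprintf(stderr, "inference failed, code %d\n", status);
    return 1;
  }
  double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
  std::printf("detected %d boxes in %.2f ms on %s\n", *bbox_count, ms, device_name);

  mmdeploy_text_detector_release_result(bboxes, bbox_count, 1);
  mmdeploy_text_detector_destroy(detector);
  return 0;
}
```

Note that a single timed call like this includes one-time warm-up cost (CUDA context creation, engine loading), which is worth keeping in mind when comparing CPU and GPU timings.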
Debug arguments (CUDA run):

```
d:/my_progarm/mmdeploy/mmdeploy_models/mmocr/dbnet/ort c:/Users/12994/source/repos/TextDetection/x64/Release/demo_text_det.jpg --device cuda
```
Output: (screenshot)
For comparison, I ran the same test on the CPU with ONNX Runtime, as outlined in the documentation.
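The CPU command is not shown in the post; presumably it used the same executable and model directory with the device switched, e.g. (this exact invocation is an assumption):

```
d:/my_progarm/mmdeploy/mmdeploy_models/mmocr/dbnet/ort c:/Users/12994/source/repos/TextDetection/x64/Release/demo_text_det.jpg --device cpu
```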
Output: (screenshot)