Replies: 1 comment
-
You should first analyze the time spent in each part. You can refer to this doc. If the model inference time (TensorRT inference without pre/post processing) already takes more than 100 ms, I think using a smaller model could be a better choice.
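As a minimal sketch of that per-stage breakdown, assuming a typical C++ pipeline with separate pre-processing, TensorRT execution, and post-processing steps (all three stage functions below are hypothetical placeholders for your real calls):

```cpp
#include <chrono>
#include <cstdio>

// Hypothetical stand-ins for the pipeline stages; swap in the actual
// pre-processing, TensorRT execution, and post-processing calls.
void preprocess()  { /* resize / normalize the input image */ }
void infer_trt()   { /* enqueue the engine; synchronize the CUDA stream
                        before returning, otherwise the timing below only
                        covers the kernel launch, not the execution */ }
void postprocess() { /* decode boxes / text */ }

// Time one stage with a monotonic clock and return milliseconds.
template <typename F>
double time_ms(F&& stage) {
    auto t0 = std::chrono::steady_clock::now();
    stage();
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    // Warm-up run so engine initialization and CUDA context creation
    // do not inflate the first measurement.
    preprocess(); infer_trt(); postprocess();

    std::printf("preprocess:  %6.1f ms\n", time_ms(preprocess));
    std::printf("inference:   %6.1f ms\n", time_ms(infer_trt));
    std::printf("postprocess: %6.1f ms\n", time_ms(postprocess));
    return 0;
}
```

If the inference stage dominates, the budget has to come from the model itself (smaller model or smaller input shapes); if pre/post processing dominates, that can usually be optimized without retraining.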
-
I trained an mmocr model using DBNet & SATRN (small) and exported it to a TensorRT model.
C++ time statistics:
dbnet model: 30 ms
satrn model: 480 ms
export config:
mmdeploy-1.1.0\configs\mmocr\text-recognition\text-recognition_tensorrt_dynamic-32x32-32x640.py
ENV:
mmdeploy: main
mmocr: 1.0.0
mmdet: 3.0.0
tensorrt: 8.6.1.6
Question:
Can I optimize the textrecog model time to within 100 ms?
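One way to see whether the 480 ms is actually spent inside the engine, rather than in pre/post processing, is to profile the engine alone with trtexec, which ships with TensorRT. A hedged sketch; the engine file name and the input tensor name `input` are assumptions, so check the input name your deploy config actually exports:

```
trtexec --loadEngine=satrn.engine \
        --shapes=input:1x3x32x320 \
        --dumpProfile
```

`--dumpProfile` prints per-layer timings. With the dynamic 32x32 to 32x640 profile, latency also grows with the input width, so measure at the shapes you actually feed at runtime.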