[Enhance] support TensorRT engine for onnxruntime #1739

Merged: 6 commits merged into open-mmlab:master on Feb 20, 2023

Conversation

yhna940 (Contributor) commented Feb 9, 2023

Thanks for your contribution, and we appreciate it a lot. The following instructions will make your pull request healthier and help it receive feedback more easily. If you do not understand some items, don't worry, just make the pull request and seek help from the maintainers.

Motivation

With the TensorRT execution provider, ONNX Runtime delivers better inference performance on the same hardware compared to generic GPU acceleration. The TensorRT execution provider in ONNX Runtime makes use of NVIDIA's TensorRT deep learning inference engine to accelerate ONNX models on NVIDIA GPUs.

https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html
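
For context, a minimal sketch of selecting the TensorRT execution provider through the standard ONNX Runtime API. The model path and the exact provider list below are illustrative, not code from this PR:

import onnxruntime as ort

# Providers are listed in order of preference: TensorRT first, then CUDA,
# then CPU as the final fallback. 'model.onnx' is a placeholder path.
providers = [
    'TensorrtExecutionProvider',
    'CUDAExecutionProvider',
    'CPUExecutionProvider',
]
session = ort.InferenceSession('model.onnx', providers=providers)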

Modification

Please briefly describe what modification is made in this PR.

BC-breaking (Optional)

Does the modification introduce changes that break the backward-compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

Checklist

  1. Pre-commit or other linting tools are used to fix the potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
  3. If the modification has a dependency on downstream projects of a newer version, this PR should be tested with all supported versions of downstream projects.
  4. The documentation has been modified accordingly, like docstring or example tutorials.

CLAassistant commented Feb 9, 2023

CLA assistant check
All committers have signed the CLA.

@yhna940 yhna940 changed the title [Enhance] support trt engine for onnxruntime [Enhance] support TensorRT engine for onnxruntime Feb 9, 2023
@lvhan028 lvhan028 requested review from grimoire and lzhangzz February 9, 2023 07:50
@@ -36,7 +38,8 @@ class ORTWrapper(BaseWrapper):
     def __init__(self,
                  onnx_file: str,
                  device: str,
-                 output_names: Optional[Sequence[str]] = None):
+                 output_names: Optional[Sequence[str]] = None,
+                 enable_trt: bool = False):
Member

Is it possible to check whether the TensorRT provider is available in the current environment?
If it is, we can check it inside __init__ instead of adding a flag.

yhna940 (Contributor Author)

>>> available_providers = onnxruntime.get_available_providers()
>>> available_providers
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']

onnxruntime has a get_available_providers function that tells you which providers are available in the current environment. I changed it to use the TensorRT execution provider if it is available. Thank you :)
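
A rough sketch of the availability check described above; only get_available_providers is the real ONNX Runtime call shown in the snippet, the surrounding structure is illustrative rather than the exact code in this PR:

import onnxruntime as ort

providers = []
# Prefer the TensorRT execution provider when the current environment supports it.
if 'TensorrtExecutionProvider' in ort.get_available_providers():
    providers.append('TensorrtExecutionProvider')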

@yhna940 yhna940 requested review from grimoire and removed request for lzhangzz February 13, 2023 10:08
if device == 'cpu':
    providers.append('CPUExecutionProvider')
else:
    providers.append(('CUDAExecutionProvider', {
Member


As far as I know, the CPU provider can co-exist with the CUDA provider; ONNX Runtime will fall back to the CPU if the CUDA implementation of an op is not provided.
The CPU provider can be placed in the providers list even when we use a GPU device. (I am not sure whether the order of the providers matters.)

yhna940 (Contributor Author)

According to the official documentation, the order of the providers expresses preference. Therefore, when the device is cuda, the TensorRT and CUDA execution providers are included, and the CPU execution provider is appended at the end as the fallback. (https://onnxruntime.ai/docs/execution-providers/)
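
A sketch of what such a preference-ordered list could look like; build_providers is a hypothetical helper for illustration, not the function added in this PR:

import onnxruntime as ort

def build_providers(device: str):
    # Providers are listed in order of preference; the CPU provider goes last
    # so ONNX Runtime can fall back to it for unsupported ops.
    providers = []
    if device != 'cpu':
        available = ort.get_available_providers()
        if 'TensorrtExecutionProvider' in available:
            providers.append('TensorrtExecutionProvider')
        if 'CUDAExecutionProvider' in available:
            providers.append('CUDAExecutionProvider')
    providers.append('CPUExecutionProvider')
    return providers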

@yhna940 yhna940 requested a review from grimoire February 14, 2023 04:07
grimoire (Member) left a comment

LGTM

@lvhan028 lvhan028 requested a review from lzhangzz February 17, 2023 02:54
@grimoire grimoire merged commit fd47fa2 into open-mmlab:master Feb 20, 2023