[Doc] Improve Jetson tutorial install doc #381
Conversation
@AllentDan Pls take a look. Thx! 😄
LGTM
```
-DTENSORRT_DIR=/usr/src/tensorrt \
-DCUDNN_DIR=/etc/alternatives
```
As I've forgotten the exact paths of TensorRT and cuDNN, could you make sure `TENSORRT_DIR` and `CUDNN_DIR` are right on TX2 or Xavier?
I only have Jetson Nano 😢
So I modified it to:
Install MMDeploy
Using git to clone MMDeploy source code.
```shell
git clone -b master https://github.com/open-mmlab/mmdeploy.git MMDeploy
cd MMDeploy
git submodule update --init --recursive
```
We need the TensorRT and cuDNN paths for the MMDeploy installation.
- for Jetson Nano:
```shell
export TENSORRT_DIR=/usr/src/tensorrt
export CUDNN_DIR=/etc/alternatives
```
Build MMDeploy from source:
```shell
mkdir -p build && cd build
cmake .. \
    -DCMAKE_CXX_COMPILER=g++-7 \
    -DMMDEPLOY_BUILD_SDK=ON \
    -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON \
    -DMMDEPLOY_TARGET_DEVICES="cuda;cpu" \
    -DMMDEPLOY_TARGET_BACKENDS="trt" \
    -DMMDEPLOY_CODEBASES=all \
    -Dpplcv_DIR=${PPLCV_DIR}/cuda-build/install/lib/cmake/ppl \
    -DTENSORRT_DIR=${TENSORRT_DIR} \
    -DCUDNN_DIR=${CUDNN_DIR}
make -j$(nproc) && make install
```
Is it okay?
@PeterH0323 Are you sure your TENSORRT_DIR is correct? On my Jetson, TensorRT is detected at:
-- Found TensorRT headers at /usr/include/aarch64-linux-gnu
-- Found TensorRT libs at /usr/lib/aarch64-linux-gnu/libnvinfer.so;/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so
I set TENSORRT_DIR to /usr/include/aarch64-linux-gnu
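The locations reported here can be double-checked directly on the device. A minimal sketch, assuming the candidate paths mentioned in this thread (they come from JetPack 4.6 images and may not exist elsewhere):

```shell
#!/bin/sh
# Check which of the candidate TensorRT/cuDNN locations from this thread
# actually exist on the current system; all three paths are the ones
# reported by the participants, not an exhaustive list.
for p in /usr/include/aarch64-linux-gnu /usr/lib/aarch64-linux-gnu /usr/src/tensorrt; do
  if [ -e "$p" ]; then
    echo "exists: $p"
  else
    echo "missing: $p"
  fi
done
```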
Hi, @tehkillerbee.
I am using a Jetson Nano. May I ask what type your Jetson is?
@PeterH0323 I have checked on both AGX Xavier and NX with JetPack 4.6. /usr/src/tensorrt does exist, but it does not contain headers or libs as far as I can see. When you build MMDeploy, does it list where the TensorRT headers and libs have been detected?
@tehkillerbee Done, the script shows it can find TensorRT when I set it to /usr/src/tensorrt, but thx for the reminder. I changed both -DTENSORRT_DIR and -DCUDNN_DIR to /usr/include/aarch64-linux-gnu 😄
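The settings this exchange converged on can be exported before running cmake. A sketch assuming the JetPack 4.6 layout reported above (verify the paths on your own image first):

```shell
# Export the paths agreed on in this thread (JetPack 4.6 layout).
# Both variables point at the same aarch64 include directory here,
# per the final resolution above.
export TENSORRT_DIR=/usr/include/aarch64-linux-gnu
export CUDNN_DIR=/usr/include/aarch64-linux-gnu
echo "TENSORRT_DIR=${TENSORRT_DIR}"
echo "CUDNN_DIR=${CUDNN_DIR}"
```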
@lzhangzz Pls take a look. Thx! 😄
@PeterH0323 @AllentDan have you guys tested the SDK demo on Jetson Nano? We are seeing reports from the WeChat group that ppl.cv is reporting a "missing suitable binary for execution on device" error. This may be caused by the missing gencode for sm_53 in
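For reference, the Jetson Nano's Maxwell GPU is compute capability 5.3, so the missing gencode entry amounts to the NVCC flag composed below. This is an illustrative sketch of the flag, not MMDeploy's actual cmake code:

```shell
# Compose the NVCC gencode flag for the Jetson Nano (compute capability 5.3).
# Adding an equivalent entry to the project's CUDA arch list is what the
# sm_53 fix discussed above amounts to.
ARCH=53
GENCODE="-gencode arch=compute_${ARCH},code=sm_${ARCH}"
echo "${GENCODE}"
```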
After we installed the Archiconda successfully and created the virtual env correctly. If the pip in the env does not work properly or throw `Illegal instruction (core dumped)`, we may consider re-install the pip manually, reinstalling the whole JetPack SDK is the last method we can try.
We can create the virtual env `mmdeploy` using the command below. The version of python we got from the previous step.
"The version of python we got from the previous step." is not a sentence.
May use "Ensure the python version in the command is the same as the above"
Done.
Meanwhile, I added the JetPack version at the beginning of the doc:
Note: The JetPack we use is `4.6`, and the default python version of it is `3.6`.
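Matching the env's python to the system default can also be done programmatically instead of hard-coding `3.6`. A sketch, where the `mmdeploy` env name comes from the doc and the conda command is only echoed, not executed:

```shell
# Detect the default python version (3.6 on JetPack 4.6, per the note above)
# and reuse it for the conda env so the two stay in sync.
PY_VER=$(python3 -c 'import sys; print("{}.{}".format(*sys.version_info[:2]))')
echo "conda create -y -n mmdeploy python=${PY_VER}"
```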
```shell
sudo apt-get install libssl-dev
```
Then install it from source, MMDeploy is using mmcv version is `1.4.0`:
May use "Since MMDeploy is using mmcv `1.4.0`, you can install it from source as below:"
Done
source ~/.bashrc
```
If steps below don't work, check if you are using any mirror, if you did, try this:
below -> above?
Fixed
This tutorial introduces how to install mmdeploy on Nvidia Jetson systems. It mainly introduces the installation of mmdeploy on three Jetson series boards:
In this chapter, we introduce how to install mmdeploy on NVIDIA Jetson platforms, especially on three main Jetson series boards:
I believe NVIDIA has never announced these models as "main Jetson boards".
Maybe "which we have verified on the following models" is better.
Done
* Improve Jetson build doc * add torchvision in the doc * Fix lint * Fix lint * Fix lint * Fix arg bug * remove incorrect process * Improve doc * Add more detail on `Conda` * Add python version detail * Install `onnx` instead of `onnxruntime` * Fix gramma * Fix gramma * Update Installation detail and fix some doc detail * Update how_to_install_mmdeploy_on_jetsons.md * Fix tensorrt and cudnn path * Improve FAQ * Improve FAQs * pplcv not switch branch since the `sm_53` missing * Update how_to_install_mmdeploy_on_jetsons.md * Update how_to_install_mmdeploy_on_jetsons.md * Update how_to_install_mmdeploy_on_jetsons.md * Update how_to_install_mmdeploy_on_jetsons.md * Improve doc * Update how_to_install_mmdeploy_on_jetsons.md * export `TENSORRT_DIR` * Using pre-build cmake to update * Improve sentence and add jetpack version * Improve sentence * move TENSORRT_DIR in the `Make TensorRT env` step * Improve CUDA detail * Update how_to_install_mmdeploy_on_jetsons.md * Update how_to_install_mmdeploy_on_jetsons.md * Improve conda installation * Improve TensorRT installation * Fix lint * Add pip crash detail and FAQ * Improve pip crash * refine the jetson installation guide * Improve python version * Improve doc, added some detail * Fix lint * Add detail for `Runtime` problem * Fix word * Update how_to_install_mmdeploy_on_jetsons.md Co-authored-by: lvhan028 <[email protected]> (cherry picked from commit f45c1f0)
* fix pose demo and windows build (#307) * add postprocessing_masks gpu version (#276) * add postprocessing_masks gpu version * default device cpu * pre-commit fix Co-authored-by: hadoop-basecv <[email protected]> * fixed a bug causes text-recognizer to fail when (non-NULL) empty bboxes list is passed (#310) * [Fix] include missing <type_traits> for formatter.h (#313) * fix formatter * relax GCC version requirement * [Fix] MMEditing cannot save results when testing (#336) * fix show * lint * remove redundant codes * resolve comment * type hint * docs(build): fix typo (#352) * docs(build): add missing build option * docs(build): add onnx install * style(doc): trim whitespace * docs(build): revert install onnx * docs(build): add ncnn LD_LIBRARY_PATH * docs(build): fix path error * fix openvino export tmp model, add binary flag (#353) * init circleci (#348) * fix wrong input mat type (#362) * fix wrong input mat type * fix lint * fix(docs): remove redundant doc tree (#360) * fix missing ncnn_DIR & InferenceEngine_DIR (#364) * Fix mmdet openvino dynamic 300x300 cfg base (#372) * Fix: add onnxruntime building option in gpu dockerfile (#366) * Tutorial 03: torch2onnx (#365) * upload doc * add images * resolve comments * update translation * [Docs] fix ncnn docs (#378) * fix ncnn docs` * update 0216 * typo-fix (#397) * add CUDA_TOOKIT_ROOT_DIR as tensorrt detect dir (#357) * add CUDA_TOOKIT_ROOT_DIR as tensorrt detect dir * Update FindTENSORRT.cmake * Fix docs (#398) * ort_net ONNX_TENSOR_ELEMENT_DATA_TYPE_BOOL (#383) * fix wrong buffer which will case onnxruntime-gpu crash with segmentaion (#363) * fix wrong buffer which will case onnxruntime-gpu crash with segmentaion * fix check * fix build error * remove unused header * fix benchmark (#411) * Add `sm_53` in cuda.cmake for Jetson Nano which will cashe when process sdk predict. 
(#407) * [Fix] fix feature test for `std::source_location` (#416) * fix feature test for `std::source_location` * suppress msvc warnings * fix consistency * fix format string (#417) * [Fix] Fix seg name (#394) * fix seg name * use default name Co-authored-by: dongchunyu.vendor <[email protected]> * 【Docs】Add ipython notebook tutorial (#234) * add ipynb file * rename file * add open in colab tag * fix lint and add img show * fix open in colab link * fix comments * fix pre-commit config * fix mmpose api (#396) * fix mmpose api * use fmt::format instead * fix potential nullptr access * [Fix] support latest spdlog (#423) * support formatting `PixelFormat` & `DataType` * format enum for legacy spdlog * fix format * fix pillarencode (#331) * fix ONNXRuntime cuda test bug (#438) * Fix ci in master branch (#441) * [Doc] Improve Jetson tutorial install doc (#381) * Improve Jetson build doc * add torchvision in the doc * Fix lint * Fix lint * Fix lint * Fix arg bug * remove incorrect process * Improve doc * Add more detail on `Conda` * Add python version detail * Install `onnx` instead of `onnxruntime` * Fix gramma * Fix gramma * Update Installation detail and fix some doc detail * Update how_to_install_mmdeploy_on_jetsons.md * Fix tensorrt and cudnn path * Improve FAQ * Improve FAQs * pplcv not switch branch since the `sm_53` missing * Update how_to_install_mmdeploy_on_jetsons.md * Update how_to_install_mmdeploy_on_jetsons.md * Update how_to_install_mmdeploy_on_jetsons.md * Update how_to_install_mmdeploy_on_jetsons.md * Improve doc * Update how_to_install_mmdeploy_on_jetsons.md * export `TENSORRT_DIR` * Using pre-build cmake to update * Improve sentence and add jetpack version * Improve sentence * move TENSORRT_DIR in the `Make TensorRT env` step * Improve CUDA detail * Update how_to_install_mmdeploy_on_jetsons.md * Update how_to_install_mmdeploy_on_jetsons.md * Improve conda installation * Improve TensorRT installation * Fix lint * Add pip crash detail and FAQ * Improve 
pip crash * refine the jetson installation guide * Improve python version * Improve doc, added some detail * Fix lint * Add detail for `Runtime` problem * Fix word * Update how_to_install_mmdeploy_on_jetsons.md Co-authored-by: lvhan028 <[email protected]> * Version comments added, torch install steps added. (#449) * [Docs] Fix API documentation (#443) * [Docs] Fix API documentation * add onnx dependency in readthedocs.txt * fix dependencies * [Fix] Fix display bugs for windows (#451) * fix issue 330 for windows * fix code * fix lint * fix all platform * [Docs] Minor fixes and translation of installation tutorial for Jetson (#415) * minor fixes * add Jetson installation * updated zh_cn based on new en version * If a cuda launch error occurs, verify if cuda device requires top_k t… (#479) * If a cuda launch error occurs, verify if cuda device requires top_k to be reduced. * Fixed lint * Clang format * Fixed lint, clang-format * [Fix] set optional arg a default value (#483) * optional default value * resolve comments Co-authored-by: dongchunyu.vendor <[email protected]> * Update: Optimize document (#484) * Update: Optimize document - Minor fixes in styling and grammar - Add support for Jetson Xavier NX (Tested and worked) - Add hardware recommendation - Change JetPack installation guide URL from jp5.0 to jp4.6.1 - Add a note to select "Jetson SDK Components" when using NVIDIA SDK Manager - Change PyTorch wheel save location - Add more dependencies needed for torchvision installation. 
Otherwise installation error - Simplify torchvision git cloning branch - Add installation times for torchvision, MMCV, versioned-hdf5, ppl.cv, model converter, SDK libraries - Delete "snap" from cmake removal as "apt-get purge" is enough - Add a note on which scenarios you need to append cu da path and libraries to PATH and LD_LIBRARY_PATH - Simplify MMCV git cloning branch - Delete "skip if you don't need MMDeploy C/C++ Inference SDK", because that is the only available inference SDK at the moment - Add more details to object detection demo using C/C++ Inference SDK such as installing MMDetection and converting a model - Add image of inference result - Delete "set env for pip" in troubleshooting because this is already mentioned under "installing Archiconda" Signed-off-by: Lakshantha Dissanayake <[email protected]> * Fix: note style on doc * Fix: Trim trailing whitespaces * Update: add source image before inference * fix: bbox_nms not onnxizing if batch size > 1 (#501) A typo prevents nms from onnxizing correctly if batch size is static and greater than 1. 
* change seperator of function marker (#499) * [docs] Fix typo in tutorial (#509) * Fix docstring format (#495) * Fix doc common * Fix bugs * Tutorial 04: onnx custom op (#508) * Add tutorial04 * lint * add image * resolve comment * fix mmseg twice resize (#480) * fix mmseg twich resize * remove comment * Fix mask test with mismatched device (#511) * align mask output to cpu device * align ncnn ssd output to torch.Tensor type * --amend * compat mmpose v0.26 (#518) * [Docs] adding new backends when using MMDeploy as a third package (#482) * update doc * refine expression * cn doc * Tutorial 05: ONNX Model Editing (#517) * tutorial 05 * Upload image * resolve comments * resolve comment * fix pspnet torchscript conversion (#538) * fix pspnet torchscript conversion * resolve comment * add IR to rewrite * changing the onnxwrapper script for gpu issue (#532) * changing the onnxwrapper script * gpu_issue * Update wrapper.py * Update wrapper.py * Update runtime.txt * Update runtime.txt * Update wrapper.py Co-authored-by: Chen Xin <[email protected]> Co-authored-by: Shengxi Li <[email protected]> Co-authored-by: hadoop-basecv <[email protected]> Co-authored-by: lzhangzz <[email protected]> Co-authored-by: Yifan Zhou <[email protected]> Co-authored-by: tpoisonooo <[email protected]> Co-authored-by: HinGwenWoong <[email protected]> Co-authored-by: Junjie <[email protected]> Co-authored-by: hanrui1sensetime <[email protected]> Co-authored-by: q.yao <[email protected]> Co-authored-by: Song Lin <[email protected]> Co-authored-by: zly19540609 <[email protected]> Co-authored-by: RunningLeon <[email protected]> Co-authored-by: HinGwenWoong <[email protected]> Co-authored-by: AllentDan <[email protected]> Co-authored-by: dongchunyu.vendor <[email protected]> Co-authored-by: VVsssssk <[email protected]> Co-authored-by: NagatoYuki0943 <[email protected]> Co-authored-by: Johannes L <[email protected]> Co-authored-by: Zaida Zhou <[email protected]> Co-authored-by: chaoqun <[email 
protected]> Co-authored-by: Lakshantha Dissanayake <[email protected]> Co-authored-by: Yifan Gu <[email protected]> Co-authored-by: Zhiqiang Wang <[email protected]> Co-authored-by: sanjaypavo <[email protected]>
Motivation
Improve mmdeploy build on Jetson doc:
docs/en/tutorials/how_to_install_mmdeploy_on_jetsons.md
Modification
I followed the doc to build mmdeploy on a Jetson Nano (4G version, JetPack 4.6.1), and added some solutions and details where I couldn't get them from the doc.
Still need to do: `~/archiconda` and show the installation detail

Checklist