Official PyTorch implementation of our works on efficiently adapting pre-trained Vision Foundation Models (VFMs) to the 3D medical image segmentation task.
[1] "Tri-Plane Mamba: Efficiently Adapting Segment Anything Model for 3D Medical Images" (MICCAI 2024)
🔧 [2024-10-22] Reorganized the repository and uploaded part of the core code.
We focus on proposing more advanced adapters and training algorithms to adapt pre-trained VFMs (both natural-image and medical-specific models) to 3D medical image segmentation.
🔥 Data-Efficient: achieve competitive performance with less labeled data, e.g., semi-supervised, few-shot, and zero-shot learning.
🔥 Parameter-Efficient: enhance the representation with lightweight adapters, e.g., local-feature, global-feature, or other existing adapters (see the LoRA sketch after this list).
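As a concrete illustration of a pixel-independent adapter, here is a minimal LoRA sketch. The class name `LoRALinear` and the `rank`/`alpha` hyper-parameters are illustrative choices, not the repository's actual implementation:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank residual (LoRA)."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # keep the pre-trained weights frozen
            p.requires_grad = False
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # zero-init: no change at the start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen path + scaled low-rank update; only lora_a/lora_b are trained
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))
```

Initializing `lora_b` to zero makes the wrapped layer behave exactly like the frozen one at the start of fine-tuning, so training begins from the pre-trained model's output.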
🚨 TODO
💡 Supported Adapters
| Name | Type | Supported |
| --- | --- | --- |
| Baseline (Frozen SAM) | None | ✔️ |
| LoRA | pixel-independent | ✔️ |
| SSF | pixel-independent | TODO |
| Multi-scale Conv | local | ✔️ |
| PPM | local | TODO |
| Mamba | global | TODO |
| Linear Attention | global | TODO |
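Below is a minimal sketch of how a local adapter such as a multi-scale convolution might be attached to a frozen SAM image encoder. The names `MultiScaleConvAdapter` and `attach_adapters` are hypothetical, not this repository's API; the sketch assumes the `ImageEncoderViT` from the official SAM repository, whose blocks operate on (B, H, W, C) token grids:

```python
import torch
import torch.nn as nn

class MultiScaleConvAdapter(nn.Module):
    """Hypothetical local adapter: parallel depth-wise convs at several scales."""

    def __init__(self, dim: int, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(dim, dim, k, padding=k // 2, groups=dim) for k in kernel_sizes
        )
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, H, W, C) tokens from a SAM ViT block -> conv over the spatial grid
        y = x.permute(0, 3, 1, 2)
        y = sum(branch(y) for branch in self.branches)
        y = self.proj(y).permute(0, 2, 3, 1)
        return x + y  # residual, so the frozen backbone's features are preserved

def attach_adapters(image_encoder: nn.Module, dim: int = 768) -> nn.Module:
    """Freeze the encoder, then append a trainable adapter after each block."""
    for p in image_encoder.parameters():  # freeze all pre-trained weights first
        p.requires_grad = False
    for blk in image_encoder.blocks:      # SAM's ViT exposes its blocks as .blocks
        adapter = MultiScaleConvAdapter(dim)
        blk.adapter = adapter             # register so the new params are trainable
        original_forward = blk.forward
        def forward(x, _orig=original_forward, _ad=adapter):
            return _ad(_orig(x))
        blk.forward = forward             # instance attribute shadows the method
    return image_encoder
```

Because the adapters are added after freezing, only their parameters receive gradients; `dim=768` corresponds to the SAM ViT-B embedding width and would need adjusting for larger backbones.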
📝 TODO
If you find our paper helpful, please feel free to cite it in your publications.
📌 TP-Mamba
```bibtex
@InProceedings{Wan_TriPlane_MICCAI2024,
  author    = {Wang, Hualiang and Lin, Yiqun and Ding, Xinpeng and Li, Xiaomeng},
  title     = {{Tri-Plane Mamba: Efficiently Adapting Segment Anything Model for 3D Medical Images}},
  booktitle = {Proceedings of Medical Image Computing and Computer Assisted Intervention -- MICCAI 2024},
  year      = {2024},
  publisher = {Springer Nature Switzerland},
  volume    = {LNCS 15009},
  month     = {October},
  pages     = {pending}
}
```
We sincerely appreciate these excellent repositories: 🍺 MONAI and 🍺 SAM.