ENH: Add IMPACT Similarity Metric to Elastix #1311
Conversation
@vboussot Thank you very much for this great work. I just discussed it with Marius (@mstaring). The use of deep features for image registration looks very interesting to us.
There is a practical challenge, however: how to maintain this new metric. It's a lot of extra code! And I see it doesn't yet build on the CI, because it depends on Torch and CUDA. We are able to build your pull request locally on Windows 11, though! Just one code adjustment was necessary:
Do you think the use of CUDA is essential, or would it also work on the CPU? (I see https://pytorch.org/get-started/locally/ also offers LibTorch for the CPU.)
Do you think it would be feasible to write a test that would run on the CI for this pull request?
Thank you for your interest in the project and for testing the PR on Windows. I've updated the code to use int64_t instead of long to ensure compatibility. However, I can't confirm it fully resolves all Windows-related issues; feel free to share your patch, as I currently don't have access to a Windows machine for testing.
The metric now works with the CPU version of LibTorch. However, note that if the Elastix configuration file does not explicitly set GPU = -1, you may encounter the following error: "CUDA is not available. Please check your CUDA installation or run on a compatible device."
As for CI integration, I've started setting up a test configuration using LibTorch-CPU. At this stage, the most straightforward and robust approach seems to be configuring Elastix with a user-provided LibTorch version, allowing users to choose between a CPU or CUDA build, depending on their machine and installed CUDA version.
Valentin
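For reference, a minimal illustrative parameter-file excerpt, assuming the metric component is registered under the name "Impact" and that the GPU parameter follows the usual elastix `(Key value)` syntax; all other Impact parameters are omitted:

```
// Illustrative excerpt of an elastix parameter file (not a complete configuration).
(Metric "Impact")
// -1 selects CPU execution; required when using a CPU-only LibTorch build.
(GPU -1)
```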
Hello,
Just a quick update: the build is now working on both Windows and Ubuntu.
Regarding the functional test for the IMPACT metric:
Let me know if that sounds good, or if you'd prefer a different setup.
Best,
Thank you @N-Dekker for reporting the bug with such detailed output. The issue has been resolved in commit 41cebc2. The root cause was a type mismatch in a tensor passed to index_add_(). Specifically, nonZeroJacobianIndices was constructed from a std::vector of unsigned long, which led to invalid values (e.g., 4e9, -8e18) when converted to a tensor using torch::from_blob(...) with torch::kLong on platforms where unsigned long is 32-bit. An explicit cast to int64_t has been added. This issue only affected index tensors; other types such as double were unaffected.
I also ran additional tests in virtualized environments (KVM/QEMU) and didn't observe any issues.
Finally, the CI has been updated to build LibTorch from source on macOS 13 to support the x86_64 architecture, since official binaries are only available for arm64. The Azure pipeline is now passing successfully; GitHub Actions validation remains to be tested.
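For readers following the fix, here is a minimal sketch of the pattern described above; `MakeIndexTensor` is a hypothetical helper for illustration, not the actual PR code:

```cpp
#include <cstdint>
#include <vector>
#include <torch/torch.h>

// Copy the indices into an int64_t buffer before wrapping them in a tensor.
// unsigned long is 32-bit on Windows but 64-bit on Linux, while torch::kLong
// always expects 64-bit elements, so reinterpreting the original storage via
// torch::from_blob would read garbage on LLP64 platforms.
torch::Tensor
MakeIndexTensor(const std::vector<unsigned long> & nonZeroJacobianIndices)
{
  std::vector<int64_t> indices(nonZeroJacobianIndices.cbegin(), nonZeroJacobianIndices.cend());
  // clone() copies the data so the tensor does not dangle once `indices` goes out of scope.
  return torch::from_blob(indices.data(), { static_cast<int64_t>(indices.size()) }, torch::kLong).clone();
}
```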
Very glad to see you fixed this crash so quickly, thanks! I didn't have a clue! But I see now that elastix defines NonZeroJacobianIndicesType in terms of unsigned long.
Maybe we should ban long and unsigned long in favor of fixed-width types.
The previous version, which used torch::from_blob to directly convert a std::vector into a tensor, was indeed more efficient. However, I chose to use an explicit cast to int64_t instead of modifying NonZeroJacobianIndicesType, to avoid altering elastix internals and risking unintended side effects. That said, I support the idea of replacing long / unsigned long with fixed-width types like int64_t / uint64_t in both elastix and ITK. This would improve portability between 32-bit and 64-bit architectures, with minimal risk of breaking existing functionality.
Regarding the CTest error observed on GitHub Actions (and thanks for approving the tests): I believe it stems from two main issues. First, it is generally more robust to use ${TORCH_LIBRARIES} rather than hardcoding Torch in target_link_libraries(), as CMake targets may not always be correctly exported depending on how LibTorch was built. Second, building PyTorch from source requires Python ≥ 3.9.
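To illustrate the first point, a minimal CMake sketch; the target name `ImpactMetric` is a placeholder, and only the linking pattern is taken from the comment above:

```cmake
# find_package(Torch) defines TORCH_LIBRARIES; linking against this variable
# works even when an imported `Torch` target is not exported by a given
# LibTorch build.
find_package(Torch REQUIRED)
target_link_libraries(ImpactMetric PRIVATE ${TORCH_LIBRARIES})
```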
Aims to ease interaction with LibTorch (specifically when passing such indices as `data` parameter to `torch::from_blob`). `unsigned long` appears problematic, as it is typically 32-bit on Windows, while it's typically 64-bit on Linux. Issue encountered during the development of the IMPACT Similarity Metric, pull request #1311 by Valentin Boussot et al.
patchSizeVec = GetVectorFromString<std::string>(modelsPathVec.size(), patchSizeStr, "5*5*5");

/** Get and set the voxel size. */
std::string voxelSizeStr;
this->GetConfiguration()->ReadParameter(voxelSizeStr, type + "VoxelSize", this->GetComponentLabel(), level, 0);
std::vector<std::string> voxelSizeVec =
  GetVectorFromString<std::string>(modelsPathVec.size(), voxelSizeStr, "1.5*1.5*1.5");
Regarding PatchSize (default "5*5*5") and VoxelSize (default "1.5*1.5*1.5"), the use of the syntax "i*j*k" is a bit uncommon for elastix parameter values. We would rather have an array of three elements. For example, in an elastix parameter txt file:
(PatchSize 5 5 5)
(VoxelSize 1.5 1.5 1.5)
Internally, elastix would then store the VoxelSize values as a vector<string> of three elements, being { "1.5", "1.5", "1.5" }.
Would it be feasible for you to adjust these parameters that way? Or would you like me to open a pull request at https://github.com/vboussot/ImpactElastix/pulls ?
Thanks for the suggestion. I completely understand the preference for using arrays with space-separated values, as is typically done in elastix.
I had actually considered following the format used for example in FixedImagePyramidRescaleSchedule, but I ended up choosing a string format like "x * y * z" to avoid ambiguity with the multi-resolution syntax, which already uses space-separated values. I also find this format easier to parse consistently for parameters that expect 3D tuples.
That said, I’m of course happy to adapt it if you feel the standard array format would be better for long-term consistency.
I understand your reasoning regarding FixedImagePyramidRescaleSchedule, but I already checked with Marius (@mstaring) and Stefan (@stefanklein), and they also preferred arrays with space-separated values for PatchSize and VoxelSize. Our only concern was that it would break backward compatibility with your experiments and your paper (the table of hyperparameters). Would that be a problem for you?
No problem at all. I understand the importance of staying consistent with the rest of elastix, and I'll make the changes accordingly. It doesn't cause any issue with the paper; I believe things are clear enough for readers, so it shouldn't be a problem. I'll also update the documentation to reflect the new format.
This pull request introduces the IMPACT similarity metric into Elastix, enabling semantic image registration via deep feature representations extracted from TorchScript models.
📚 Reference
This metric is described in the following preprint, currently under review:
⚙️ LibTorch Dependency
IMPACT relies on LibTorch for runtime feature extraction from TorchScript models.
To build Elastix with IMPACT, download and extract LibTorch (e.g., 2.6.0 + CUDA 12.6) and set the `Torch_DIR` CMake variable when configuring, as sketched below.
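A minimal configure invocation as a sketch; the source/build directories and the LibTorch path are placeholders, not taken from the PR:

```sh
# Point CMake at the extracted LibTorch; Torch_DIR must contain TorchConfig.cmake.
cmake -S elastix -B build \
      -DTorch_DIR=/path/to/libtorch/share/cmake/Torch
```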
📦 IMPACT Examples & Resources
A full working setup to test the IMPACT similarity metric is available in the following repository:
👉 https://github.com/vboussot/ImpactLoss
This includes:
🧠 Pretrained TorchScript models
Ready-to-use models exported from TotalSegmentator and other large-scale segmentation networks.
⚙️ Elastix configuration examples
Parameter files demonstrating how to use `Impact`.
🐳 Docker environment
A Dockerfile for building Elastix with LibTorch support, preconfigured to run the provided examples.