
Commit 32f2d23

Adding whatsnew document
Signed-off-by: Eric Kerfoot <[email protected]>
1 parent 4707053 commit 32f2d23

File tree

4 files changed: +57, -55 lines

CHANGELOG.md

Lines changed: 0 additions & 55 deletions
@@ -7,38 +7,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).

## [1.5.0] - 2025-06-06

-## Supported Dependency Versions
-
-This release adds support for NumPy 2.0 and PyTorch 2.6. We plan to add support for PyTorch 2.7 in an upcoming version once some compatibility issues have been addressed.
-
-As stated in the updated [README.md](./README.md) file, MONAI's policy for the support of dependency versions has been updated for clarity.
-
-MONAI will continue to support [currently supported versions of Python](https://devguide.python.org/versions), and for other dependencies the following apply:
-
-* Major releases of MONAI will have dependency versions stated for them. The current state of the `dev` branch in this repository is the unreleased development version of MONAI which typically will support current versions of dependencies and include updates and bug fixes to do so.
-* PyTorch support covers [the current version](https://github.com/pytorch/pytorch/releases) plus three previous minor versions. If compatibility issues with a PyTorch version and other dependencies arise, support for a version may be delayed until a major release.
-* Our support policy for other dependencies adheres for the most part to [SPEC0](https://scientific-python.org/specs/spec-0000), where dependency versions are supported where possible for up to two years. Discovered vulnerabilities or defects may require certain versions to be explicitly not supported.
-* See the `requirements*.txt` files for dependency version information.
-
-## MAISI Update: Introducing MAISI Version maisi3d-rflow
-We are excited to announce the release of MAISI Version _maisi3d-rflow_. This update brings significant improvements over the previous version, _maisi3d-ddpm_, with a remarkable 33x acceleration in latent diffusion model inference speed. The MAISI VAE remains unchanged. Here are the key differences:
-1. Scheduler Update:
-
-* _maisi3d-ddpm_: Uses the basic DDPM noise scheduler.
-
-* _maisi3d-rflow_: Introduces the Rectified Flow scheduler, allowing diffusion model inference to be 33 times faster.
-2. Training Data Preparation:
-
-* _maisi3d-ddpm_: Requires training images to be labeled with body regions (specifically “top_region_index” and “bottom_region_index”).
-
-* _maisi3d-rflow_: No such labeling is required, making it easier to prepare the training data.
-3. Image Quality:
-
-* For the released model weights, _maisi3d-rflow_ generates better-quality images for head regions and smaller output volumes compared to _maisi3d-ddpm_. For other regions, the image quality is comparable.
-4. Modality Input:
-
-* _maisi3d-rflow_ adds a new modality input to the diffusion model, offering flexibility for future extensions to other modalities. Currently, this input is set to always equal 1, as this version supports CT generation exclusively.
-
## What's Changed
### Added
* Add platform-specific constraints to setup.cfg (#8260)
@@ -133,29 +101,6 @@ We are excited to announce the release of MAISI Version _maisi3d-rflow_. This up
* selfattention block: Remove the fc linear layer if it is not used (#8325)
* Removed outdated `torch` version checks from transform functions (#8359)

-## New Contributors
-* @Smoothengineer made their first contribution in #8157
-* @Akhsuna07 made their first contribution in #8163
-* @bnbqq8 made their first contribution in #8177
-* @EloiNavet made their first contribution in #8189
-* @vectorvp made their first contribution in #8246
-* @zifuwanggg made their first contribution in #8138
-* @Jerome-Hsieh made their first contribution in #8216
-* @pooya-mohammadi made their first contribution in #8285
-* @advcu987 made their first contribution in #8286
-* @garciadias made their first contribution in #8231
-* @nkaenzig made their first contribution in #8347
-* @bartosz-grabowski made their first contribution in #8342
-* @thibaultdvx made their first contribution in #8089
-* @phisanti made their first contribution in #8312
-* @SimoneBendazzoli93 made their first contribution in #8329
-* @XwK-P made their first contribution in #8407
-* @slavaheroes made their first contribution in #8427
-* @kavin2003 made their first contribution in #8446
-* @chrislevn made their first contribution in #8402
-* @emmanuel-ferdman made their first contribution in #8449
-
-
## [1.4.0] - 2024-10-17
## What's Changed
### Added

docs/images/maisi_infer.png

128 KB

docs/source/whatsnew.rst

Lines changed: 1 addition & 0 deletions
@@ -6,6 +6,7 @@ What's New
.. toctree::
  :maxdepth: 1

+  whatsnew_1_5.md
   whatsnew_1_4.md
   whatsnew_1_3.md
   whatsnew_1_2.md

docs/source/whatsnew_1_5.md

Lines changed: 56 additions & 0 deletions
@@ -0,0 +1,56 @@
# What's new in 1.5 🎉🎉

- MAISI inference acceleration with the new _maisi3d-rflow_ version (see below)
- Support for NumPy 2.x and PyTorch 2.6
- Bundle storage moved to Hugging Face, with the corresponding API in core updated (see the sketch after this list)
- Ported the remaining generative tutorials and bundles
- New tutorials:
  - [2d_regression/image_restoration.ipynb](https://github.com/Project-MONAI/tutorials/blob/main/2d_regression/image_restoration.ipynb)
  - [generation/2d_diffusion_autoencoder/2d_diffusion_autoencoder_tutorial.ipynb](https://github.com/Project-MONAI/tutorials/blob/main/generation/2d_diffusion_autoencoder/2d_diffusion_autoencoder_tutorial.ipynb)
  - [generation/3d_ddpm/3d_ddpm_tutorial.ipynb](https://github.com/Project-MONAI/tutorials/blob/main/generation/3d_ddpm/3d_ddpm_tutorial.ipynb)
  - [generation/classifier_free_guidance/2d_ddpm_classifier_free_guidance_tutorial.ipynb](https://github.com/Project-MONAI/tutorials/blob/main/generation/classifier_free_guidance/2d_ddpm_classifier_free_guidance_tutorial.ipynb)
  - [hugging_face/finetune_vista3d_for_hugging_face_pipeline.ipynb](https://github.com/Project-MONAI/tutorials/blob/main/hugging_face/finetune_vista3d_for_hugging_face_pipeline.ipynb)
  - [hugging_face/hugging_face_pipeline_for_monai.ipynb](https://github.com/Project-MONAI/tutorials/blob/main/hugging_face/hugging_face_pipeline_for_monai.ipynb)
  - [modules/omniverse/omniverse_integration.ipynb](https://github.com/Project-MONAI/tutorials/blob/main/modules/omniverse/omniverse_integration.ipynb)
- New bundles:
  - [models/cxr_image_synthesis_latent_diffusion_model](https://github.com/Project-MONAI/model-zoo/blob/dev/models/cxr_image_synthesis_latent_diffusion_model)
  - [models/mednist_ddpm](https://github.com/Project-MONAI/model-zoo/blob/dev/models/mednist_ddpm)
  - [models/brain_image_synthesis_latent_diffusion_model](https://github.com/Project-MONAI/model-zoo/blob/dev/models/brain_image_synthesis_latent_diffusion_model)
  - [hf_models/exaonepath-crc-msi-predictor](https://github.com/Project-MONAI/model-zoo/blob/dev/hf_models/exaonepath-crc-msi-predictor)
- All existing bundles are also now [hosted on Hugging Face](https://huggingface.co/MONAI)!
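
As a minimal sketch of fetching a bundle with the core bundle API (the bundle name below is only an example; see the `monai.bundle.download` documentation for the arguments that select Hugging Face hosting in your installed version):

```python
# Minimal sketch: fetch a bundle with the core bundle API. The bundle name is
# an example; check monai.bundle.download's documentation for the options
# that select Hugging Face hosting in the installed MONAI version.
from monai.bundle import download

download(
    name="spleen_ct_segmentation",  # example bundle name
    bundle_dir="./bundles",         # local directory to download into
)
```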

## Supported Dependency Versions

This release adds support for NumPy 2.0 and PyTorch 2.6. We plan to add support for PyTorch 2.7 in an upcoming version once some compatibility issues have been addressed.

As stated in the updated [README.md](https://github.com/Project-MONAI/MONAI/blob/main/README.md) file, MONAI's policy on supported dependency versions has been updated for clarity.

MONAI will continue to support [currently supported versions of Python](https://devguide.python.org/versions), and for other dependencies the following apply:

* Major releases of MONAI state the dependency versions they support. The current state of the `dev` branch in this repository is the unreleased development version of MONAI, which typically supports current versions of dependencies and includes updates and bug fixes to do so.
* PyTorch support covers [the current version](https://github.com/pytorch/pytorch/releases) plus the three previous minor versions. If compatibility issues arise between a PyTorch version and other dependencies, support for that version may be delayed until a major release.
* Our support policy for other dependencies largely follows [SPEC0](https://scientific-python.org/specs/spec-0000), where dependency versions are supported where possible for up to two years. Discovered vulnerabilities or defects may require certain versions to be explicitly unsupported.
* See the `requirements*.txt` files for dependency version information; the example below shows how to check what your own environment provides.
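
For example, MONAI's configuration utility prints the detected dependency versions, which makes it easy to compare an environment against this policy:

```python
# Print MONAI's version plus the versions of the dependencies it detects in
# the current environment.
from monai.config import print_config

print_config()
```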

## MAISI Update: Introducing MAISI Version maisi3d-rflow

![maisi](../images/maisi_infer.png)

We are excited to announce the release of MAISI Version _maisi3d-rflow_. This update brings significant improvements over the previous version, _maisi3d-ddpm_, with a remarkable 33x acceleration in latent diffusion model inference speed. The MAISI VAE remains unchanged. Here are the key differences:

1. Scheduler Update:
   * _maisi3d-ddpm_: Uses the basic DDPM noise scheduler.
   * _maisi3d-rflow_: Introduces the Rectified Flow scheduler, allowing diffusion model inference to be 33 times faster (see the sketch after this list).
2. Training Data Preparation:
   * _maisi3d-ddpm_: Requires training images to be labeled with body regions (specifically “top_region_index” and “bottom_region_index”).
   * _maisi3d-rflow_: No such labeling is required, making it easier to prepare the training data.
3. Image Quality:
   * For the released model weights, _maisi3d-rflow_ generates better-quality images for head regions and smaller output volumes compared with _maisi3d-ddpm_. For other regions, the image quality is comparable.
4. Modality Input:
   * _maisi3d-rflow_ adds a new modality input to the diffusion model, offering flexibility for future extensions to other modalities. Currently, this input is always set to 1, as this version supports CT generation exclusively.
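
As a rough sketch of what the scheduler difference looks like in code: `DDPMScheduler` is the existing scheduler in `monai.networks.schedulers`, while the rectified-flow class name and import path shown in the comments are assumptions for illustration only.

```python
# Illustrative sketch of the scheduler swap behind the 33x speed-up.
# DDPMScheduler exists in monai.networks.schedulers; the rectified-flow
# scheduler's name/location below is an assumption, shown commented out.
from monai.networks.schedulers import DDPMScheduler

# maisi3d-ddpm: basic DDPM noise scheduler, denoising over many timesteps
ddpm_scheduler = DDPMScheduler(num_train_timesteps=1000)

# maisi3d-rflow: a rectified-flow scheduler needs far fewer inference steps,
# which is where the acceleration comes from (class name assumed):
# from monai.networks.schedulers import RFlowScheduler
# rflow_scheduler = RFlowScheduler(num_train_timesteps=1000)
# rflow_scheduler.set_timesteps(num_inference_steps=30)
```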
