
Commit 495402f

Merge branch 'main' into fix-run-pr-workflow
2 parents 5290c75 + 6131a93 commit 495402f

File tree: 146 files changed, +9703 −1918 lines


.github/workflows/nightly_tests.yml

+58

@@ -347,6 +347,64 @@ jobs:
           pip install slack_sdk tabulate
           python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

+  run_nightly_quantization_tests:
+    name: Torch quantization nightly tests
+    strategy:
+      fail-fast: false
+      max-parallel: 2
+      matrix:
+        config:
+          - backend: "bitsandbytes"
+            test_location: "bnb"
+    runs-on:
+      group: aws-g6e-xlarge-plus
+    container:
+      image: diffusers/diffusers-pytorch-cuda
+      options: --shm-size "20gb" --ipc host --gpus 0
+    steps:
+      - name: Checkout diffusers
+        uses: actions/checkout@v3
+        with:
+          fetch-depth: 2
+      - name: NVIDIA-SMI
+        run: nvidia-smi
+      - name: Install dependencies
+        run: |
+          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
+          python -m uv pip install -e [quality,test]
+          python -m uv pip install -U ${{ matrix.config.backend }}
+          python -m uv pip install pytest-reportlog
+      - name: Environment
+        run: |
+          python utils/print_env.py
+      - name: ${{ matrix.config.backend }} quantization tests on GPU
+        env:
+          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
+          # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
+          CUBLAS_WORKSPACE_CONFIG: :16:8
+          BIG_GPU_MEMORY: 40
+        run: |
+          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
+            --make-reports=tests_${{ matrix.config.backend }}_torch_cuda \
+            --report-log=tests_${{ matrix.config.backend }}_torch_cuda.log \
+            tests/quantization/${{ matrix.config.test_location }}
+      - name: Failure short reports
+        if: ${{ failure() }}
+        run: |
+          cat reports/tests_${{ matrix.config.backend }}_torch_cuda_stats.txt
+          cat reports/tests_${{ matrix.config.backend }}_torch_cuda_failures_short.txt
+      - name: Test suite reports artifacts
+        if: ${{ always() }}
+        uses: actions/upload-artifact@v4
+        with:
+          name: torch_cuda_${{ matrix.config.backend }}_reports
+          path: reports
+      - name: Generate Report and Notify Channel
+        if: always()
+        run: |
+          pip install slack_sdk tabulate
+          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
+
   # M1 runner currently not well supported
   # TODO: (Dhruv) add these back when we setup better testing for Apple Silicon
   # run_nightly_tests_apple_m1:
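For context, the new job runs the suite under `tests/quantization/bnb` against the `bitsandbytes` backend. A minimal sketch of the kind of loading those tests exercise is shown below; it is not part of the workflow or the test suite, and the checkpoint id and config values are illustrative assumptions.

```python
# Illustrative sketch only: load a diffusers model with bitsandbytes 4-bit
# quantization, the feature the nightly job above covers. Requires the
# `bitsandbytes` package; the model id and settings are assumptions.
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.float16,
)
# The quantized transformer can then be passed to a pipeline via its
# `transformer=` argument.
```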

README.md

+3 −3

@@ -112,9 +112,9 @@ Check out the [Quickstart](https://huggingface.co/docs/diffusers/quicktour) to l
 | **Documentation** | **What can I learn?** |
 |-------------------|-----------------------|
 | [Tutorial](https://huggingface.co/docs/diffusers/tutorials/tutorial_overview) | A basic crash course for learning how to use the library's most important features like using models and schedulers to build your own diffusion system, and training your own diffusion model. |
-| [Loading](https://huggingface.co/docs/diffusers/using-diffusers/loading_overview) | Guides for how to load and configure all the components (pipelines, models, and schedulers) of the library, as well as how to use different schedulers. |
-| [Pipelines for inference](https://huggingface.co/docs/diffusers/using-diffusers/pipeline_overview) | Guides for how to use pipelines for different inference tasks, batched generation, controlling generated outputs and randomness, and how to contribute a pipeline to the library. |
-| [Optimization](https://huggingface.co/docs/diffusers/optimization/opt_overview) | Guides for how to optimize your diffusion model to run faster and consume less memory. |
+| [Loading](https://huggingface.co/docs/diffusers/using-diffusers/loading) | Guides for how to load and configure all the components (pipelines, models, and schedulers) of the library, as well as how to use different schedulers. |
+| [Pipelines for inference](https://huggingface.co/docs/diffusers/using-diffusers/overview_techniques) | Guides for how to use pipelines for different inference tasks, batched generation, controlling generated outputs and randomness, and how to contribute a pipeline to the library. |
+| [Optimization](https://huggingface.co/docs/diffusers/optimization/fp16) | Guides for how to optimize your diffusion model to run faster and consume less memory. |
 | [Training](https://huggingface.co/docs/diffusers/training/overview) | Guides for how to train a diffusion model for different tasks with different training techniques. |

 ## Contribution

docs/source/en/_toctree.yml

+2

@@ -314,6 +314,8 @@
     title: AutoencoderKLMochi
   - local: api/models/asymmetricautoencoderkl
     title: AsymmetricAutoencoderKL
+  - local: api/models/autoencoder_dc
+    title: AutoencoderDC
   - local: api/models/consistency_decoder_vae
     title: ConsistencyDecoderVAE
   - local: api/models/autoencoder_oobleck
docs/source/en/api/models/autoencoder_dc.md (new file)

+50

@@ -0,0 +1,50 @@

<!-- Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. -->

# AutoencoderDC

The 2D Autoencoder model used in [SANA](https://huggingface.co/papers/2410.10629) and introduced in [DCAE](https://huggingface.co/papers/2410.10733) by authors Junyu Chen\*, Han Cai\*, Junsong Chen, Enze Xie, Shang Yang, Haotian Tang, Muyang Li, Yao Lu, and Song Han from MIT HAN Lab.

The abstract from the paper is:

*We present Deep Compression Autoencoder (DC-AE), a new family of autoencoder models for accelerating high-resolution diffusion models. Existing autoencoder models have demonstrated impressive results at a moderate spatial compression ratio (e.g., 8x), but fail to maintain satisfactory reconstruction accuracy for high spatial compression ratios (e.g., 64x). We address this challenge by introducing two key techniques: (1) Residual Autoencoding, where we design our models to learn residuals based on the space-to-channel transformed features to alleviate the optimization difficulty of high spatial-compression autoencoders; (2) Decoupled High-Resolution Adaptation, an efficient decoupled three-phases training strategy for mitigating the generalization penalty of high spatial-compression autoencoders. With these designs, we improve the autoencoder's spatial compression ratio up to 128 while maintaining the reconstruction quality. Applying our DC-AE to latent diffusion models, we achieve significant speedup without accuracy drop. For example, on ImageNet 512x512, our DC-AE provides 19.1x inference speedup and 17.9x training speedup on H100 GPU for UViT-H while achieving a better FID, compared with the widely used SD-VAE-f8 autoencoder. Our code is available at [this https URL](https://github.com/mit-han-lab/efficientvit).*

The following DCAE models are released and supported in Diffusers.

| Diffusers format | Original format |
|:----------------:|:---------------:|
| [`mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers) | [`mit-han-lab/dc-ae-f32c32-sana-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f32c32-sana-1.0) |
| [`mit-han-lab/dc-ae-f32c32-in-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f32c32-in-1.0-diffusers) | [`mit-han-lab/dc-ae-f32c32-in-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f32c32-in-1.0) |
| [`mit-han-lab/dc-ae-f32c32-mix-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f32c32-mix-1.0-diffusers) | [`mit-han-lab/dc-ae-f32c32-mix-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f32c32-mix-1.0) |
| [`mit-han-lab/dc-ae-f64c128-in-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f64c128-in-1.0-diffusers) | [`mit-han-lab/dc-ae-f64c128-in-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f64c128-in-1.0) |
| [`mit-han-lab/dc-ae-f64c128-mix-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f64c128-mix-1.0-diffusers) | [`mit-han-lab/dc-ae-f64c128-mix-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f64c128-mix-1.0) |
| [`mit-han-lab/dc-ae-f128c512-in-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-in-1.0-diffusers) | [`mit-han-lab/dc-ae-f128c512-in-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-in-1.0) |
| [`mit-han-lab/dc-ae-f128c512-mix-1.0-diffusers`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-mix-1.0-diffusers) | [`mit-han-lab/dc-ae-f128c512-mix-1.0`](https://huggingface.co/mit-han-lab/dc-ae-f128c512-mix-1.0) |

Load a model in Diffusers format with [`~ModelMixin.from_pretrained`].

```python
import torch
from diffusers import AutoencoderDC

ae = AutoencoderDC.from_pretrained("mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers", torch_dtype=torch.float32).to("cuda")
```

## AutoencoderDC

[[autodoc]] AutoencoderDC
  - encode
  - decode
  - all

## DecoderOutput

[[autodoc]] models.autoencoders.vae.DecoderOutput
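As a quick orientation on the compression factor the paper describes, a rough encode/decode round trip with the f32c32 checkpoint might look like the sketch below; it is not part of the committed file, the input tensor is random, and the expected latent shape is an assumption based on the documented 32x spatial compression with 32 latent channels.

```python
# Rough sketch: encode and decode a dummy image with the f32c32 checkpoint.
# With 32x spatial compression and 32 latent channels, a 512x512 input is
# assumed to map to a (1, 32, 16, 16) latent.
import torch
from diffusers import AutoencoderDC

ae = AutoencoderDC.from_pretrained(
    "mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers", torch_dtype=torch.float32
).to("cuda")

image = torch.randn(1, 3, 512, 512, device="cuda")
with torch.no_grad():
    latent = ae.encode(image).latent           # deterministic latent, no sampling
    reconstruction = ae.decode(latent).sample  # back to (1, 3, 512, 512)
```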

docs/source/en/api/pipelines/flux.md

+1 −1

@@ -148,7 +148,7 @@ image.save("output.png")
 **Note:** `black-forest-labs/Flux.1-Depth-dev` is _not_ a ControlNet model. [`ControlNetModel`] models are a separate component from the UNet/Transformer whose residuals are added to the actual underlying model. Depth Control is an alternate architecture that achieves effectively the same results as a ControlNet model would, by using channel-wise concatenation with the input control condition and ensuring the transformer learns structure control by following the condition as closely as possible.

 ```python
-# !pip install git+https://github.com/asomoza/image_gen_aux.git
+# !pip install git+https://github.com/huggingface/image_gen_aux
 import torch
 from diffusers import FluxControlPipeline, FluxTransformer2DModel
 from diffusers.utils import load_image
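The hunk cuts off before the rest of the example. A hedged sketch of how such a depth-control generation typically continues is shown below; the depth preprocessor checkpoint, input image path, and generation settings are assumptions, not taken from this diff.

```python
# Hedged sketch of a Flux depth-control run; preprocessor checkpoint,
# image path, and sampling settings are assumed placeholders.
import torch
from diffusers import FluxControlPipeline
from diffusers.utils import load_image
from image_gen_aux import DepthPreprocessor  # installed via the pip line above

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev", torch_dtype=torch.bfloat16
).to("cuda")

control_image = load_image("path/to/input.png")  # any RGB image providing structure
processor = DepthPreprocessor.from_pretrained("LiheYoung/depth-anything-large-hf")
control_image = processor(control_image)[0].convert("RGB")

image = pipe(
    prompt="a futuristic city street at night",
    control_image=control_image,
    num_inference_steps=30,
    guidance_scale=10.0,
).images[0]
image.save("depth_control_output.png")
```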

docs/source/en/api/pipelines/pag.md

+4

@@ -96,6 +96,10 @@ Since RegEx is supported as a way for matching layer identifiers, it is crucial
   - all
   - __call__

+## StableDiffusion3PAGImg2ImgPipeline
+[[autodoc]] StableDiffusion3PAGImg2ImgPipeline
+  - all
+  - __call__

 ## PixArtSigmaPAGPipeline
 [[autodoc]] PixArtSigmaPAGPipeline
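For orientation, a hedged sketch of how the newly documented SD3 PAG img2img pipeline might be used is shown below; the checkpoint id, input image path, and guidance values are assumptions for illustration, not taken from this commit.

```python
# Illustrative sketch only: the checkpoint, image path, and scales are assumed.
import torch
from diffusers import StableDiffusion3PAGImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusion3PAGImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("path/to/input.png")  # any RGB image to edit
image = pipe(
    prompt="a photo of a mountain lake at sunrise",
    image=init_image,
    strength=0.6,
    pag_scale=3.0,  # perturbed-attention guidance strength
).images[0]
```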

docs/source/en/conceptual/evaluation.md

+12 −6

@@ -181,7 +181,7 @@ Then we load the [v1-5 checkpoint](https://huggingface.co/stable-diffusion-v1-5/

 ```python
 model_ckpt_1_5 = "stable-diffusion-v1-5/stable-diffusion-v1-5"
-sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device)
+sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=torch.float16).to("cuda")

 images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="np").images
 ```

@@ -280,7 +280,7 @@ from diffusers import StableDiffusionInstructPix2PixPipeline

 instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained(
     "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
-).to(device)
+).to("cuda")
 ```

 Now, we perform the edits:

@@ -326,9 +326,9 @@ from transformers import (

 clip_id = "openai/clip-vit-large-patch14"
 tokenizer = CLIPTokenizer.from_pretrained(clip_id)
-text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to(device)
+text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to("cuda")
 image_processor = CLIPImageProcessor.from_pretrained(clip_id)
-image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device)
+image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to("cuda")
 ```

 Notice that we are using a particular CLIP checkpoint, i.e., `openai/clip-vit-large-patch14`. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the [documentation](https://huggingface.co/docs/transformers/model_doc/clip).

@@ -350,7 +350,7 @@ class DirectionalSimilarity(nn.Module):

     def preprocess_image(self, image):
         image = self.image_processor(image, return_tensors="pt")["pixel_values"]
-        return {"pixel_values": image.to(device)}
+        return {"pixel_values": image.to("cuda")}

     def tokenize_text(self, text):
         inputs = self.tokenizer(

@@ -360,7 +360,7 @@ class DirectionalSimilarity(nn.Module):
             truncation=True,
             return_tensors="pt",
         )
-        return {"input_ids": inputs.input_ids.to(device)}
+        return {"input_ids": inputs.input_ids.to("cuda")}

     def encode_image(self, image):
         preprocessed_image = self.preprocess_image(image)

@@ -459,6 +459,7 @@ with ZipFile(local_filepath, "r") as zipper:
 ```python
 from PIL import Image
 import os
+import numpy as np

 dataset_path = "sample-imagenet-images"
 image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)])

@@ -477,6 +478,7 @@ Now that the images are loaded, let's apply some lightweight pre-processing on t

 ```python
 from torchvision.transforms import functional as F
+import torch


 def preprocess_image(image):

@@ -498,6 +500,10 @@ dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=
 dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config)
 dit_pipeline = dit_pipeline.to("cuda")

+seed = 0
+generator = torch.manual_seed(seed)
+
 words = [
     "cassette player",
     "chainsaw",
