Commit 41dd338

featured projects (#270)

Authored by a-r-r-o-w and Sayak Paul

* update
* update
* Apply suggestions from code review

Co-authored-by: Sayak Paul <[email protected]>

1 parent 61d14a7 · commit 41dd338

File tree

2 files changed: +19 -3 lines changed

.gitignore

Lines changed: 2 additions & 0 deletions

@@ -168,8 +168,10 @@ cython_debug/
 wandb/
 *.txt
 dump*
+*dummy*
 outputs*
 *.slurm
 .vscode/
+*.json
 
 !requirements.txt
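The entries added here are plain slash-free globs, which git matches against file basenames. A minimal sketch of that matching using Python's `fnmatch` (a simplification: real `.gitignore` semantics are order-sensitive and have directory rules, and the file names below are made up for illustration):

```python
from fnmatch import fnmatch

# Approximate the .gitignore entries above with fnmatch globs.
# The `!requirements.txt` negation is modeled explicitly.
ignore_patterns = ["*.txt", "dump*", "*dummy*", "*.json"]
negations = ["requirements.txt"]

def is_ignored(name: str) -> bool:
    if any(fnmatch(name, pat) for pat in negations):
        return False  # re-included by a `!pattern` rule
    return any(fnmatch(name, pat) for pat in ignore_patterns)

for name in ["results_dummy.bin", "config.json", "requirements.txt"]:
    print(f"{name}: {'ignored' if is_ignored(name) else 'kept'}")
# → results_dummy.bin: ignored
# → config.json: ignored
# → requirements.txt: kept
```

The new `*dummy*` and `*.json` patterns catch scratch artifacts, while the pre-existing `!requirements.txt` negation keeps the pinned dependencies tracked.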

README.md

Lines changed: 17 additions & 3 deletions

@@ -1,9 +1,11 @@
 # finetrainers 🧪
 
-`cogvideox-factory` was renamed to `finetrainers`. If you're looking to train CogVideoX or Mochi with the legacy training scripts, please refer to [this](./training/README.md) README instead. Everything in the `training/` directory will eventually be moved and supported under `finetrainers`.
-
 FineTrainers is a work-in-progress library to support (accessible) training of video models. Our first priority is to support LoRA training for all popular video models in [Diffusers](https://github.com/huggingface/diffusers), and eventually other methods like controlnets, control-loras, distillation, etc.
 
+> [!NOTE]
+>
+> `cogvideox-factory` was renamed to `finetrainers`. If you're looking to train CogVideoX or Mochi with the legacy training scripts, please refer to [this](./examples/_legacy/) README instead.
+
 <table align="center">
 <tr>
 <td align="center"><video src="https://github.com/user-attachments/assets/aad07161-87cb-4784-9e6b-16d06581e3e5">Your browser does not support the video tag.</video></td>

@@ -153,7 +155,19 @@ For inference, refer [here](./docs/training/ltx_video.md#inference). For docs re
 
 If you would like to use a custom dataset, refer to the dataset preparation guide [here](./docs/dataset/README.md).
 
+## Featured Projects 🔥
+
+Check out some amazing projects citing `finetrainers`:
+- [SkyworkAI's SkyReels-A1](https://github.com/SkyworkAI/SkyReels-A1)
+- [eisneim's LTX Image-to-Video](https://github.com/eisneim/ltx_lora_training_i2v_t2v/)
+- [wileewang's TransPixar](https://github.com/wileewang/TransPixar)
+- [Feizc's Video-In-Context](https://github.com/feizc/Video-In-Context)
+
+Check out the following UIs built for `finetrainers`:
+- [jbilcke's VideoModelStudio](https://github.com/jbilcke-hf/VideoModelStudio)
+- [neph1's finetrainers-ui](https://github.com/neph1/finetrainers-ui)
+
 ## Acknowledgements
 
 * `finetrainers` builds on top of a body of great open-source libraries: `transformers`, `accelerate`, `peft`, `diffusers`, `bitsandbytes`, `torchao`, `deepspeed` -- to name a few.
-* Some of the design choices of `finetrainers` were inspired by [`SimpleTuner`](https://github.com/bghira/SimpleTuner).
+* Some of the design choices were inspired by [`SimpleTuner`](https://github.com/bghira/SimpleTuner).
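The LoRA training that the README names as the library's first priority can be sketched independently of any video model (an illustrative NumPy sketch, not the `finetrainers` API; dimensions and rank are arbitrary):

```python
import numpy as np

# LoRA sketch: a frozen weight W is augmented with a trainable
# low-rank update B @ A, so the adapted layer computes
# x @ (W + B @ A).T with far fewer trainable parameters than W.
rng = np.random.default_rng(0)
d_out, d_in, rank = 64, 64, 4

W = rng.normal(size=(d_out, d_in))        # frozen base weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, rank))               # trainable up-projection, zero init

def lora_forward(x):
    # Base path plus low-rank path; since B = 0 at init,
    # the output matches the frozen model exactly.
    return x @ W.T + (x @ A.T) @ B.T

x = rng.normal(size=(2, d_in))
assert np.allclose(lora_forward(x), x @ W.T)  # B = 0 → identical to base

# Trainable parameters vs. full finetuning of W:
print(rank * (d_in + d_out), "vs", d_out * d_in)  # → 512 vs 4096
```

Zero-initializing `B` is the standard LoRA trick: training starts from the unmodified base model, and only the small `A`/`B` matrices receive gradients.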

0 commit comments