Commit 7502796

Update README.md
add actual code link
1 parent 889c423 commit 7502796

File tree

1 file changed: +3 −5 lines changed


README.md

@@ -1,8 +1,6 @@
 # MVDream
 > Yichun Shi, Peng Wang, Jianglong Ye, Long Mai, Kejie Li, Xiao Yang
->
-> We propose MVDream, a multi-view diffusion model that is able to generate geometrically consistent multi-view images from a given text prompt. By leveraging image diffusion models pre-trained on large-scale web datasets and a multi-view dataset rendered from 3D assets, the resulting multi-view diffusion model can achieve both the generalizability of 2D diffusion and the consistency of 3D data. Such a model can thus be applied as a multi-view prior for 3D generation via Score Distillation Sampling, where it greatly improves the stability of existing 2D-lifting methods by solving the 3D consistency problem. Finally, we show that the multi-view diffusion model can also be fine-tuned under a few shot setting for personalized 3D generation, i.e. DreamBooth3D application, where the consistency can be maintained after learning the subject identity.
 
-<a href="https://mv-dream.github.io/index.html"><img src="assets/architecture.jpg" width="600px"/></a>
-
-This is a placeholder page for the code of [MVDream](https://mv-dream.github.io/index.html) paper. The final link could be different.
+### We have released our code in **two** different repos!
+- Multi-view Diffusion Model: https://github.com/bytedance/MVDream
+- SDS for 3D Generation (threestudio): https://github.com/bytedance/MVDream-threestudio
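The removed abstract above describes using the multi-view diffusion model as a prior for 3D generation via Score Distillation Sampling (SDS). A minimal sketch of one SDS gradient step, assuming a frozen noise predictor (the function name `eps_pred` and this NumPy formulation are illustrative, not taken from either repo):

```python
import numpy as np

def sds_grad(x, eps_pred, t, alphas_cumprod, rng):
    """One Score Distillation Sampling (SDS) gradient: a frozen
    diffusion prior scores a noised render, and the weighted noise
    residual is pushed back into the 3D parameters (the "2D-lifting"
    the abstract refers to), skipping the U-Net Jacobian.

    x:              rendered view(s), any array shape
    eps_pred:       frozen noise predictor eps_phi(x_t, t) -- hypothetical
    t:              integer diffusion timestep
    alphas_cumprod: noise schedule (cumulative product of alphas)
    """
    a_bar = alphas_cumprod[t]
    noise = rng.standard_normal(x.shape)
    # forward-diffuse the render to timestep t
    x_t = np.sqrt(a_bar) * x + np.sqrt(1.0 - a_bar) * noise
    eps_hat = eps_pred(x_t, t)      # frozen prior, no gradient needed
    w = 1.0 - a_bar                 # a common choice of weighting w(t)
    # gradient w.r.t. x; chaining through the renderer updates the 3D scene
    return w * (eps_hat - noise)
```

In an optimization loop this gradient replaces the true loss gradient of the rendered image, which is what stabilizes 2D-lifting when the prior is multi-view consistent.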
