diffusionmodules.make_attn: fall back to vanilla if xformers is not available #51
Conversation
@benjaminaubin For what it's worth, I tested that this works on my mac (in a future branch that stacks this, #49, and other things so using
Yes, my bad; your condition is valid if you use torch >= 2.0
@akx please resolve conflicts with the previously merged branch
Force-pushed from a8f54a1 to 60a5b25
@benjaminaubin rebased
This is a rebase of Stability-AI#51.
…AI#51)" (Stability-AI#61) This reverts commit ef520df.
The default YAML file for SDXL specifies vanilla-xformers, which won't work if xformers is not available. This adds fallback logic like that in generative-models/sgm/modules/attention.py, lines 395 to 400 in e5dc966.
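The referenced lines in attention.py aren't reproduced here, but the fallback pattern the PR describes can be sketched as follows. This is a minimal, hedged illustration, not the PR's actual diff; the helper name choose_attn_type is hypothetical, while the "vanilla-xformers" / "vanilla" mode strings come from the PR description:

```python
# Detect whether xformers is importable at module load time.
try:
    import xformers  # noqa: F401
    import xformers.ops  # noqa: F401
    XFORMERS_IS_AVAILABLE = True
except ImportError:
    XFORMERS_IS_AVAILABLE = False


def choose_attn_type(attn_type: str) -> str:
    """Return a usable attention mode, falling back to vanilla attention
    when the config asks for xformers but the package is not installed.
    (Hypothetical helper illustrating the fallback logic in this PR.)"""
    if attn_type == "vanilla-xformers" and not XFORMERS_IS_AVAILABLE:
        print(
            "Attention mode 'vanilla-xformers' is not available, "
            "falling back to 'vanilla'."
        )
        return "vanilla"
    return attn_type
```

With this kind of check in place, a config that requests vanilla-xformers still constructs a working attention block on machines without xformers (such as the mac mentioned in the conversation above), instead of failing at model build time.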