Hi
Any chance you could update sgm/modules/diffusionmodules/model.py to use another memory-efficient attention function rather than hard-coding xFormers?
From my brief testing on Colab, the implementation built into PyTorch 2.0 and later (the one Diffusers uses) is as good as, if not better than, xFormers, and it is also compatible with MPS.
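For illustration, a minimal sketch of what a backend-agnostic dispatch could look like, assuming q/k/v tensors shaped (batch, heads, seq_len, head_dim); the helper name memory_efficient_attention here is my own, but torch.nn.functional.scaled_dot_product_attention and xformers.ops.memory_efficient_attention are real APIs:

```python
import torch
import torch.nn.functional as F

try:
    import xformers.ops as xops
    XFORMERS_AVAILABLE = True
except ImportError:
    XFORMERS_AVAILABLE = False


def memory_efficient_attention(q, k, v):
    """Hypothetical helper: pick an efficient attention backend at runtime.

    q, k, v are assumed to be (batch, heads, seq_len, head_dim) tensors.
    """
    if XFORMERS_AVAILABLE and q.device.type == "cuda":
        # xformers expects (batch, seq_len, heads, head_dim)
        out = xops.memory_efficient_attention(
            q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
        )
        return out.transpose(1, 2)
    # PyTorch >= 2.0 ships a fused kernel that also runs on MPS and CPU
    return F.scaled_dot_product_attention(q, k, v)
```

With something like this in place, the hard-coded xFormers call in model.py could fall back to the PyTorch kernel automatically for MPS and CPU users.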
Also, could you update your requirements to use a newer tokenizers release? tokenizers==0.12.1 is pretty much the only version that requires building from source on Apple Silicon machines (there is no arm64 wheel).
It would also be nice if you could stop assuming everyone is on CUDA, so we don't have to edit every bit of example code to remove the hard-coded 'cuda' devices. :-)
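For example, the sample scripts could pick the device at runtime instead of hard-coding "cuda"; a minimal sketch (the helper name get_default_device is my own):

```python
import torch


def get_default_device() -> torch.device:
    """Hypothetical helper: prefer CUDA, then MPS, then CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")


device = get_default_device()
# model = model.to(device)  # instead of model.cuda() in the example code
```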