Hello, I tried to train the Hunyuan model on 4 A100 cards, but I hit a CUDA out-of-memory error. I noticed that your source code uses 8 cards, so I have two questions:
Can a 4-card A100 setup with 80 GB of total memory train the Hunyuan model in LoRA mode?
Is the 40 GB maximum memory mentioned on GitHub per card, or the combined memory across all 8 cards?
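For context on why the per-card vs. total distinction matters: the frozen base weights must fit on each GPU (unless model sharding is used), while LoRA only adds optimizer state for the small adapter. Below is a rough back-of-the-envelope sketch, not a measurement; the 13B parameter count, bf16 base weights, and fp32 Adam states for the adapter are my assumptions, and activation/KV memory is deliberately ignored.

```python
GIB = 2**30  # bytes per GiB

def estimate_lora_memory_gib(base_params: float, lora_params: float) -> tuple:
    """Rough per-GPU memory estimate for LoRA fine-tuning (no activations).

    Assumptions (not from the repo): base weights in bf16 (2 B/param);
    adapter weights + grads in bf16 (2 B + 2 B) plus Adam moments in
    fp32 (2 * 4 B), i.e. ~12 B per trainable LoRA parameter.
    """
    base_gib = base_params * 2 / GIB          # frozen bf16 weights
    adapter_gib = lora_params * 12 / GIB      # weights + grads + Adam states
    return base_gib, adapter_gib

# Hypothetical numbers: ~13B-param DiT backbone, ~50M LoRA parameters.
base_gib, adapter_gib = estimate_lora_memory_gib(13e9, 50e6)
print(f"base weights: ~{base_gib:.1f} GiB, LoRA state: ~{adapter_gib:.2f} GiB")
```

Under these assumptions the frozen weights alone take roughly 24 GiB per GPU before activations, which is why a 40 GB card can be tight and why it matters whether the 40 GB figure is per card or aggregate.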
Thanks in advance, and I look forward to your reply 😊