[Bug]: 1000+ LoRAs crash - "Too many file descriptors in select" #10001
I have found that the crash can be worked around if I remove most of the LoRA files, open extra networks, and then add all of the LoRAs back and hit refresh.
See if you can help with troubleshooting/testing this PR:
Thanks for the ideas. That PR says it's not working after the latest update. And I'm not sure where to start with the Python Windows 64-socket limit. Is that something I can change in the base Python config, or do I need to find the line in the webUI that's making the call and limit it there? Something changed in this week's webUI update that caused this: I used to have about 1500 models with no errors at all, and now I get crashes once I have over 800. There doesn't seem to be any rhyme or reason to which models; just a high number causes the UI to crash when I open the extra networks.
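For context, "Too many file descriptors in select" on Windows typically comes from the `select()`-based event loop, which can only wait on a limited number of sockets per call (CPython builds cap it at 512). One common mitigation in Python code that controls its own event loop is to switch to the IOCP-backed Proactor loop. This is a hedged sketch, not the webUI's actual code; on Python 3.8+ the Proactor loop is already the Windows default, so this mainly matters for code that explicitly picks the selector loop:

```python
import asyncio
import sys

if sys.platform == "win32":
    # The SelectorEventLoop on Windows is backed by select(), which caps
    # the number of sockets it can monitor (512 in CPython). The Proactor
    # loop uses I/O completion ports instead and has no such limit.
    asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())

async def main():
    await asyncio.sleep(0)  # placeholder for the application's real work

asyncio.run(main())
```

If the limit is hit inside a third-party server library rather than your own loop, the fix usually has to land in that library's loop setup rather than in user code.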
This has been resolved with the latest updates.
Is there an existing issue for this?
What happened?
After updating to the latest WebUI, viewing my LoRA models crashes the console.
Steps to reproduce the problem
What should have happened?
It should have loaded the models.
Commit where the problem happens
72cd27a
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Google Chrome
Command Line Arguments
List of extensions
I have disabled the extensions for troubleshooting purposes.
Console logs
Additional information
I have about 1300 LoRAs. They used to load just fine prior to updating.
I did make sure that Torch and Xformers got updated as well.
I also disabled all extensions.
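A failure mode consistent with this report is thousands of model files being opened faster than their handles are released during the extra-networks scan. The helper below is a hypothetical illustration (not the webUI's actual scanner) of walking a large model directory while keeping at most one descriptor open at a time, so the count of open handles never grows with the number of models:

```python
from pathlib import Path

def scan_loras(lora_dir):
    """Return the model names found in lora_dir.

    Each file is opened inside a `with` block so its descriptor is
    closed before the next file is touched, keeping the open-handle
    count at one no matter how many models exist.
    """
    names = []
    for path in sorted(Path(lora_dir).glob("*.safetensors")):
        with path.open("rb") as f:
            f.read(8)  # peek at the header prefix only; no full load
        names.append(path.stem)
    return names
```

A scanner that instead collects open file objects into a list (or leaks them through caching) would exhaust the per-process descriptor budget at exactly the kind of model counts described above.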