Add a flash attention option, to avoid affecting training #730

Merged
merged 2 commits into RVC-Boss:fast_inference_
Mar 10, 2024
Conversation

ChasonJiang (Contributor)

Optimization

  • Add a flash attention option, to avoid affecting training

	Add flash attention option:   GPT_SoVITS/AR/models/t2s_model.py
	Add flash attention option:   GPT_SoVITS/TTS_infer_pack/TTS.py
	Add flash attention option:   GPT_SoVITS/TTS_infer_pack/TextPreprocessor.py
	Add flash attention option:   GPT_SoVITS/configs/tts_infer.yaml
	Add flash attention option:   GPT_SoVITS/inference_webui.py
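A minimal sketch of how such a config-driven toggle could be wired, assuming a hypothetical `flash_attn_enabled` key in `tts_infer.yaml` (the actual option name and plumbing in the PR may differ). The point is the fallback: when flash attention is disabled or unavailable, the standard attention path is used, so training code is unaffected.

```python
# Hypothetical sketch: dispatch the attention backend from an inference
# config, falling back to standard attention so training is unaffected.
# Key names ("flash_attn_enabled", "_flash_attn_available") are illustrative,
# not the actual identifiers used in the PR.

def select_attention_impl(config: dict) -> str:
    """Pick an attention backend from the inference config."""
    use_flash = config.get("flash_attn_enabled", False)
    # e.g. probed once at startup by checking the installed kernels
    flash_available = config.get("_flash_attn_available", False)
    if use_flash and flash_available:
        return "flash_attention"
    return "standard_attention"

# Example: a config dict mirroring entries in GPT_SoVITS/configs/tts_infer.yaml
cfg = {"device": "cuda", "is_half": True,
       "flash_attn_enabled": True, "_flash_attn_available": False}
print(select_attention_impl(cfg))  # falls back when flash attention is unavailable
```

Keeping the flag in the inference config (rather than the model code) means the default behaviour stays identical for anyone who does not opt in.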
@RVC-Boss RVC-Boss merged commit a680939 into RVC-Boss:fast_inference_ Mar 10, 2024
@RVC-Boss (Owner)

todo:
1. Unlike the baseline, BERT is run over all characters at once. If the text is too long, BERT will blow up (run out of memory); and for short text, running BERT on the whole input and then splitting produces slightly different results.
2. The inference WebUI's initial model does not inherit the model selected in the main WebUI.
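The concern in item 1 comes from BERT features being contextual: each position's output depends on its neighbours, so split-then-encode and encode-then-split disagree at segment boundaries. A toy illustration (not real BERT) of that effect:

```python
# Toy illustration: a "contextual feature" where each character's value
# depends on its neighbours, showing why encoding the full text and then
# splitting differs from encoding each segment separately.

def context_features(text: str) -> list:
    # each position sees its left and right neighbour (window of 3)
    out = []
    for i, ch in enumerate(text):
        left = ord(text[i - 1]) if i > 0 else 0
        right = ord(text[i + 1]) if i < len(text) - 1 else 0
        out.append(ord(ch) + left + right)
    return out

full = context_features("abcd")
split = context_features("ab") + context_features("cd")
print(full)   # positions adjacent to the "ab"/"cd" boundary differ
print(split)  # from the full-text encoding
```

Only the boundary positions diverge, which matches the observation that the discrepancy matters most when segments are short (boundaries dominate).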
