Describe the bug
I followed this example script for GRPO training of gemma3-4b.
I used gemma3-4b, but the text-only (no-vision) variant, so it is essentially similar to gemma-3-1b-it.
An lmdeploy model that is not supported by TurboMind does not have a load_weights method, so the lines below raised an exception related to that method.
ms-swift/swift/trainers/rlhf_trainer/grpo_trainer.py
Lines 558 to 565 in 9860d42
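The failure mode can be reproduced in isolation: the trainer assumes every inference engine exposes a load_weights method for syncing updated policy weights, but non-TurboMind lmdeploy engines do not define it. Below is a minimal sketch of a defensive guard; the class names and the sync_weights helper are illustrative stand-ins, not the actual ms-swift or lmdeploy API.

```python
class TurboMindEngine:
    """Stand-in for an engine that supports online weight updates."""

    def load_weights(self, state_dict):
        self.weights = dict(state_dict)


class PyTorchEngine:
    """Stand-in for a non-TurboMind lmdeploy engine: no load_weights."""


def sync_weights(engine, state_dict):
    # Guard against engines that cannot receive updated weights, so the
    # trainer fails with a clear message instead of an AttributeError
    # raised deep inside the weight-sync step.
    if not hasattr(engine, "load_weights"):
        raise NotImplementedError(
            "This inference backend does not support load_weights; "
            "use the vLLM or pt backend instead."
        )
    engine.load_weights(state_dict)
```

With a guard like this, a gemma3 model running on an unsupported backend would fail fast with an actionable error rather than the opaque exception reported above.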
A qwen2_5-based model works fine (since it is supported), but gemma3 does not work properly.
Your hardware and system info
Additional context
I have the same issue.
The integration with LMDeploy currently only works with the TurboMind backend. For models that are not TurboMind-compatible, please use the vLLM or pt backend.