Description
Is your feature request related to a problem? Please describe.
It should be able to run in 24 GB of VRAM, so a 32 GB X Elite would be great.
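A back-of-the-envelope check of the 24 GB figure (weights only; the 8B parameter count is from the model name, and real usage adds KV cache, activations, and runtime overhead):

```python
# Rough VRAM estimate for an 8B-parameter model at common precisions.
# Illustrative only: ignores KV cache, activations, and runtime overhead.
PARAMS = 8e9  # 8 billion parameters (from "Qwen3-8B")

def weight_gb(bits_per_param: float) -> float:
    """Gigabytes needed just to hold the weights."""
    return PARAMS * bits_per_param / 8 / 1e9

for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    print(f"{name}: ~{weight_gb(bits):.0f} GB")
# fp16: ~16 GB, int8: ~8 GB, int4: ~4 GB
```

Even at fp16 the weights fit in 24 GB with headroom, and a quantized (int8/int4) NPU build would leave plenty of room for context.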
Details of model being requested
- Model name: DeepSeek-R1-0528-Qwen3-8B
- Source repo link: https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B/tree/main
- Model use case: Run this model on the NPU of a local Qualcomm X Elite Copilot+ laptop for coding tasks and mathematical/logical problem solving.
Additional context for requested model
Benchmark chart (benchmark.png) from the DeepSeek-R1-0528-Qwen3-8B model card on Hugging Face.
As can be seen at https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B, it might even outperform Gemini 2.5 Pro for this purpose, while running locally.