New commits #1 (merged)

Forked from intel/ipex-llm. This pull request pulls in the following upstream commits:
* Reduce Mistral softmax memory only in low-memory mode
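The low-memory softmax path itself is not shown in this log. As a rough illustration of the technique such a change relies on, here is a minimal sketch of a softmax computed over blocks of rows to bound peak memory; the function name and chunk size are hypothetical, not ipex-llm's actual kernel.

```python
import torch

def chunked_softmax(scores: torch.Tensor, chunk_size: int = 1024) -> torch.Tensor:
    """Numerically stable softmax over the last dim, a block of rows at a time.

    Illustrative only: real low-memory attention fuses this with the attention
    matmul rather than materializing the full score matrix first.
    """
    flat = scores.reshape(-1, scores.shape[-1])
    out = torch.empty_like(flat)
    for start in range(0, flat.shape[0], chunk_size):
        block = flat[start : start + chunk_size]
        block = block - block.max(dim=-1, keepdim=True).values  # stability shift
        block = block.exp()
        out[start : start + chunk_size] = block / block.sum(dim=-1, keepdim=True)
    return out.view(scores.shape)
```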
* …est (#11778)
  * Add yaml and modify `concat_csv.py` for `transformers` 4.43.1 (#11758)
  * Remove 4.43 for Arc; fix
  * Remove 4096-512 for 4.43
  * Comment out some models; small fix; uncomment models (#11777)

  Co-authored-by: Ch1y0q <[email protected]>
* DeepSpeed ZeRO-3 QLoRA finetuning, with iterative updates to convert.py, low_bit_linear.py, utils.py, model.py, alpaca_qlora_finetuning.py, deepspeed_zero3.json and qlora_finetune_llama2_13b_arch_2_card.sh, plus style fixes
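For context on what the ZeRO-3 QLoRA path wires together, a minimal sketch follows. It assumes ipex-llm's QLoRA helpers in `ipex_llm.transformers.qlora` keep their peft-style interface and that the ZeRO-3 JSON is handed to the trainer through the standard `deepspeed` argument; the checkpoint path, LoRA hyperparameters, and config values are placeholders, not the values used in this PR.

```python
import json

import torch
from peft import LoraConfig
from transformers import TrainingArguments
from ipex_llm.transformers import AutoModelForCausalLM
from ipex_llm.transformers.qlora import get_peft_model, prepare_model_for_kbit_training

# Minimal ZeRO-3 config; a real run would add optimizer and offload settings.
with open("deepspeed_zero3.json", "w") as f:
    json.dump({"zero_optimization": {"stage": 3},
               "bf16": {"enabled": True},
               "train_micro_batch_size_per_gpu": "auto"}, f)

# Load the base model with 4-bit (NF4) weights, the usual QLoRA setup.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",   # placeholder checkpoint
    load_in_low_bit="nf4",
    optimize_model=False,
    torch_dtype=torch.bfloat16,
)
model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=32, lora_dropout=0.05, bias="none",
    target_modules=["q_proj", "k_proj", "v_proj"], task_type="CAUSAL_LM",
))

# TrainingArguments forwards the JSON to DeepSpeed; a transformers.Trainer
# built with `args` then shards optimizer state, gradients, and parameters
# across ranks under ZeRO-3 (launch with the `deepspeed` CLI).
args = TrainingArguments(output_dir="out", deepspeed="deepspeed_zero3.json",
                         per_device_train_batch_size=1, bf16=True)
```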
* Fix Mistral forward_qkv without self.rotary_emb.base in q4_0
  * Replace apply_rotary_pos_emb_no_cache_xpu with rotary_half_inplaced
  * Revert #11765
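`rotary_half_inplaced` itself is an XPU kernel; for reference, this is the standard "rotate half" rotary embedding that such kernels implement, in plain PyTorch. Names follow the common Hugging Face convention rather than ipex-llm internals.

```python
import torch

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Split the head dimension in two and rotate: (x1, x2) -> (-x2, x1).
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rotary_pos_emb(q, k, cos, sin):
    # cos/sin are precomputed from the position ids and the rotary base,
    # the `self.rotary_emb.base` attribute the fix above is concerned with.
    return q * cos + rotate_half(q) * sin, k * cos + rotate_half(k) * sin
```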
* Fix check error; fix other models; remove print
* …11785)
  * Clean up and support transpose value cache
  * Refine; fix style
* Add a troubleshooting entry for the "SYCL not found" problem, with follow-up revisions
* Fix runtime error; revert workflow
* Further update the prompt for the continuation task, and disable the lookup candidate update strategy on MTL; style fix
* Update llm_unit_tests.yml; remove debug information; delete the .github/actions/llm/cli-test-windows directory
* Fix performance tests; small fix

  Co-authored-by: Jinhe Tang <[email protected]>
* Pin transformers==4.37 and add the Yi model; delete the prompt template
* #11811)
  * Update the transformers version for `replit-code-v1-3b`, `internlm2-chat-7b` and Mistral
  * Remove it for the default transformers version
* fix: delete the IPEX extension import in the PPL WikiText evaluation
  * feat: add a mixed_precision argument to the PPL WikiText evaluation
  * fix: delete the mix_precision command-line option from the WikiText perplexity evaluation
  * fix: remove the fp16 mixed-precision argument
  * fix: add a space

  Co-authored-by: Jinhe Tang <[email protected]>
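For reference, this is the kind of sliding-window perplexity loop such a harness runs, written against plain transformers/PyTorch. The window and stride values are illustrative, and this is a sketch of the standard recipe rather than the script changed in this commit.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

def wikitext_ppl(model_id: str, max_length: int = 2048, stride: int = 512) -> float:
    """Sliding-window perplexity on WikiText-2 (standard transformers recipe)."""
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
    ids = tok(text, return_tensors="pt").input_ids
    seq_len, prev_end, nlls = ids.size(1), 0, []
    for begin in range(0, seq_len, stride):
        end = min(begin + max_length, seq_len)
        input_ids = ids[:, begin:end]
        labels = input_ids.clone()
        labels[:, : -(end - prev_end)] = -100  # score only tokens new to this window
        with torch.no_grad():
            nlls.append(model(input_ids, labels=labels).loss)
        prev_end = end
        if end == seq_len:
            break
    return torch.exp(torch.stack(nlls).mean()).item()
```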
* Enable vLLM to load GPTQ models; update style
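At the user level, "load a GPTQ model in vLLM" looks like the upstream vLLM API below; the ipex-llm fork layers its own weight loading underneath, so treat the checkpoint path and flag as assumptions of this sketch rather than a description of the fork's internals.

```python
from vllm import LLM, SamplingParams

# Point vLLM at a GPTQ checkpoint; quantization="gptq" selects the GPTQ loader.
llm = LLM(model="TheBloke/Llama-2-7B-Chat-GPTQ",  # placeholder GPTQ repo
          quantization="gptq")
outputs = llm.generate(["What is IPEX-LLM?"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```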
* Add NV LongBench support
  * Port the NV LongBench code to ipex-llm
  * Add support for more models
  * Optimize LongBench's user experience
  * Fix typos; remove CUDA-related information and add a README
  * Add a license to the Python scripts and polish the README

  Co-authored-by: cyita <[email protected]>
  Co-authored-by: ATMxsp01 <[email protected]>
  Co-authored-by: leonardozcm <[email protected]>
* Add `transpose_value_cache`
* Update the vllm_online_benchmark script to support long inputs; update the guide
* Add Qwen2.5 GPU example; fix end line and description
* Update code; fix
* Add InternVL2 example; add it to README.md; add a link to the zh-CN README
* Add a mix_precision argument to control whether to use an INT8 lm_head for Qwen2-7B-Instruct
  * Fix load_low_bit with mixed precision
  * Update the example and default prompt accordingly; final fixes based on review comments
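A usage-level sketch of the flag being added: load Qwen2-7B-Instruct in 4-bit while keeping the lm_head at INT8. Whether the keyword is spelled `mix_precision` or `mixed_precision`, and exactly where it is exposed, should be checked against the current ipex-llm API; it is an assumption here.

```python
from ipex_llm.transformers import AutoModelForCausalLM

# Hedged sketch: 4-bit weights overall, higher-precision (INT8) lm_head.
# The `mixed_precision` keyword is an assumption; see the ipex-llm docs.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-7B-Instruct",
    load_in_4bit=True,
    mixed_precision=True,
    trust_remote_code=True,
)
```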
* Add Qwen2.5 NPU example; merge qwen2.py and qwen2.5.py into qwen.py; fix description
* Add MiniCPM3 GPU example; update the GPU example

  Co-authored-by: Huang, Xinshengzi <[email protected]>
Description

1. Why the change?
2. User API changes
3. Summary of the change
4. How to test?
   - Unit test: manually trigger the PR validation by inputting the PR number (e.g., 1234), and paste your action link here once it has been successfully finished.
5. New dependencies
   - New Python dependencies:
     - Dependency1
     - Dependency2
     - ...
   - New Java/Scala dependencies and their licenses:
     - Dependency1 and license1
     - Dependency2 and license2
     - ...