forked from intel/ipex-llm
Added new functions #11
Merged
Conversation
* add llama3.2 GPU example * change prompt format reference url * update * add Meta-Llama-3.2-1B-Instruct sample output * update wording
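For context, a minimal sketch of the GPU example pattern this commit adds. This follows the usual ipex-llm GPU example flow; the model path, prompt, and generation length are illustrative placeholders, not taken from the committed example.

```python
import torch
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-3.2-1B-Instruct"  # placeholder checkpoint

# Load with 4-bit low-bit optimization and move to the Intel GPU ("xpu") device
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,
    optimize_model=True,
    trust_remote_code=True,
    use_cache=True,
)
model = model.half().to("xpu")

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

prompt = "What is AI?"
with torch.inference_mode():
    input_ids = tokenizer.encode(prompt, return_tensors="pt").to("xpu")
    # The committed example also adds a warmup generation; omitted here for brevity
    output = model.generate(input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```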
* use Qwen2-1.5B-Instruct in demo * update * add reference link * update * update
* [ADD] rewrite new vllm docker quick start * [ADD] lora adapter doc finished * [ADD] multi lora adapter tested successfully * [ADD] add ipex-llm quantization doc * [UPDATE] update mmdocs vllm_docker_quickstart content * [REMOVE] rm tmp file * [UPDATE] tp and pp explanation and readthedoc link change * [FIX] fix the error description of tp+pp and quantization part * [FIX] fix the table of verified models * [UPDATE] add full low bit param list * [UPDATE] update the load_in_low_bit params to verified dtypes
* Release for LNL on Windows * Temp commit for release test * Change option name * Remove temp commit and change option name * temp commit for test again * Remove temp commit
* Update windows guide regarding LNL support * Update based on comments
* first commit to support load dll and init llm pipeline * add init generate * fix style * small updates * fix style and check tokens number
* add npu-level0 pipeline.dll to ipex-llm * test * update runner label * fix * update * fix * fix
* Update oneccl to 0.0.4 * upgrade transformers to 4.44.2
* update * fix style
* qwen2-vl readme * add qwen2-vl example * fix * fix * fix * add link * Update regarding modules_to_not_convert and readme * Further fix * Small fix --------- Co-authored-by: Yuwen Hu <[email protected]>
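As a rough illustration of the modules_to_not_convert change mentioned above: multimodal models typically keep their vision tower out of low-bit conversion. The sketch below uses ipex-llm's generic optimize_model API; the "visual" module name and the exact keyword usage are assumptions for illustration, not taken from the committed Qwen2-VL example.

```python
from transformers import Qwen2VLForConditionalGeneration
from ipex_llm import optimize_model

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", trust_remote_code=True
)

# Quantize the language model to 4-bit but skip the vision encoder,
# which is usually kept in higher precision for accuracy.
model = optimize_model(
    model,
    low_bit="sym_int4",
    modules_to_not_convert=["visual"],  # assumed name of the vision tower module
)
model = model.half().to("xpu")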
* Support release for ARL * Small fix * Small fix to doc * Temp for test * Remove temp commit for test
…#12173) * [ADD] rewrite new vllm docker quick start * [ADD] lora adapter doc finished * [ADD] multi lora adapter tested successfully * [ADD] add ipex-llm quantization doc * [Merge] rebase main * [REMOVE] rm tmp file * [Merge] rebase main * [ADD] add prefix caching experiment and result * [REMOVE] rm cpu offloading chapter
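A minimal sketch of the prefix-caching behaviour the experiment above documents, using the standard vLLM offline API. The model name is a placeholder and any ipex-llm-specific engine flags from the fork are intentionally omitted.

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2-1.5B-Instruct",  # placeholder model
    enable_prefix_caching=True,        # reuse KV cache blocks for shared prompt prefixes
)

shared_prefix = "You are a helpful assistant. Answer concisely.\n\n"
prompts = [
    shared_prefix + "Question: What is AI?",
    shared_prefix + "Question: What is an LLM?",
]

# Requests sharing the same prefix can reuse its cached KV blocks,
# which is what the prefix-caching experiment measures.
outputs = llm.generate(prompts, SamplingParams(max_tokens=32))
for out in outputs:
    print(out.outputs[0].text)
```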
* Create benchmark_util_4_45.py * Update __init__.py * Update lint-python * Update benchmark_util_4_45.py * Update benchmark_util_4_45.py * Create benchmark_util_4_44.py
* Support cpp Windows release for ARL * Temp commit for test * Remove temp commit
* Add Llama 3.2 to iGPU Perf (#12200) * Add Llama 3.2 to iGPU Perf * Downgrade accelerate after step * Temporarily disable model for test * Temporarily change ERRORLEVEL check (#12201) * Restore llama3.2 perf (#12206) * Revert "Temporarily change ERRORLEVEL check" This reverts commit 909dbbc. * Revert "Temporarily disable model for test" This reverts commit 95322dc. --------- Co-authored-by: Jin, Qiao <[email protected]>
* Add ollama_quickstart.zh-CN.md * Update ollama_quickstart.zh-CN.md: add Chinese and English switching * Update ollama_quickstart.md: add Chinese and English switching * Update README.zh-CN.md: modify the related link to ollama_quickstart.zh-CN.md * Update ollama_quickstart.zh-CN.md: modified based on comments * Update ollama_quickstart.zh-CN.md: modified based on comments
…_size=0` (#12282) * Initial support for quantized forward on CPU when quantization_group_size=0 * Style fix * Style fix * Small fix * Small fix
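The user-facing knob behind this change is sketched below. In ipex-llm's NPU loading path, a group size of 0 selects channel-wise (group-free) quantization; the model path is a placeholder and the exact keyword spelling is an assumption based on the commit wording.

```python
from ipex_llm.transformers.npu_model import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B-Instruct",  # placeholder checkpoint
    load_in_low_bit="sym_int4",
    quantization_group_size=0,           # 0 => channel-wise quantization (no grouping)
    trust_remote_code=True,
)
```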
* support save & load, update llama examples * update baichuan2 example * update readme
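A rough sketch of the save & load flow referenced above, using ipex-llm's low-bit save/load helpers. Paths and the model name are placeholders, and whether this particular commit targets the CPU/GPU or NPU path is not clear from the message, so the generic API is shown.

```python
from ipex_llm.transformers import AutoModelForCausalLM

# First run: quantize from the original checkpoint and persist the low-bit weights
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf",  # placeholder checkpoint
    load_in_4bit=True,
    trust_remote_code=True,
)
model.save_low_bit("./llama2-7b-sym-int4")

# Later runs: load the already-quantized weights directly, skipping re-quantization
model = AutoModelForCausalLM.load_low_bit(
    "./llama2-7b-sym-int4", trust_remote_code=True
)
```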
* except lm_head * remove * support gw lm_head * update * fix * remove run.bat * fix style * support llama3 * slice -> split * remove debug * fix style * add dpu
* bugfix for qlora 100 step error * indent fix * annotation fix
* qwen2 gw performance opt * remove debug
* feat: change oneccl * fix: restore llama-70b * fix: remove tab * fix: remove extra blank * small fix * add comments * fix: add a blank space
bitsandbytes multi-backend is now available and is required; otherwise it errors out saying that no CUDA is available
* support qwen pipeline * update error msg * style * meet review * minor
* new codegeex attn * use kv cache * add compress/quantize kv * remove compress/quantize kv * fix style check * fix style * fix codegeex
* fix graphrag quickstart * fix axolotl quickstart * fix ragflow quickstart * fix ragflow quickstart * fix graphrag toc * fix comments * fix comment * fix comments
* prefill use sdp * add param * update * fix style * fix style * meet comments
* update-llava-example * add warmup * small fix on llava example * remove space& extra print prompt * renew example * small fix --------- Co-authored-by: Jinhe Tang <[email protected]>
…nchmark api (#12316) * add `npu_group_size` for `transformers_int4_npu_win` small bugfix * update
Description
1. Why the change?
2. User API changes
3. Summary of the change
4. How to test?
(e.g., 1234). And paste your action link here once it has been successfully finished.
5. New dependencies
- Dependency1
- Dependency2
- ...
- Dependency1 and license1
- Dependency2 and license2
- ...