New commits #1

Merged

merged 239 commits into from Sep 27, 2024

Conversation

SANKHA1
Owner

@SANKHA1 SANKHA1 commented Sep 24, 2024

Description

1. Why the change?

2. User API changes

3. Summary of the change

4. How to test?

  • N/A
  • Unit test: Please manually trigger the PR validation here by entering the PR number (e.g., 1234), and paste your action link here once it has finished successfully.
  • Application test
  • Document test
  • ...

5. New dependencies

  • New Python dependencies
    - Dependency1
    - Dependency2
    - ...
  • New Java/Scala dependencies and their license
    - Dependency1 and license1
    - Dependency2 and license2
    - ...

MeouSker77 and others added 30 commits August 13, 2024 09:51
* Reduce Mistral softmax memory only in low memory mode
…est (#11778)

* add yaml and modify `concat_csv.py` for `transformers` 4.43.1 (#11758)

* add yaml and modify `concat_csv.py` for `transformers` 4.43.1

* remove 4.43 for arc; fix;

* remove 4096-512 for 4.43

* comment some models

* Small fix

* uncomment models (#11777)

---------

Co-authored-by: Ch1y0q <[email protected]>
* deepspeed zero3 QLoRA finetuning

* Update convert.py

* Update low_bit_linear.py

* Update utils.py

* Update qlora_finetune_llama2_13b_arch_2_card.sh

* Update low_bit_linear.py

* Update alpaca_qlora_finetuning.py

* Update low_bit_linear.py

* Update utils.py

* Update convert.py

* Update alpaca_qlora_finetuning.py

* Update alpaca_qlora_finetuning.py

* Update low_bit_linear.py

* Update deepspeed_zero3.json

* Update qlora_finetune_llama2_13b_arch_2_card.sh

* Update low_bit_linear.py

* Update low_bit_linear.py

* Update utils.py

* fix style

* fix style

* Update alpaca_qlora_finetuning.py

* Update qlora_finetune_llama2_13b_arch_2_card.sh

* Update convert.py

* Update low_bit_linear.py

* Update model.py

* Update alpaca_qlora_finetuning.py

* Update low_bit_linear.py

* Update low_bit_linear.py

* Update low_bit_linear.py
* Fix mistral forward_qkv without self.rotary_emb.base in q4_0.
* Replace apply_rotary_pos_emb_no_cache_xpu with rotary_half_inplaced.
* Revert #11765
* fix check error

* fix other models

* remove print
…11785)

* clean up and support transpose value cache

* refine

* fix style

* fix style
* added troubleshoot for sycl not found problem

* added troubleshoot for sycl not found problem

* revision on troubleshoot

* revision on troubleshoot
* fix runtime error

* revert workflow
* Further update prompt for continuation task, and disable lookup candidate update strategy on MTL

* style fix
* Update llm_unit_tests.yml

* remove debug information

* Delete .github/actions/llm/cli-test-windows directory
* Fix performance tests

* Small fix
* transformers==4.37

* added yi model

* added yi model

* xxxx

* delete prompt template

* / and delete
#11811)

* update transformers version for `replit-code-v1-3b`, `internlm2-chat-7b` and mistral

* remove for default transformers version
* fix: delete ipex extension import in ppl wikitext evaluation

* feat: add mixed_precision argument on ppl wikitext evaluation

* fix: delete mix_precision command in perplexity evaluation for wikitext

* fix: remove fp16 mixed-precision argument

* fix: Add a space.

---------

Co-authored-by: Jinhe Tang <[email protected]>
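
The evaluation commits above add (and later adjust) a mixed-precision switch for the perplexity-on-wikitext script. As a rough sketch of what such a script does, here is a minimal strided perplexity loop with an optional autocast toggle; the `--mixed_precision` flag name, model choice, and windowing are illustrative assumptions, not the actual evaluation harness in this PR:

```python
# Hypothetical sketch: perplexity over wikitext with an optional mixed-precision toggle.
# Flag names, model, and striding are assumptions, not this PR's script.
import argparse
import math

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer


def perplexity(model, tokenizer, text, device="cpu", mixed_precision=False, stride=512):
    input_ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    total_nll, total_tokens = 0.0, 0
    for start in range(0, input_ids.size(1) - 1, stride):
        window = input_ids[:, start:start + stride + 1]  # one extra token for the label shift
        with torch.no_grad(), torch.autocast(device_type=device, dtype=torch.bfloat16,
                                             enabled=mixed_precision):
            loss = model(window, labels=window).loss  # mean NLL over the shifted tokens
        n = window.size(1) - 1
        total_nll += loss.float().item() * n
        total_tokens += n
    return math.exp(total_nll / total_tokens)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", default="gpt2")
    parser.add_argument("--mixed_precision", action="store_true")
    args = parser.parse_args()

    tokenizer = AutoTokenizer.from_pretrained(args.model)
    model = AutoModelForCausalLM.from_pretrained(args.model).eval()
    text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
    print(perplexity(model, tokenizer, text, mixed_precision=args.mixed_precision))
```

Each window here is scored independently, so context is truncated at window boundaries; it overstates perplexity slightly but keeps the sketch short.
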
xiangyuT and others added 29 commits September 18, 2024 14:29
* enable vllm load gptq model

* update

* update

* update

* update style
* add nv longbench

* LongBench: NV code to ipex-llm

* amend

* add more models support

* amend

* optimize LongBench's user experience

* amend

* amend

* fix typo

* amend

* remove cuda related information & add a readme

* add license to python scripts & polish the readme

* amend

* amend

---------

Co-authored-by: cyita <[email protected]>
Co-authored-by: ATMxsp01 <[email protected]>
Co-authored-by: leonardozcm <[email protected]>
* add `transpose_value_cache`

* update

* update
* update vllm_online_benchmark script to support long input

* update guide
* Add Qwen2.5 GPU example

* fix end line

* fix description
* add internvl2 example

* add to README.md

* update

* add link to zh-CN readme
* Add mix_precision argument to control whether to use INT8 lm_head for Qwen2-7B-Instruct

* Small fix

* Fix loading low bit with mixed precision

* Small fix

* Update example accordingly

* Update for default prompt

* Update base on comments

* Final fix
* Add Qwen2.5 NPU Example

* fix

* Merge qwen2.py and qwen2.5.py into qwen.py

* Fix description
* add minicpm3 gpu example

* update GPU example

* update

---------

Co-authored-by: Huang, Xinshengzi <[email protected]>
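
Several commits in this batch add GPU examples (Qwen2.5, InternVL2, MiniCPM3) and a mixed-precision option for the Qwen2 lm_head. For orientation, below is a minimal sketch of the low-bit GPU loading pattern that ipex-llm examples of this kind typically follow; the model id, prompt, and generation settings are illustrative, and the exact example scripts are the ones added in the PR:

```python
# Minimal sketch of loading a model in 4-bit with ipex-llm on an Intel GPU (XPU).
# Model id, prompt, and settings are illustrative; see the examples added by this PR
# for the exact scripts. The commits above also add a mixed-precision option for the
# lm_head; check the example's argument list for its exact name and default.
import torch
from transformers import AutoTokenizer
from ipex_llm.transformers import AutoModelForCausalLM

model_path = "Qwen/Qwen2.5-7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    load_in_4bit=True,      # quantize weights to 4-bit at load time
    optimize_model=True,
    trust_remote_code=True,
    use_cache=True,
)
model = model.half().to("xpu")
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

with torch.inference_mode():
    inputs = tokenizer("What is AI?", return_tensors="pt").to("xpu")
    output = model.generate(inputs.input_ids, max_new_tokens=32)
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```
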
@SANKHA1 SANKHA1 merged commit 36df4d9 into SANKHA1:main Sep 27, 2024